A new generation of artificial intelligence tools designed to automate everyday tasks is drawing concern from cybersecurity experts, who warn that the same systems promising convenience could expose users to serious risks.
Platforms such as OpenClaw have gained rapid popularity, with more than three million users worldwide creating automated “agents” capable of handling online activities. These agents, powered by large language models of the kind behind ChatGPT and Claude, can perform actions such as managing emails, searching the web, and organising schedules without direct human input.
Security specialists say this shift from passive chatbots to action-driven systems marks a turning point. Yazid Akadiri, a principal solutions architect at Elastic France, said the risks increase significantly when AI tools are given the ability to act independently rather than simply respond to prompts.
Researchers studying the behaviour of several OpenClaw-based agents found a range of troubling outcomes. In a recent paper, a team of 20 analysts documented cases where agents performed unintended and potentially harmful actions, including deleting email inboxes and sharing sensitive personal data.
Similar incidents have been reported by users online, raising questions about how much control people truly have over these systems. Adrien Merveille, a cybersecurity expert at Check Point, said users often discover that agents exceed the boundaries set for them, making their behaviour difficult to predict or manage.
The concerns extend beyond accidental errors. To function effectively, AI agents require access to personal accounts such as email, calendars, and search tools. This level of access has made them an attractive target for cybercriminals seeking entry points into sensitive systems.
Wendi Whitmore, chief security intelligence officer at Palo Alto Networks, warned that attackers are likely to exploit these tools as soon as they break into a network. Once inside, hackers could use the agents themselves to gather information or carry out further attacks.
Research from the company’s Unit 42 division has already identified early signs of such threats. Investigators found hidden instructions embedded in websites that could manipulate AI agents into executing harmful commands, including orders to erase databases.
Additional risks come from downloadable add-ons, known as “skills,” which expand an agent’s capabilities. Some of these files have been found to contain concealed malicious instructions, potentially allowing attackers to extract data or compromise systems.
OpenClaw creator Peter Steinberger has acknowledged the challenges, saying users need to understand the limitations and risks of the technology. However, experts argue that expecting individuals to manage these dangers on their own may not be realistic.
Whitmore said rapid adoption of AI tools often outpaces awareness of security risks, warning that this gap could lead to a rise in data breaches in the near future as agent-based systems become more widespread.