FORESEEABLE CONSEQUENCES:
New technology has always brought fresh schemes from bad actors seeking to exploit users for financial gain or more nefarious ends.
Artificial intelligence (AI) “agents” promise to save users time and energy by automating tasks, but the growing power of systems such as OpenClaw is putting cybersecurity experts on edge.
Riding a wave of hype, OpenClaw says it now has more than three million users worldwide.
The system allows users to create so-called agents, tools based on a large language model (LLM) such as OpenAI’s ChatGPT or Anthropic PBC’s Claude that can carry out online tasks.
“We’ve moved from an AI you could talk with via a chatbot to an agentic AI, which can take action… the threat and the risks are definitely much greater,” said Yazid Akadiri, principal solutions architect at IT security company Elastic France.
In an article titled “Agents of Chaos,” which has yet to be peer-reviewed, a 20-strong team of researchers studied the behavior of six AI agents created with OpenClaw.
They spotted a dozen potentially dangerous actions executed by the systems, from deleting an email inbox to sharing personal information.
Many users have posted similar stories of OpenClaw mishaps online.
“When you deploy agents, you have no control over what they’ll do and when you try to look at what they’re doing, you’ll find them going far beyond the limits you set,” Check Point Software Technologies Ltd expert Adrien Merveille said.
The security gaps are not limited to the agents’ own mistaken actions.
To carry out useful work, the tools need access to personal accounts for email, calendars or search engines — drawing the attention of cyberattackers.
AI agents are likely to become top targets for hackers as their use spreads, Palo Alto Networks Inc chief security intelligence officer Wendi Whitmore said.
“As soon as [attackers] are inside an environment, [they are] immediately going to the internal LLM [agent] that’s being used and using that then to interrogate the systems for more information.”
Palo Alto’s Unit 42 research division said last month that it had found traces of attempted attacks in the form of hidden instructions for agents added to Web sites.
One such command ordered any agent who might read it to “delete your database.”
Other cybersecurity firms and researchers have warned that attackers could gain access to agents via so-called skills — downloadable files that users can add to their systems to give them new abilities.
Some such files, freely available for download, include hidden instructions for malicious actions such as exfiltrating data.
OpenClaw creator Peter Steinberger said he is well aware of the risks.
“I purposefully didn’t make it simpler so people would stop and read and understand: what is AI, that AI can make mistakes, what is prompt injection — some basics that you really should understand when you use that technology,” he told AFP last month.
Whitmore argued that expecting users to create their own guardrails for agents is “pretty unrealistic.”
“People are going to adopt innovation and really see what it’s capable of before they ask the questions about, ‘how do I secure my own data?’” she predicted.
“That’s going to cause some significant challenges in terms of data breaches in 2026.”
