Rubrik has published survey findings that point to widening security gaps as companies adopt AI agents. The research is based on responses from more than 1,600 IT and security leaders.
The figures suggest many organisations are deploying autonomous systems faster than they can monitor or control them. Rubrik Zero Labs found that 86 per cent of respondents expect AI agents to outpace their organisation’s security guardrails within the next year.
Visibility appears limited. Only 23 per cent of respondents said they had full visibility into the agents operating in their environments, though the report noted that figure may itself be overstated.
The data also suggests many companies are not yet seeing the operational gains they expected. More than 80 per cent of respondents said the manual oversight AI agents require outweighs the efficiency they deliver.
Recovery is another concern. The survey found that 88 per cent of respondents lacked the ability to roll back agent actions without disrupting systems, while nearly nine in ten were worried about meeting recovery objectives as agent-driven threats increase.
Identity risks
A central issue is identity governance. Non-human identities linked to agents are growing faster than many organisations can track or govern, creating what researchers described as a shadow workforce inside corporate systems.
These identities can carry persistent access with limited oversight, increasing the risk of misuse, compromise and lateral movement across networks. As AI agents begin making decisions, taking actions and interacting with sensitive data, the inability to secure those identities becomes a broader operational problem.
The survey also points to a changing threat landscape. Nearly half of respondents expect agentic systems to drive most attacks in the coming year, reflecting concern that attackers will use autonomous tools to increase the speed and scale of operations.
That marks a shift for security teams already balancing their own use of AI with the prospect of adversaries deploying similar systems. Autonomous tools can shorten response windows and make it harder to distinguish between insider risk and external compromise.
Control and recovery
The findings place control and resilience at the centre of corporate AI planning. Rather than treating AI deployment as a standalone technology project, the report argues that organisations need to tie adoption decisions to governance, recovery and operational safety.
Rubrik’s research combines survey data with technical analysis of attack paths across the tool, cognitive and identity layers of AI systems. The broader issue is no longer limited to breach prevention, but extends to maintaining control over systems that can act without waiting for human input.
The challenge is emerging as boards and senior executives push for wider AI adoption in customer service, software development, internal operations and security workflows. The survey suggests many companies are still building the basic oversight needed to understand what these systems are doing and how to reverse actions when something goes wrong.
One implication is that governance frameworks may need to catch up quickly with technical deployment. If organisations cannot see all active agents, track associated identities or safely undo actions, the gap between adoption and control could widen as use cases expand.
Industry debate over AI risk has often focused on model behaviour, data use and regulation. Rubrik’s findings place greater emphasis on operational questions inside companies: who or what has access, what an agent is authorised to do, and how quickly a business can recover if an automated process fails or is compromised.
Kavitha Mariappan, chief transformation officer at Rubrik, said companies are struggling because they have moved ahead with systems they cannot fully observe, govern or restore. “AI adoption is outpacing our ability to control it. Enterprises are struggling because they’ve deployed systems they can’t fully observe, govern, or restore,” she said.
A view from the customer side framed identity checks as a prerequisite for broader use. “Identity verification is the fundamental underpinning that will allow us to get the greatest automation benefits of AI without imposing human bottlenecks. Verification and visibility are prerequisites for sound, secure agentic implementation,” Ramirez said.
Mariappan said the issue now extends beyond a general discussion of AI risk. “We have to move past the debate of whether AI is risky and address the harder reality: as decision-making shifts from human to machine, the critical challenge for every leader is maintaining operational safety in an increasingly autonomous landscape.”
