Delinea has warned that non-human identities, including artificial intelligence agents, are emerging as a major security risk for large organisations.
The warning comes as security vendors and experts mark Identity Management Day by highlighting weaknesses in how companies govern digital identities.
Vendors and security leaders increasingly describe identity as the central control layer for enterprise security. They argue this now applies across a growing mix of on-premises systems, cloud services, software-as-a-service applications and AI-driven automation.
Art Gilliland, chief executive officer of Delinea, said many organisations underestimate the risk of treating AI software as ordinary tools rather than powerful users with elevated access.
“Identity doesn’t stop at people. Non-human identities, particularly AI agents, are quickly becoming one of the biggest sources of enterprise risk. Despite 87% of organizations claiming they’re ready for AI-driven automation at scale, nearly half admit their identity governance for AI systems falls short.
The problem is relatively simple, but often overlooked: teams are still treating AI agents as tools, when they actually behave like privileged users. This creates the ‘AI security paradox’, where organizations are scaling their AI initiatives faster than they control which identities get access to what. Dangerous blind spots can form as a result, hiding unchecked privilege, quiet access paths, and little accountability for actions.
The pressure to move fast on AI is real, but so is the need to lock down identities. As AI agents continue to multiply across enterprise environments, identity can’t be viewed as just another part of security; it must be treated as the overarching control plane.”
Over the past decade, security teams have strengthened controls on human users such as employees and contractors. They now face a surge in identities tied to applications, service accounts, machine identities and AI agents that connect across multiple platforms.
Analysts say many organisations still lack a consistent framework spanning both human and non-human identities. Responsibility for AI projects also often sits with business units or data science teams, while identity governance remains with central IT or security teams.
That separation can create gaps in policy and oversight. It can also delay the detection of inappropriate access by automated systems that operate at far greater speed and scale than human users.
The tension between rapid AI adoption and security control is a recurring theme among security leaders. They describe a pattern in which business units deploy automation, integrate it with sensitive systems and extend access, while central governance and monitoring lag behind.
Delinea argues that AI systems should be treated like privileged users: accounts that can change configurations, move data or access sensitive resources. Security teams typically apply strict controls to such access, including logging, approval workflows and least-privilege policies.
When organisations fail to treat AI agents as privileged identities, those controls can weaken. That failure can also make forensic investigations more complex when something goes wrong, because there is no reliable record of what an agent did or why.
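The privileged-user model Delinea describes, deny by default, least privilege and an audit trail for every decision, can be illustrated with a minimal sketch. The `AgentIdentity` class, the `authorize` function and the `report-bot` example below are hypothetical names invented for illustration, not part of any Delinea product or standard API.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("identity-audit")

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity with an explicit, least-privilege scope."""
    name: str
    allowed_actions: frozenset = field(default_factory=frozenset)

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Deny by default; log every decision so forensics has a trail."""
    permitted = action in agent.allowed_actions
    log.info("agent=%s action=%s permitted=%s", agent.name, action, permitted)
    return permitted

# Example: a report-generating agent may read sales data,
# but any attempt to change configuration is denied and logged.
report_bot = AgentIdentity("report-bot", frozenset({"read:sales_data"}))
```

The point of the sketch is the default: an AI agent starts with no access, every entitlement is granted explicitly, and every allow or deny decision leaves a log entry that an investigator can replay later.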
Cameron Matthews, chief information security officer at Radiant Logic, said fragmented identity data across large environments remains a structural problem for many enterprises.
“Identity Management Day is a timely reminder that identity has become the primary control plane for modern security, especially as organizations expand across cloud, SaaS, and now AI-driven environments. The challenge is that most enterprises are still operating with fragmented identity data, making it difficult to see who has access to what, and whether that access is appropriate or risky.
This lack of visibility creates blind spots that attackers increasingly exploit, particularly as non-human identities and automated processes multiply. To address this, organizations need to move beyond static identity governance and embrace continuous identity observability that provides real-time insight into access, behavior, and risk. Ultimately, treating identity as a dynamic, data-driven layer of security is imperative to enable Zero Trust to function as intended in today’s environment.”
Both Gilliland and Matthews identify blind spots as a central concern. In many environments, security teams cannot easily answer basic questions about which identities exist, what access they hold and how that access changes over time.
Identity specialists argue that the problem worsens as organisations move deeper into multi-cloud and AI-driven architectures. Each new platform and automation layer introduces its own credentials, tokens and service accounts.
Zero Trust strategies place identity and access decisions at the centre of enterprise security design. Their success depends on accurate, real-time insight into identities and entitlements.
Vendors are also promoting concepts such as continuous identity observability and unified identity data layers as foundations for reducing risk from both human and non-human users.
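Continuous identity observability, as the vendors describe it, amounts to routinely comparing what each identity is entitled to do against what it actually does. A minimal sketch of that comparison follows; the `flag_risky_identities` function, its field names and the 30-day staleness threshold are illustrative assumptions, not a reference to any particular product.

```python
from datetime import datetime, timedelta, timezone

def flag_risky_identities(identities, now=None, stale_after=timedelta(days=30)):
    """Flag identities holding unused entitlements or gone quiet.

    `identities` is a list of dicts with hypothetical fields:
      name       - identity name (human or non-human)
      granted    - set of permissions the identity holds
      used       - set of permissions actually exercised
      last_seen  - datetime of the identity's last activity
    """
    now = now or datetime.now(timezone.utc)
    findings = []
    for ident in identities:
        # Entitlements granted but never exercised: least-privilege drift.
        unused = ident["granted"] - ident["used"]
        if unused:
            findings.append((ident["name"], "unused-privilege", sorted(unused)))
        # Accounts with no recent activity: candidates for removal.
        if now - ident["last_seen"] > stale_after:
            findings.append((ident["name"], "stale-identity", []))
    return findings
```

Run continuously rather than at quarterly certification time, this kind of check is what distinguishes the "dynamic, data-driven" governance Matthews describes from static, periodic reviews.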
Identity Management Day has increasingly become a focal point for these discussions. Security leaders use the occasion to encourage internal reviews of account hygiene, privilege design and monitoring for suspicious behaviour.
The rise of AI agents across business workflows is adding urgency to those reviews. Identity specialists warn that traditional static governance models, built around infrequent certifications and manual reviews, no longer match the pace of automated activity in many enterprises.
Gilliland and Matthews both frame identity as the organising layer for modern security strategies rather than a supporting function. They also agree that non-human identities and AI-driven automation now sit at the centre of that discussion.
