IBM Highlights Security Gaps In Emerging Agentic AI Systems


IBM is highlighting significant security vulnerabilities in the rapidly developing field of agentic artificial intelligence, a concern underscored at last week’s RSA cybersecurity conference, which drew more than 43,000 attendees. Although hundreds of vendors showcased agentic AI security solutions, a comprehensive approach to securing these dynamic systems was largely absent; Suja Viswesan, Vice President for Security Products at IBM, observed that very few vendors spoke of end-to-end solutions. The gap in holistic security is particularly concerning because AI agents, unlike static code, change behavior at runtime, opening new avenues for attack. Bob Kalka, Global Lead for Security Sales at IBM, said the entire conference felt focused on agentic AI, signaling the urgency for organizations to address these emerging threats. The stakes are rising: a new IBM Institute for Business Value study indicates that companies with a coordinated, multi-agent strategy expect a 42% higher ROI than those without one.

RSA Conference Highlights Agentic AI Security Gaps

The RSA Conference signaled a shift in cybersecurity priorities: agentic AI dominated discussions on the expo floor, according to industry observers. Despite the widespread attention, a cohesive strategy for securing these complex systems remained elusive, with end-to-end orchestration conspicuously absent from many presentations. Tools like IBM Verify and HashiCorp Vault are emerging as potential components of a more holistic strategy, addressing both human and agentic identity security. Recent data underscores the need for robust agent security: the latest Cost of a Data Breach report reveals that 97% of organizations experiencing AI-related incidents lacked dedicated AI access controls.

Jake Lundberg, HashiCorp Field CTO, highlighted a common challenge he encounters with clients: many do not have a clear understanding of the scope of their identities, and they struggle to verify that those identities are behaving as expected. He advocates “ring-fencing” identities and workflows, especially in highly regulated sectors like finance and healthcare, where a single compromise can be rapidly amplified by autonomous AI agents. He acknowledged, however, that some companies are taking a more permissive approach to deployment.
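The ring-fencing idea above can be sketched in a few lines of code: the agent receives a short-lived credential scoped to an explicit allowlist of resources, and any request outside that fence, or after the credential expires, is denied. This is an illustrative sketch only; the agent name, resource names, and policy logic are hypothetical and do not reflect a real IBM Verify or HashiCorp Vault API.

```python
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class AgentCredential:
    """A short-lived, narrowly scoped credential for one AI agent identity."""
    agent_id: str
    allowed_resources: frozenset          # the "fence": everything else is denied
    token: str = field(default_factory=lambda: secrets.token_hex(16))
    issued_at: float = field(default_factory=time.time)
    ttl_seconds: float = 300.0            # short-lived by design, to limit blast radius

    def is_expired(self) -> bool:
        return time.time() - self.issued_at > self.ttl_seconds


def authorize(cred: AgentCredential, resource: str) -> bool:
    """Allow access only inside the fence and only while the credential is fresh."""
    return not cred.is_expired() and resource in cred.allowed_resources


# Issue a credential fenced to exactly two resources (names are hypothetical).
cred = AgentCredential("claims-agent-01", frozenset({"claims-db", "audit-log"}))
print(authorize(cred, "claims-db"))    # inside the fence -> True
print(authorize(cred, "payroll-db"))   # outside the fence -> False
```

Scoping each agent to a deny-by-default allowlist and a short TTL means a single compromised agent cannot roam the environment, and rotating or revoking its identity is as simple as issuing a new credential, the "quickly stand up and change identities" capability discussed below.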

97% of Organizations Lack AI-Specific Access Controls

The RSA cybersecurity conference, which recently exceeded pre-pandemic attendance with over 43,000 participants, highlighted a growing concern: the security of increasingly autonomous AI agents. The finding that 97% of organizations hit by AI-related incidents lacked dedicated AI access controls underscores a fundamental mismatch between the rapid deployment of AI and the maturity of the security protocols meant to govern it, a significant vulnerability as agents dynamically alter behavior and collaborate across networks. Experts emphasize that traditional identity and access management frameworks are ill-equipped for the unique challenges posed by AI agents, which represent a new kind of identity requiring specialized oversight. Lundberg noted that end-to-end orchestration for securing these agentic systems was not a major theme at RSA, with most vendors offering isolated solutions rather than comprehensive platforms. The fundamental pieces needed to protect an environment, he said, are the ability to quickly stand up and change identities when a problem arises.

“Almost every one of hundreds of vendors on the expo floor were talking about agentic AI security.”

Bob Kalka, Global Lead for Security Sales at IBM


