AUSTIN, Texas – HiddenLayer, a leading artificial intelligence security company protecting enterprises from adversarial machine learning and emerging AI-driven threats, has released its 2026 AI Threat Landscape Report.
The report analyzes the most pressing risks facing organizations as AI systems evolve from assistive tools to autonomous agents capable of independent action. Based on a survey of 250 IT and security leaders, it reveals a growing tension at the heart of enterprise AI adoption: organizations are embedding AI deeper into critical operations while simultaneously expanding their exposure to entirely new security risks.
While agentic AI remains in the early stages of enterprise deployment, the risks are already materializing. According to the report, 1 in 8 companies reported an AI breach linked to agentic systems, signaling that security frameworks and governance controls are struggling to keep pace with AI’s rapid evolution.
As these systems gain the ability to browse the web, execute code, access tools and carry out multistep workflows, their autonomy introduces new vectors for exploitation and real-world system compromise.
“Agentic AI has evolved faster in the past 12 months than most enterprise security programs have in the past five years,” said Chris Sestito, CEO and co-founder of HiddenLayer. “It’s also what makes them risky. The more authority you give these systems, the more reach they have and the more damage they can cause if compromised. Security has to evolve without limiting the very autonomy that makes these systems valuable.”
Other findings in the report include:
- AI supply chain exposure is widening: Malware hidden in public model and code repositories emerged as the most cited source of AI-related breaches (35%). Yet 93% of respondents continue to rely on open repositories for innovation, revealing a trade-off between speed and security.
- Visibility and transparency gaps persist: Nearly a third (31%) of organizations do not know whether they experienced an AI security breach in the past 12 months. Although 85% support mandatory breach disclosure, more than half (53%) admit they have withheld breach reporting due to fear of backlash, underscoring a widening gap between transparency advocacy and real-world behavior.
- Shadow AI is accelerating across enterprises: More than 3 in 4 (76%) organizations now cite shadow AI as a definite or probable problem, up from 61% in 2025, a 15-point year-over-year increase and one of the largest shifts in the dataset. Yet only one-third (34%) of organizations partner externally for AI threat detection, indicating that awareness is accelerating faster than governance and detection mechanisms.
- Ownership and investment remain misaligned: While many organizations recognize AI security risks, internal responsibility remains unclear, with 73% reporting internal conflict over ownership of AI security controls. Additionally, while 91% of organizations added AI security budgets for 2025, more than 40% allocated less than 10% of their budget to AI security.
“One of the clearest signals in this year’s research is how fast AI has evolved from simple chat interfaces to fully agentic systems capable of autonomous action,” said Marta Janus, principal security researcher at HiddenLayer. “As soon as agents can browse the web, execute code and trigger real-world workflows, prompt injection is no longer just a model flaw. It becomes an operational security risk with direct paths to system compromise. The rise of agentic AI fundamentally changes the threat model, and most enterprise controls were not designed for software that can think, decide and act on its own.”
According to the report, three major shifts have expanded both the power and the risk of enterprise AI deployments over the past year:
- Agentic AI systems moved rapidly from experimentation to production in 2025. These agents can browse the web, execute code, access files and interact with other agents – transforming prompt injection, supply chain attacks and misconfigurations into pathways for real-world system compromise.
- Reasoning and self-improving models have become mainstream, enabling AI systems to autonomously plan, reflect and make complex decisions. While this improves accuracy and utility, it also increases the potential blast radius of compromise, as a single manipulated model can influence downstream systems at scale.
- Smaller, highly specialized “edge” AI models are increasingly deployed on devices, vehicles and critical infrastructure, shifting AI execution away from centralized cloud controls. This decentralization introduces new security blind spots, particularly in regulated and safety-critical environments.
The report found that security controls, authentication and monitoring have not kept pace with this growth, leaving many organizations exposed by default.
The full report is available HERE.