AI Security Incidents: 30% of Organisations at Risk, Finds Sprinto Report | ETCISO


Nearly one in three organisations experienced a major AI-related security incident in the past 12 months, even as enterprises become more aware of AI-related risks and regulatory requirements, according to the Sprinto CISO Pulse Check.

The report, based on responses from 103 CISOs and security leaders, highlights a growing gap between AI adoption and enterprise readiness. While nearly 70% of organisations are actively tracking AI-related regulations, many continue to face challenges in enforcing policies, monitoring usage, and building scalable controls for emerging AI risks.

AI risk is now firmly on the enterprise agenda. More than half of the surveyed organisations have elevated AI into a dedicated risk category, signalling a shift away from treating it only as an extension of existing cybersecurity or data protection programmes. However, nearly 45% still manage AI risks under broader areas such as data security or third-party risk, suggesting that governance models are still evolving.

The report notes that regulatory expectations, internal risk assessments, and customer security reviews are among the key drivers pushing enterprises to act. However, only 15% of respondents cited board or executive leadership pressure as the primary driver, indicating that urgency at the leadership level is still developing.

Despite rising awareness, AI-related incidents are already affecting organisations. Respondents reported risks such as shadow AI usage, data leakage, model inversion, API abuse, unauthorised access, and data poisoning. These incidents reflect the growing complexity of securing AI systems, especially as employees and business teams adopt AI tools faster than internal controls can keep pace.

The report suggests that the most significant gap lies not in awareness, but in execution. Around 30% of organisations said they are less prepared for AI risks than for traditional security risks. Two in three organisations take weeks to months to implement controls or policy changes, while 39% have AI usage policies that are not consistently enforced.

Monitoring also remains a weak area. Only 22% of organisations reported largely automated AI risk monitoring, with most still relying on semi-automated or manual processes. This limits response speed and makes it difficult for security teams to manage AI governance at scale.

Sensitive data exposure through public AI tools emerged as one of the biggest concerns for CISOs. Shadow AI usage continues to be a common source of incidents, often driven by easy access to publicly available tools and limited enforcement mechanisms.

At the same time, only 21% of organisations said they have controls in place to prevent sensitive data from being uploaded to public AI platforms. This creates a major risk, as once sensitive information is shared externally, it may be difficult to track, control, or reverse its exposure.

The report also finds that AI governance maturity remains uneven. Only 25% of organisations reported advanced maturity, while most continue to operate in developing or early stages. Many enterprises have introduced policies, but enforcement, monitoring, and cross-functional ownership remain limited.

However, AI governance is beginning to attract dedicated investment. According to the report, 69% of organisations have already allocated budgets for AI risk mitigation in 2026, while another 17% plan to do so in the next budget cycle.

Over the next 12 months, enterprises are expected to focus on implementing technical controls, conducting AI risk assessments, and training employees on safe AI usage. These priorities suggest that AI governance is moving from a compliance-led discussion to a more operational security priority.

The report underlines a fundamental mismatch: AI adoption is fast, decentralised, and continuous, while governance in many organisations remains slow and fragmented. To address this, enterprises will need to move beyond static policies and manual processes.

AI risk management, the report suggests, must become continuous, embedded into workflows, and adaptive to new tools and use cases. In AI-led environments, governance is no longer only about control. It is about keeping pace with how AI is being adopted across the enterprise.

“AI has moved faster than most organisations were prepared for. What began as a productivity advantage has quickly become a governance challenge. The companies that win in 2026 will not be the ones adopting AI fastest, but the ones building trust, control, and resilience at the same speed,” said Raghuveer Kancherla, Co-founder, Sprinto.

The findings point to a clear enterprise challenge: organisations understand AI risk, but many are not yet equipped to manage it at scale. As AI becomes more deeply embedded into business workflows, governance will play a defining role in how enterprises balance innovation, speed, compliance, and security.

  • Published On Apr 24, 2026 at 08:26 AM IST
