Why Data Trust Is Key to AI Success


CISO Insights Reveal Gaps Between AI Adoption Speed and Data Security Maturity


April 14, 2026    


There’s a visible disconnect between organizations’ rapid implementation of artificial intelligence and the foundational data security needed to support it.


MIND’s latest research, “The Impact of Data Trust on AI Success,” found that 90% of enterprises are already running generative AI at scale. But 65% of the CISOs overseeing these deployments lack confidence in the efficacy of their current AI data security controls, and only one in five AI initiatives is achieving its intended key performance indicators. The speed of technological deployment is outpacing the maturity of the underlying security infrastructure.

Drawing on surveys and interviews with more than 100 CISOs, the research identifies seven interconnected insights that explain why a lack of data trust is a primary factor stalling AI programs or introducing risk into them.

Insight 1: There Is a Wide Gap Between Visibility and Enforcement

Most organizations have established AI policies, governance frameworks, acceptable use policies and AI councils. But the critical gap lies in effectively enforcing these policies at machine speed.

Governance is not the issue. The rules are already defined. The real issue is the lack of technical mechanisms needed to enforce them at the pace of business.

This lack of enforcement is a significant security risk:

  • 70% of CISOs struggle to enforce policies on gen AI tools;
  • 66% of CISOs are unable to enforce policies on AI agents;
  • 98% of CISOs are facing at least one AI security challenge.

Ultimately, governance without enforcement is just documentation. This distinction is vital because organizations that can’t enforce their policies are exposed while operating under the false belief that they’re protected.
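
As a rough illustration of what enforcement at machine speed can look like, here is a minimal Python sketch. It assumes a hypothetical setup in which documents carry classification labels and a hard-coded allow-list stands in for a real policy engine; the names (Document, enforce_policy, send_to_genai) and labels are illustrative, not drawn from the report.

```python
from dataclasses import dataclass

# Hypothetical classification labels permitted for gen AI use. A real deployment
# would pull labels and rules from a data security platform, not hard-code them.
ALLOWED_LABELS = {"public", "internal"}


@dataclass
class Document:
    name: str
    classification: str  # e.g. "public", "internal", "confidential"


def enforce_policy(doc: Document) -> bool:
    """Return True only if the document's label permits gen AI use."""
    return doc.classification in ALLOWED_LABELS


def send_to_genai(doc: Document) -> str:
    # The check runs inline, before any content leaves the boundary, so the
    # written policy is applied automatically rather than audited after the fact.
    if not enforce_policy(doc):
        raise PermissionError(f"{doc.name} is blocked by AI data policy")
    return f"[stub] prompt built from {doc.name}"  # placeholder for a real model call


if __name__ == "__main__":
    print(send_to_genai(Document("faq.md", "public")))
    try:
        send_to_genai(Document("payroll.xlsx", "confidential"))
    except PermissionError as err:
        print(err)
```

The point of the sketch is the placement of the check, not the specifics: enforcement sits in the request path itself, which is what turns a written policy into a working control.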

Insight 2: Data Fundamentals Are Shaky and Impede AI Initiatives

For years, inadequate data security was manageable because no single system was capable of scanning all data at once. Unclassified files, overly shared folders and ungoverned data remained out of sight. The advent of AI has completely eliminated any protection that security by obscurity may have provided.

The instant an AI tool connects to a data source, it accesses everything within its reach, regardless of the information’s sensitivity or original purpose. The accumulated data debt from years of neglect immediately becomes visible, accessible and vulnerable to exploitation. Sixty-five percent of CISOs are unaware of what data is being used as input for AI, and 68% don’t know what data their AI agents are accessing.

“The core of the problem with advancing quickly with AI is that nobody’s data was ready,” said Janet Heins, CISO at ChenMed.

Years of data sprawl can’t be fixed in a single rapid effort. But every day an AI tool operates on an unclassified, ungoverned data estate, it exposes risks that are currently invisible and unmanaged.
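
To make the idea of surfacing data debt concrete, the sketch below assumes hypothetical inventory metadata (paths, classification labels and sharing lists) that a real deployment would pull from the storage platform’s APIs; the audit_for_ai_connector function and its sharing threshold are illustrative assumptions, not part of MIND’s research.

```python
from dataclasses import dataclass, field


@dataclass
class FileRecord:
    path: str
    classification: str | None          # None means the file was never classified
    shared_with: set[str] = field(default_factory=set)


def audit_for_ai_connector(files: list[FileRecord], max_sharing: int = 25) -> list[str]:
    """Flag files that should be remediated before an AI tool is allowed to index them."""
    findings = []
    for f in files:
        if f.classification is None:
            findings.append(f"UNCLASSIFIED: {f.path}")
        if len(f.shared_with) > max_sharing:
            findings.append(f"OVERSHARED ({len(f.shared_with)} principals): {f.path}")
    return findings


if __name__ == "__main__":
    estate = [
        FileRecord("hr/salaries.xlsx", None, shared_with={f"user{i}" for i in range(40)}),
        FileRecord("marketing/brief.docx", "internal", shared_with={"team-marketing"}),
    ]
    for finding in audit_for_ai_connector(estate):
        print(finding)
```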

Insight 3: AI Doesn’t Behave Like a Human

Current enterprise security models were built to manage human behavior. Humans operate at a manageable pace; they can be trained, audited and held accountable, and even with broad permissions they naturally exercise judgment about what data to share.

AI agents inherit those same permissions but lack that judgment, so they access data well beyond what is relevant to the task at hand. Alarmingly, 90% of organizations have granted broad data access to enterprise gen AI, yet 68% can’t determine what data their agents are accessing. Compounding this risk, 32% already have unknown agents operating within their environments.

The core issue is a structural mismatch: Security frameworks designed for human actions can’t natively address AI behavior. This fundamental disparity makes data trust a foundational layer, rather than an option for the AI era.
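
One way to picture the mismatch is a permission broker that refuses to hand an agent its owner’s full grants. The sketch below assumes a hypothetical task registry that declares which scopes each agent task actually needs; the scope names and the effective_scopes helper are illustrative only.

```python
# Broad permissions granted to the human owner, and a hypothetical registry of
# what each agent task actually needs. Neither set is from the report.
OWNER_GRANTS = {"crm:read", "finance:read", "hr:read", "email:send"}
TASK_SCOPES = {"summarize_pipeline": {"crm:read"}}


def effective_scopes(task: str) -> set[str]:
    """Grant the agent only the intersection of owner permissions and task needs."""
    return OWNER_GRANTS & TASK_SCOPES.get(task, set())


def agent_can_access(task: str, required_scope: str) -> bool:
    return required_scope in effective_scopes(task)


if __name__ == "__main__":
    print(agent_can_access("summarize_pipeline", "crm:read"))   # True: needed for the task
    print(agent_can_access("summarize_pipeline", "hr:read"))    # False: inherited but not needed
```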

Insight 4: AI Initiatives Are Already Failing Due to a Lack of Data Trust

Most AI initiatives are failing to achieve their KPIs, and the core issue, as repeatedly highlighted in qualitative interviews, isn’t the AI model itself but the poor state of the underlying data. An unstable data foundation, marked by incomplete classification, unscanned storage and ungoverned access, leads to unreliable outputs and undetected failures.

This problem is exacerbated by a “measurement gap.” Most organizations focus on tracking AI utilization, such as tokens processed and prompts submitted, rather than AI outcomes. These metrics quantify activity, not the actual delivery of reliable business value. Without defining outcome-based KPIs before deployment, failures remain invisible, allowing them to escalate.

Insight 5: CISOs Want Early AI Initiative Involvement

AI adoption is universally business-driven, with CEOs, COOs and business unit leaders setting the pace. CISOs fully support this direction but face a unique challenge: AI risk cannot be easily quantified as a dollar figure or a probability percentage. Instead, it resides in system behaviors, data flows and emergent outputs that most executives are not equipped to evaluate directly.

“CISOs should advise on risk, not accept it. Risk ownership sits with the business,” said Tammy Klotz, CISO at a global chemical manufacturing company.

CISOs who reported the strongest outcomes in this research shared a common approach. They positioned security not as a barrier to AI adoption but as the function that establishes the conditions for fast and confident adoption. When this framing is successful, security is involved early in program design. When it fails, business units bypass the function entirely and governance gaps inevitably widen.

Insight 6: AI Is a Stress Test of Security Fundamentals

AI doesn’t introduce new vulnerabilities, but it does accelerate those that already exist. Organizations that have neglected data classification, identity governance and enforcement fundamentals now face the consequences of these deficits at machine speed.

According to multiple CISOs, only an estimated 20% of organizations have the security maturity to safely implement AI at scale. For the remaining 80%, the consequences range from project failure and regulatory exposure to events serious enough to threaten organizational survival. This isn’t a future warning. It’s a description of conditions already in place.

Insight 7: High Data Trust Is a Competitive Accelerant

High data trust isn’t just about safety; it’s about speed. Clean, classified and governed data removes the friction that commonly stalls AI programs. Agents operate within known boundaries and enforcement becomes a capability, not a constraint. Security shifts from a checkpoint to a design partner.

For organizations building their AI foundation now, this competitive advantage is already widening: every new AI initiative runs on supportive infrastructure, every agent deployment stays within a governed perimeter and every program failure is diagnosable.

In contrast, organizations with data debt see the opposite dynamic, where every new deployment increases exposure. The confidence numbers in this research already reflect the measurable gap between these two groups.

MIND isn’t just mapping this divide. The AI-native data security company is minding the conditions that help organizations move to the right side of it, giving security teams the visibility, governance and enforcement infrastructure that high data trust requires, so that AI can become the accelerant it’s meant to be.

The Path Forward

The insights underscored in this blog describe a single problem from seven different perspectives. The root cause is structural: a gap between the pace of AI adoption and the data foundation required to support it.

To learn what forward-looking CISOs recommend as the minimum viable security requirements for AI success, download the full report. Use the insights to inform your next AI program conversation. Your business is already moving, and this research gives you the evidence and language to guide its direction.


