As artificial intelligence shifts from a behind-the-scenes tool to something that actively makes decisions, the question facing companies is how much AI can be trusted. In this interview with AI Journal, Ankush Gupta explains why that shift is forcing a fundamental rethink of cybersecurity. A senior solution architect and cybersecurity strategist, Gupta has spent the past several years operating at the intersection of AI, zero-trust architecture, and secure automation, building systems that are not only scalable but also defensible. His experience spans telecom, fintech, and large-scale enterprise environments, where the stakes are high and the margin for error is thin.
Gupta is also the creator of FOZTMA-CS, a zero-trust maturity framework designed for a world where AI systems are no longer passive. Drawing from his background in advanced engineering and intelligent systems, as well as his research in areas like explainable AI and distributed trust, he focuses on a core problem many organizations are only beginning to confront: what happens when the “user” inside your system is no longer human, but an autonomous agent capable of reasoning and acting on its own.
Before we get into the technical side, can you share how you originally got into AI and cybersecurity, and what drew you to focus on trust, security, and explainability?
I started exploring AI through cybersecurity, an area I have been working in for more than five years, and at the time there were various AI initiatives underway at the enterprise level. I spoke to my leaders and got the great opportunity to build a product using artificial intelligence and machine learning.
Because AI involves significant data exposure, I recognized the need for security to protect customers’ PII and business transaction details. On the organizational side, we received guidelines for protecting customer data using a zero-trust model and began focusing on security.
There’s a lot of talk right now about AI moving from passive tools to systems that actually make decisions and take action. What changes when AI becomes more agentic and autonomous in your view?
Yes, there is a lot of buzz about AI nowadays. Of course, smart systems are built on AI, which enables them to understand behavior and act on it.
A good example is the SOC, which evolves into an autonomous cyber defense system. In an agentic SOC, AI agents collect, correlate, decide, and execute, transforming how the system behaves.
The SOC becomes a mission control center, not a ticket queue, and the attack surface changes because AI can act. Once AI can take actions such as isolating hosts, rotating keys, blocking traffic, and modifying IAM policies, the risk surface expands.
That capability also introduces new attack vectors: agent hijacking (indirect prompt injection, poisoned logs, manipulated telemetry), toolchain abuse (agents misusing API permissions), memory poisoning, and autonomous lateral movement if an agent is compromised. This is why agent identity, least privilege, and Zero Trust for AI become mandatory.
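As a concrete illustration of agent identity plus least privilege, a deny-by-default permission gate around agent tool calls might look like the following minimal sketch; all agent names, action names, and the mapping itself are hypothetical, not taken from any specific product:

```python
# Minimal sketch of a least-privilege gate for agent tool calls.
# All identifiers below are illustrative assumptions.

AGENT_PERMISSIONS = {
    # Each agent identity maps to the only actions it may take.
    "triage-agent":      {"read_logs", "correlate_events"},
    "containment-agent": {"isolate_host", "block_traffic"},
    # Note: no agent is permitted to modify IAM policies autonomously.
}

class PermissionDenied(Exception):
    pass

def execute_tool(agent_id: str, action: str, target: str) -> None:
    """Run a tool call only if this agent is explicitly allowed to."""
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if action not in allowed:
        # Deny by default: an unlisted action is treated as hostile,
        # which limits blast radius if the agent is hijacked.
        raise PermissionDenied(f"{agent_id} may not perform {action}")
    print(f"[audit] {agent_id} -> {action}({target})")  # audit trail

execute_tool("containment-agent", "isolate_host", "srv-042")  # allowed
# execute_tool("triage-agent", "isolate_host", "srv-042")     # raises PermissionDenied
```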
I would also like to shed light on a non-obvious shift: AI becomes both a defender and an attack surface. As AI becomes central to defense, it also becomes a high-value target and a new insider risk we need to address.
From your experience, where do things typically break down when security and explainability are treated as afterthoughts rather than built in from the start?
When security and explainability are bolted on at the end instead of designed in from day zero, the system doesn’t just become weaker; it becomes fundamentally unstable. In real-world AI and cybersecurity deployments, these failures are predictable, repeatable, and often catastrophic. Here’s where things reliably go wrong:
- The data pipeline becomes a single point of failure
- Models become “black boxes” you can’t defend
- The system fails under real-world adversarial pressure
- Security controls don’t match the AI’s real behavior
- You can’t prove what the AI did or why
- Human oversight loops break down
- Retrofitting controls becomes technically impossible
When FOZTMA-CS is applied to AI agent environments, it becomes the governance layer for autonomous cyber defense. Agents are treated as first-class identities with defined trust boundaries, shifting Zero Trust from access control to what an autonomous actor is allowed to decide and do.
Identity becomes dynamic, blending cryptographic identity with behavioral patterns. Objectives become goal boundaries and constraints, while zoning defines autonomy levels from read-only to controlled execution. Trust is continuous, with real-time validation and drift detection, recognizing that AI agents are both decision-makers and attack surfaces that require strict oversight.
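To make those ideas concrete, here is a minimal sketch of agent identity with autonomy zoning and continuous trust in the spirit of FOZTMA-CS; the class, field, and level names are my own illustration, not the framework’s published API:

```python
from dataclasses import dataclass
from enum import IntEnum

class AutonomyZone(IntEnum):
    """Autonomy levels from read-only observation to controlled execution."""
    READ_ONLY = 0
    RECOMMEND = 1              # may propose actions; a human executes
    EXECUTE_WITH_APPROVAL = 2  # may act once a supervisor signs off
    CONTROLLED_EXECUTION = 3   # may act alone within goal boundaries

@dataclass
class AgentIdentity:
    agent_id: str              # tied to a cryptographic credential in practice
    zone: AutonomyZone
    goal_boundaries: set       # objectives the agent is allowed to pursue
    trust_score: float = 1.0   # continuously re-evaluated, never static

    def record_drift(self, deviation: float) -> None:
        """Behavioral drift lowers trust; trust is never granted once and kept."""
        self.trust_score = max(0.0, self.trust_score - deviation)
        if self.trust_score < 0.5:
            # Continuous validation: a drifting agent is demoted to read-only.
            self.zone = AutonomyZone.READ_ONLY

agent = AgentIdentity("soc-agent-7", AutonomyZone.CONTROLLED_EXECUTION,
                      {"contain_malware"})
agent.record_drift(0.6)        # anomalous behavior observed
print(agent.zone.name)         # READ_ONLY
```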
When an AI system is making real decisions in production, how do you make sure people can actually understand why it did something, not just accept the output?
We make AI decisions understandable by designing the system so that the reasoning itself becomes a first-class artifact: captured, well organized, structured, governed, and reviewable, not something guessed after the fact.
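To show what treating reasoning as a first-class artifact can mean in practice, here is a minimal sketch of a structured, append-only decision record written at decision time rather than reconstructed later; the schema, field names, and log format are illustrative assumptions:

```python
import json
from datetime import datetime, timezone

def record_decision(agent_id: str, action: str, evidence: list,
                    rationale: str, confidence: float) -> None:
    """Capture the reasoning behind an action as a structured, reviewable record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "evidence": evidence,    # the inputs the decision actually rested on
        "rationale": rationale,  # human-readable explanation, written at decision time
        "confidence": confidence,
    }
    # Append-only log so the trace can be audited later, not guessed after the fact.
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

record_decision(
    agent_id="containment-agent",
    action="isolate_host(srv-042)",
    evidence=["edr:alert-9913", "netflow:c2-beacon-pattern"],
    rationale="Host beaconing to known C2 infrastructure; isolation limits spread.",
    confidence=0.92,
)
```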
In high-stakes environments like telecom or financial systems, what does responsible AI actually look like day to day, beyond the buzzwords?
Responsible AI in telecom or financial systems is a daily operational discipline. In high-stakes environments, “responsible AI” shows up not in principles decks or ethics statements but in the repeatable, enforceable behaviors of the system and the humans running it.
Responsible AI in critical infrastructure means treating AI like a regulated, high-risk actor with identity, permissions, oversight, auditability, and continuous validation, not like a clever tool. In high‑stakes environments, humans don’t disappear from the loop. Daily oversight includes:
- Analysts reviewing AI‑generated explanations
- Supervisors approving high-risk actions (enforceable in code, as sketched after this list)
- Engineers validating reasoning traces
- Compliance teams sampling decisions for audit
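Here is a minimal sketch of how the supervisor-approval step for high-risk actions can be enforced rather than left as policy; the action names and the approval mechanism are hypothetical:

```python
from typing import Optional

# Illustrative set of actions that always require human sign-off.
HIGH_RISK_ACTIONS = {"modify_iam_policy", "rotate_keys", "block_subnet"}

def request_action(action: str, approved_by: Optional[str] = None) -> bool:
    """Low-risk actions run autonomously; high-risk ones wait for a named approver."""
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        print(f"[queue] {action} held for supervisor approval")
        return False
    print(f"[run] {action} (approved by: {approved_by or 'autonomous'})")
    return True

request_action("isolate_host")                           # runs autonomously
request_action("modify_iam_policy")                      # held for approval
request_action("modify_iam_policy", approved_by="jdoe")  # runs after sign-off
```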
In telecom and financial systems, we are long past the point where AI is a novelty. These environments underpin modern life: the networks that carry our conversations, the rails that move our money, and the systems that keep economies stable and societies connected. When AI enters these domains, it doesn’t enter as a toy. It enters as an actor. Responsible AI in high-stakes environments is therefore a daily operational discipline, a way of engineering, governing, and supervising intelligent systems that are now capable of making real decisions with real consequences.
As AI agents begin handling workflows autonomously, how should companies rethink identity, permissions, and monitoring to maintain control?
When AI agents begin initiating and completing workflows autonomously, identity, permissions, and monitoring become core control layers. The model shifts from AI as a tool to AI as an operational actor, requiring a different security approach.
Each agent must have a distinct identity, operate within strict policy boundaries, and remain continuously observable. Identity becomes first class, permissions shift toward limiting autonomy, and monitoring focuses on behavior rather than just outputs.
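As a rough illustration of monitoring behavior rather than just outputs, the sketch below flags an agent whose mix of actions drifts from its historical baseline; the baseline counts, action names, and threshold are invented for the example:

```python
from collections import Counter

# Illustrative baseline: how often each action normally appears
# in this agent's behavior profile.
BASELINE = Counter({"read_logs": 90, "correlate_events": 9, "isolate_host": 1})

def behavior_drift(observed: Counter) -> float:
    """Crude drift score: total variation distance between action distributions."""
    actions = set(BASELINE) | set(observed)
    base_total = sum(BASELINE.values()) or 1
    obs_total = sum(observed.values()) or 1
    return 0.5 * sum(
        abs(BASELINE[a] / base_total - observed[a] / obs_total) for a in actions
    )

recent = Counter({"read_logs": 5, "isolate_host": 40, "rotate_keys": 15})
score = behavior_drift(recent)
if score > 0.3:
    # A drifting agent is quarantined until its identity and goals are re-validated.
    print(f"drift={score:.2f}: quarantine agent and require re-attestation")
```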
Looking ahead a bit, what do you think will separate AI systems people truly trust from the ones they end up pulling back from?
People will not pull back from AI systems they truly trust. The AI systems people truly trust in the next decade won’t be the ones that are the smartest, fastest, or most “human-like.” They’ll be the ones that are predictable, governable, observable, and aligned even under pressure, even when they’re wrong, even when the environment is adversarial.
