AI in the Workforce: Why Security Measures Lag Behind Autonomous Agents


Conventional software deployment involves systems engineered to follow a fixed, predefined set of rules. Adding an AI agent, by contrast, is akin to adding a new member to the workforce: one that can tap into sensitive information, make high-impact decisions, and act within a defined functional scope. The difference is that this ‘employee’ not only interfaces directly with APIs but can also initiate automated workflows and operate at machine speed.

Existing security architectures were not designed to handle an autonomous digital participant. AI agents need both the identity governance applied to human users and the control frameworks used for software systems. This introduces an entirely new category of risk. Preparing the enterprise for it requires us to acknowledge AI agents for what they are: a new class of internal digital actors that combine human-like autonomy with broad system-level access.

From conversational interfaces to independent systems

In the earlier phase of enterprise AI adoption, usage was largely prompt-driven. A user would submit a query, the system would return an output, and a human would determine the next step. Risk considerations focused on known chatbot issues—fabricated responses, prompt manipulation, boundary bypassing, and unintended data exposure. Importantly, a human remained the final gate before any real-world action occurred.

That paradigm is now being redefined. In the agentic phase, AI systems can independently query databases, orchestrate multiple tools, combine datasets in unforeseen ways, and even trigger downstream processes, all without human validation at each step. The risk has therefore shifted from what these systems generate to what they can reach and operationalise.

Across many organisations, these systems operate using shared or system-level credentials that carry far more access than their specific function demands. This necessitates a shift from purely software-centric security toward a broader focus on identity control and access governance.
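The over-provisioning problem is easy to see in miniature. Below is a minimal Python sketch, with hypothetical agent names and scopes, of how a single shared service account ends up carrying the union of every agent’s requirements, so that each agent inherits access it never asked for.

```python
# Minimal sketch: why shared credentials over-provision agents.
# All agent names and scopes are hypothetical illustrations.

AGENT_NEEDS = {
    "sales-summariser": {"read:sales_db"},
    "invoice-bot":      {"read:billing_db", "write:erp_queue"},
    "hr-assistant":     {"read:hr_records"},
}

# One shared service account must carry the union of every agent's
# needs, so each agent effectively inherits access it never required.
shared_scope = set().union(*AGENT_NEEDS.values())

for agent, needed in AGENT_NEEDS.items():
    excess = shared_scope - needed
    print(f"{agent}: excess scopes via shared credential -> {sorted(excess)}")
```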

Where agentic systems create hidden exposure

As enterprises link AI agents together, the agents can call each other’s APIs, move across environments, and operate under varying identity contexts, and the risk multiplies accordingly. Execution paths become unpredictable. For instance, an agent designed to summarise sales data could, through a chain of tool interactions, end up reaching systems and datasets far outside its intended scope.

Model-level safeguards do not address this kind of exposure. The real risk lies beyond the model itself – in the interactions between systems. It emerges in handoffs between agents, in the absence of explicitly assigned permissions, and in fragmented ownership where no single team has end-to-end visibility.

This exposure is partly intentional. Since agent workflows are inherently non-linear, teams often grant expansive access to avoid limiting performance. The same adaptability that makes agents effective also makes them difficult to control.

Traditional security models assume that access can be predefined and behaviour reviewed retrospectively. AI agents invalidate both assumptions. Their access requirements shift dynamically with context, and their execution patterns are difficult to forecast. Meanwhile, logging systems across diverse platforms are often disjointed, offering no unified view of what agents are doing or why.

The outcome is a steadily expanding set of blind spots. When the activity inside those blind spots unfolds at machine speed, it represents a fundamentally different level of risk from human-scale gaps.

What security teams need to prioritise now

The foundations for managing this risk already exist within most security programs. The difficulty lies in adapting them to this new class of digital actor. Some key considerations:

  • Recognise and manage AI as non-human identities: Every AI component, whether an agent, a connector, or a service account, must be assigned unique, traceable credentials. While this may seem straightforward, many organisations lack a comprehensive inventory of the AI entities in their environment. Without that baseline, governance is impossible (a minimal registry sketch follows this list).
  • Implement strict least-privilege access and sustain it: Agents should be granted only the minimum access necessary to perform their designated function. If an agent operates on a limited schedule, its access should reflect that. Given the tendency to over-provision under uncertainty, ongoing monitoring for deviations is essential.
  • Treat access control as a continuous lifecycle: Permissions that were appropriate at launch can quickly become excessive as workflows evolve. Continuous access reviews and behavioural anomaly detection are necessary to keep controls effective over time (see the drift-detection sketch after this list).
  • Understand and control system interconnections: Clearly document API interactions, data flows, and integrations associated with AI agents. Enforce consistent, identity-aware policies across all environments they interact with. Without this, permission drift becomes inevitable, and in an agentic ecosystem, drift translates into unintended and accumulating access.
  • Build unified visibility across environments: Logging and monitoring capabilities should be a core requirement when adopting any AI platform. Systems should be able to consolidate activity data into a centralised layer, enabling consistent analysis and oversight.
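As a concrete illustration of the first two points, here is a minimal Python sketch of a non-human identity registry with deny-by-default, schedule-aware authorisation. The registry, agent names, scopes, and access windows are all assumptions made for illustration, not any particular product’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of a non-human identity registry with least-privilege
# checks. All names and scopes are hypothetical illustrations.

@dataclass
class AgentIdentity:
    agent_id: str                  # unique, traceable credential per agent
    owner_team: str                # accountable owner, never "shared"
    granted_scopes: set = field(default_factory=set)
    active_hours: tuple = (0, 24)  # schedule-bound access window (UTC hours)

REGISTRY: dict[str, AgentIdentity] = {}

def register(agent: AgentIdentity) -> None:
    """Every agent, connector, or service account gets its own entry."""
    REGISTRY[agent.agent_id] = agent

def authorize(agent_id: str, scope: str) -> bool:
    """Deny by default: unknown identities and out-of-scope or
    out-of-schedule requests are all refused."""
    agent = REGISTRY.get(agent_id)
    if agent is None:
        return False                      # unregistered = unmanaged = denied
    start, end = agent.active_hours
    if not (start <= datetime.now(timezone.utc).hour < end):
        return False                      # access mirrors the agent's schedule
    return scope in agent.granted_scopes  # least privilege, nothing implicit

register(AgentIdentity("sales-summariser-01", "analytics",
                       {"read:sales_db"}, active_hours=(6, 20)))
print(authorize("sales-summariser-01", "read:sales_db"))    # True inside 06:00-20:00 UTC
print(authorize("sales-summariser-01", "write:erp_queue"))  # False: scope never granted
print(authorize("unknown-agent", "read:sales_db"))          # False: not in the inventory
```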
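For the lifecycle and visibility points, the sketch below shows what continuous review over a consolidated log can look like: once activity from every platform is normalised into one schema, flagging agents that exercise scopes outside their current grants becomes a simple comparison. The event format and grants are illustrative assumptions.

```python
from collections import defaultdict

# Minimal sketch of permission-drift detection over a consolidated
# activity log. Events are assumed to be normalised into one schema
# (agent, scope) regardless of the platform that emitted them.

GRANTED = {
    "sales-summariser-01": {"read:sales_db"},
}

consolidated_log = [  # would be fed from each platform's log exporter
    {"agent": "sales-summariser-01", "scope": "read:sales_db"},
    {"agent": "sales-summariser-01", "scope": "read:hr_records"},  # drift
]

def detect_drift(events):
    """Flag any agent exercising a scope outside its current grant."""
    findings = defaultdict(set)
    for e in events:
        if e["scope"] not in GRANTED.get(e["agent"], set()):
            findings[e["agent"]].add(e["scope"])
    return findings

for agent, scopes in detect_drift(consolidated_log).items():
    print(f"review {agent}: ungranted scopes used -> {sorted(scopes)}")
```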

As agents become more autonomous, organisations must develop true AI observability. They need visibility not only into the agent’s actions, but also the rationale behind them. An audit log that merely records execution is insufficient. Security teams need insight into the decision pathways, data interactions, and tool usage that informed those actions.
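What such a decision-aware audit record might contain can be sketched as a structure: beyond recording that an action executed, it captures the triggering request, the tool chain that led to the action, and the identity context in which it ran. All field names and values below are hypothetical, not a standard.

```python
import json
from datetime import datetime, timezone

# Illustrative shape of a decision-aware audit event. A bare execution
# log would stop at "action"; the extra fields record why the agent
# acted and what it touched along the way.

audit_event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "agent_id": "sales-summariser-01",
    "action": "query",                      # what happened (execution log)
    "target": "sales_db.q3_pipeline",       # which system/data was touched
    "triggering_request": "summarise Q3 EMEA pipeline",
    "decision_path": [                      # why: the chain that led here
        {"step": "plan", "tool": "sql_generator", "reason": "needs raw rows"},
        {"step": "act",  "tool": "warehouse_query", "rows_read": 1842},
    ],
    "identity_context": {"role": "analytics-readonly", "session": "s-9f2c"},
}

print(json.dumps(audit_event, indent=2))
```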

Reframing AI as a high-privilege internal entity

The next generation of AI-related security breakdowns is unlikely to stem from external adversaries. Instead, it will arise from agents operating with excessive permissions and limited oversight. This is the defining insider risk of the AI era: not intentional misuse, but automation outpacing governance. As enterprises scale their use of agents to access data, integrate systems, and initiate workflows, even minor control gaps can escalate rapidly at machine speed.

Addressing this risk requires a strong focus on identity, privilege management, and governance frameworks. Security leaders must have clear, continuous visibility into what agents can access, what actions they are capable of, and how these permissions change over time. If AI functions like a member of the workforce, then it must be governed with the same rigour.

The author is Rangarajan Srirangam, Senior RVP, Solution Engineering – India, Snowflake.

Disclaimer: The views expressed are solely those of the author, and ETCISO does not necessarily subscribe to them. ETCISO shall not be responsible for any damage caused to any person or organisation, directly or indirectly.

Published On Apr 21, 2026 at 08:53 AM IST
