While AI reduces some coding flaws, credential sprawl is accelerating, expanding the non-human identity attack surface and making remediation the new security bottleneck.
AI is changing software development faster than most security teams can adapt. As coding assistants and autonomous agents become embedded in daily workflows, many assume traditional application security controls will steadily lose relevance. If machines can scan code, catch flaws, and even suggest safer alternatives in real time, then software risk should start to shrink.
But that’s not what is happening in the real world, according to GitGuardian’s security research.
The battle isn’t in the code anymore; AI is shifting the control point to the credentials, tokens, service accounts, and machine identities that AI systems need in order to access data and take action.
This matters because the attack surface has fundamentally changed. AI-assisted commits grew exponentially in 2025 and leaked secrets at roughly double the baseline rate. But the bigger shift is where those credentials now live: on developer machines running local agents, in MCP config files, across fragmented AI service stacks, and inside CI/CD runners where AI tooling is increasingly concentrated. Code is just where it starts. The exposure happens everywhere those credentials actually get used.
The fundamental security truth hasn’t changed: attackers don’t break in, they log in. What’s changed is the number of ways they can get those credentials, the speed at which new ones are created, and the complexity of tracking what exists and where it’s deployed. That’s why secrets security and non-human identity governance are foundational.
AI Is Accelerating Secrets Sprawl
The State of Secrets Sprawl report shows just how quickly this problem is growing. In 2025 alone, 28.65 million new hardcoded secrets were detected in public GitHub commits, a 34% year-over-year increase, the largest single-year jump ever recorded in the report. At the same time, 1,275,105 leaked secrets were tied specifically to AI services, up 81% from the prior year. Eight of the ten fastest-growing categories of leaked secrets were linked to AI services or AI infrastructure.
This is a direct consequence of how AI development actually works. Teams are not integrating one model and calling it a day. They are assembling full AI stacks made up of model providers, retrieval services, orchestration layers, vector databases, inference gateways, monitoring tools, and local agent frameworks. Every new service introduces another token, another API key, another service account, and another place where those credentials can be copied, exposed, or forgotten.
The result is that the AI boom is also becoming a machine identity boom. And every machine identity expands the security surface.
Claude Code Proves the Point
Anthropic’s Claude Code is one of the clearest signals of where development is heading, and GitGuardian’s 2026 report offers an early measurement of its security implications. Monthly Claude Code co-authored commits grew from just 22 in January 2025 to 2.16 million by December. Across the full year, those commits represented only 0.4% of scanned public commits, but accounted for 0.9% of all detected leaks.
The more important figure is the leak rate. Across all public GitHub commits in 2025, 1.5% contained an exposed credential. The Claude Code co-authored commits leaked a secret 3.2% of the time, roughly double the baseline. At the peak in August, Claude Code-assisted commits reached 31 leaked secrets per 1,000 commits, about 2.4 times the human baseline.
To be clear, the trend improved later in the year. By December, the leak rate had fallen to 13 secrets per 1,000 commits, essentially converging with the human baseline. The report associates that improvement with better models, including Claude Sonnet 4.5, and with more mature usage patterns.
This is encouraging, but it does not mean the problem is solved. Claude-assisted commits were also significantly larger on average for much of the year, which means more code, more generated context, and more room for a credential to slip through a single review unit. The lesson is not that AI coding assistants are inherently insecure. It is that faster software generation also creates faster credential exposure unless organizations adapt their controls around it.
The Overlooked AI Attack Surface Is the Developer Machine
And the problem does not stop at the repository. As AI-assisted development becomes more tool-driven, the attack surface extends to the places where those credentials are actually stored and used, especially the developer endpoint.
One of the biggest mistakes in software supply chain security is to think secrets only leak from source code repositories. In reality, the developer workstation is now a dense concentration point for credentials, context, and machine identities.
That matters even more in the AI era. Developers are writing code locally, but also running agents, local MCP servers, CLI tools, IDE extensions, build pipelines, container tooling, and retrieval workflows, all of which need credentials to function. Those credentials do not live neatly in one place. They spread across .env files, shell profiles, terminal history, IDE settings, cached tokens, build artifacts, local config files, and automation scripts. A single secret can end up replicated across a machine many times over, creating multiple theft paths from one original credential.
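As a rough illustration of how widely credentials scatter across a single machine, the sketch below walks a handful of common local files and flags credential-shaped strings. The paths and regex patterns are illustrative assumptions, not GitGuardian's detection rules; real scanners use far larger, provider-specific rule sets plus validity checks.

```python
import re
from pathlib import Path

# Illustrative locations where secrets commonly accumulate on a workstation.
CANDIDATE_PATHS = [
    ".env", ".bashrc", ".zsh_history", ".aws/credentials",
    ".config/gh/hosts.yml", ".docker/config.json",
]

# Illustrative token shapes only (GitHub classic PAT, generic API key).
PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?([A-Za-z0-9/+]{20,})"),
}

def scan_home(home: Path) -> dict[str, list[str]]:
    """Return {pattern_name: [files where it matched]} under one home dir."""
    hits: dict[str, list[str]] = {name: [] for name in PATTERNS}
    for rel in CANDIDATE_PATHS:
        path = home / rel
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for name, rx in PATTERNS.items():
            if rx.search(text):
                hits[name].append(str(path))
    return hits
```

Even this toy version makes the sprawl visible: one credential pasted into a `.env` file, then echoed in shell history and a config file, shows up as three separate theft paths.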
GitGuardian’s analysis of the Shai-Hulud 2 supply chain attack offers a rare empirical view into what that looks like on real development endpoints. Across 6,943 compromised machines, the research identified 294,842 secret occurrences, corresponding to 33,185 unique secrets, with at least 3,760 still valid at the time of analysis. On average, each live secret appeared in roughly eight different locations on the same machine. The core problem, in other words, is duplication: even if one copy is removed, several others may still exist locally.
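The occurrences-versus-unique-secrets distinction above can be sketched by fingerprinting each finding, so every unique credential maps to all the places it was copied. This is a minimal illustration of the counting logic, not GitGuardian's analysis pipeline; fingerprinting with a hash also lets counts be shared without storing plaintext secrets.

```python
import hashlib
from collections import defaultdict

def dedupe_occurrences(occurrences: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Group raw (secret_value, location) findings by a fingerprint of the
    secret value, so each unique credential maps to every location it
    appeared in. SHA-256 avoids keeping the plaintext secret around."""
    by_secret: dict[str, set[str]] = defaultdict(set)
    for value, location in occurrences:
        fp = hashlib.sha256(value.encode()).hexdigest()[:16]
        by_secret[fp].add(location)
    return by_secret

def avg_locations(by_secret: dict[str, set[str]]) -> float:
    """Average number of distinct locations per unique secret."""
    return sum(len(v) for v in by_secret.values()) / len(by_secret)
```

Run over the report's numbers, 294,842 occurrences collapsing to 33,185 unique secrets is exactly this calculation yielding roughly eight locations per secret.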
The distribution is just as revealing. About 44% of compromised machines held more than 10 secrets, and 5% contained more than 100. Source control tokens dominated the validated set: 581 GitHub personal access tokens, 386 OAuth tokens, 104 fine-grained PATs, and 101 GitLab tokens. These tokens can enable repository access, workflow manipulation, and lateral movement across the software supply chain.
This was not just a workstation problem. GitGuardian found that 59% of the compromised machines were CI/CD runners rather than personal developer laptops. That changes the risk model entirely. Once secrets sprawl into the build infrastructure, they stop being an individual hygiene issue and become an organizational exposure problem. A leaked credential on a runner can unlock shared pipelines, deployment paths, package publishing flows, or cloud-connected automation used by multiple teams.
More recently, the LiteLLM supply chain attack demonstrated the same pattern, with compromised packages harvesting SSH keys, cloud credentials, and API tokens from developer machines where AI development tools are increasingly concentrated.
This is why developer endpoint security deserves a central place in the AI security conversation. As AI coding assistants and local agent frameworks become part of daily development, the endpoint turns into the place where credentials, generated code, context windows, and operational tooling all meet. Protecting only the repository is no longer enough. Security teams need controls on the machine itself, including earlier scanning, better secret hygiene in local workflows, tighter token lifecycle management, and protections for the laptop and runner as first-class parts of the attack surface.
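One of the controls named above, earlier scanning in local workflows, can be sketched as a client-side pre-commit gate. This is a hedged illustration, not GitGuardian's product: it blocks a commit when the staged diff adds a line matching a known token shape. The regexes are illustrative assumptions about common token formats.

```python
import re

# Illustrative token shapes; real pre-commit scanners ship hundreds of
# provider-specific detectors plus entropy checks and validity probes.
TOKEN_RES = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub classic PAT
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # common AI-provider key shape
]

def added_secret_lines(diff_text: str) -> list[str]:
    """Return the added lines ('+' prefix) in a unified diff that look like
    credentials; '+++' file-header lines are skipped."""
    flagged = []
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            if any(rx.search(line) for rx in TOKEN_RES):
                flagged.append(line)
    return flagged

# To use this as a hook (.git/hooks/pre-commit), run `git diff --cached`,
# pass its stdout to added_secret_lines(), and exit non-zero when anything
# is flagged, which aborts the commit before the secret ever leaves the machine.
```

The point of running this on the endpoint rather than in CI is timing: a secret caught before the commit exists never needs rotation at all.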
The Security Model Has Changed
The industry is spending a lot of time debating whether AI will make traditional code security less valuable. That debate misses the most important shift. The real problem is no longer just what gets written into code. It is what AI systems need in order to operate across codebases, infrastructure, SaaS tools, and developer environments.
That’s why secrets security, non-human identity governance, and developer endpoint protection are becoming central to AI-era defense. AI may help reduce some coding flaws, and in some cases, it even helps rotate exposed credentials. But the hard part is still remediation in the real world, across real systems, with real operational dependencies.
The organizations that adapt fastest will be the ones that stop treating secrets as a narrow AppSec issue. In the AI era, credentials are the connective tissue between software, automation, and action. Protecting them is no longer a secondary control. It is the security model.
About the Author: Eric Fourrier is the CEO of GitGuardian, an end-to-end NHI security platform for enterprises. GitGuardian helps organizations take control of their NHI security by discovering all their secrets, prioritizing and remediating leaks at scale, protecting non-human identities, and reducing breach exposure. Widely adopted by developer communities, GitGuardian is used by over 600,000 developers and leading companies, including Snowflake, Orange, Iress, Mirantis, Maven Wave, ING, BASF, and Bouygues Telecom.