Agentic AI and India’s Nuclear Security: Threat, Opportunity, and Doctrine


In early 2026, the United States military crossed a threshold that strategists had long theorised about but few anticipated so soon. Anthropic’s large language model Claude was deployed operationally by the Department of War (DoW) in two kinetic operations: the effort to apprehend Venezuelan President Nicolás Maduro, and the subsequent campaign of airstrikes targeting Iran. Embedded within Palantir’s Maven Smart System and cleared for DISA Impact Level 6 accreditation (the highest security authorisation within the DoW Cloud Computing Security Requirements Guide), Claude was tasked with synthesising satellite imagery, signals intelligence, and surveillance feeds to generate real-time targeting packages with precise GPS coordinates and strike prioritisation, representing the first large-scale deployment of generative AI in a military campaign.

The episode exposed a governance paradox. Anthropic had long sought to preserve contractual red lines: prohibitions against fully autonomous lethal targeting and mass domestic surveillance. Despite this, operational use of the model continued throughout the campaign, outpacing both its creator’s governance frameworks and the Pentagon’s own administrative controls.

For the Indian security establishment, this sequence of events is not a peripheral development in American civil-military relations. It constitutes a signal of foundational importance regarding the character of contemporary and future warfare and demands an urgent recalibration of India’s national security posture.

Agentic AI’s Transformative Impact on the Battlefield

Conventional military AI functions as a decision-support instrument: it scans information, identifies anomalies, and presents recommendations, while a human operator retains authority at each consequential juncture – the human-in-the-loop model. Agentic AI is categorically different in its operational logic. It pursues defined objectives across multiple sequential steps, employs tools autonomously, self-corrects in response to changing conditions, and operates at computational speeds that structurally bypass human cognition.


China’s People’s Liberation Army (PLA) has constructed its entire military modernisation trajectory around this capability. Under President Xi Jinping’s directive to achieve “intelligentised warfare,” the PLA aims for full modernisation by 2035, with the military-civil fusion doctrine ensuring that commercial AI advances are systematically integrated into operational architecture. It has already deployed AI-powered drones and robotic platforms along the Line of Actual Control (LAC) facing India, enabling persistent surveillance of high-altitude terrain where sustained human deployment is untenable. This growing asymmetry in China’s autonomous capabilities poses “serious capability-related, operational, and logistical challenges” for India. The potential transfer of these capabilities to Pakistan, under its deep military-technology cooperation with China, compounds the risk. This scenario presents a two-front environment for India – one that its current pace of technology adoption is ill-equipped to address. Efforts such as Innovations for Defence Excellence (iDEX) are meaningful, but they are still at an early stage.

The Underestimated Imperative of Securing Critical Infrastructure

The most consequential dimension of agentic AI for Indian national security is not the kinetic battlefield but the targeting of critical national infrastructure before kinetic engagement commences. As has been seen in Ukraine and West Asia, critical infrastructure such as power generation plants, financial systems, telecommunications networks, and space assets often become the first-strike objectives whose degradation can substantially impair a nation’s capacity to respond and retaliate.

India has already experienced this scenario. In October 2020 (just five months after the Galwan Valley clash between the Indian Army and the PLA), a power outage crippled Mumbai, halting rail services, disrupting hospitals, and shutting stock markets. The outage was attributed to a Chinese state-sponsored group, RedEcho, which had established footholds in ten Indian power sector organisations, including four of five Regional Load Despatch Centres, consistent with pre-positioning for contingency operations rather than espionage alone. A similar incident occurred in April 2022, when the same threat actor targeted power grids in Ladakh.


Within India’s critical infrastructure landscape, nuclear establishments constitute a category of singular consequence. The 2019 cyberattack on the Kudankulam Nuclear Power Plant saw Dtrack malware gain domain controller-level access to the plant’s administrative network. The plant operator, Nuclear Power Corporation of India Limited, initially denied any breach, only to retract within twenty-four hours. Dragos confirmed hard-coded facility-specific credentials within the malware, indicating a targeted intrusion pre-positioned against the facility.

However, the most serious precedent predates Kudankulam by a decade. The Stuxnet malware, deployed against Iran’s Natanz facility from 2009, demonstrated that air-gapped industrial control systems are not impenetrable. Transmitted via infected USB drives, Stuxnet destroyed over 1,000 centrifuges while feeding operators falsified sensor data displaying normal conditions for months. More importantly, the worm had reportedly infected 80,000 computer systems at critical infrastructure sites. The lesson: physical access vectors and patient multi-stage execution can defeat air-gap architectures regarded as absolute.

Agentic AI exacerbates this vulnerability. No human Security Operations Centre can simultaneously monitor the millions of nodes constituting India’s power grid, financial stack, telecommunications, and space infrastructure. An agentic system can continuously correlate anomalies across sectoral boundaries to identify coordinated attack signatures before they manifest.

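The cross-sector correlation described above can be sketched in a few lines. A minimal illustration, in which the event sources, field names, correlation window, and sector threshold are all hypothetical, chosen only to show the kind of pattern-matching an agentic monitor would perform continuously and at machine speed:

```python
from datetime import datetime, timedelta

# Hypothetical parameters: how close in time anomalies must fall, and how
# many distinct sectors must co-occur, before flagging coordination.
WINDOW = timedelta(minutes=30)
MIN_SECTORS = 2

def coordinated_signatures(events):
    """events: list of (timestamp, sector, description) tuples,
    assumed pre-sorted by timestamp. Returns clusters of anomalies
    that span multiple sectors within the correlation window."""
    flagged = []
    for i, (t0, s0, _) in enumerate(events):
        sectors = {s0}
        cluster = [events[i]]
        for t1, s1, d1 in events[i + 1:]:
            if t1 - t0 > WINDOW:
                break  # events are sorted; nothing later can be in-window
            sectors.add(s1)
            cluster.append((t1, s1, d1))
        if len(sectors) >= MIN_SECTORS:
            flagged.append(cluster)
    return flagged

# Toy event feed (entirely fabricated for illustration).
events = [
    (datetime(2026, 1, 5, 2, 10), "power", "unexpected SCADA login"),
    (datetime(2026, 1, 5, 2, 25), "telecom", "BGP route anomaly"),
    (datetime(2026, 1, 5, 9, 0), "finance", "failed-auth burst"),
]
alerts = coordinated_signatures(events)
print(len(alerts))  # → 1: the power and telecom anomalies co-occur
```

A production system would of course operate over streaming telemetry with statistical baselining rather than fixed thresholds; the sketch only captures the structural advantage of correlating across sectoral boundaries that siloed human teams cannot watch simultaneously.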

The case for agentic AI in nuclear security rests on capabilities that Stuxnet and Kudankulam exposed as absent. A framework provisionally termed NuclearOS would deploy purpose-built agents governing nuclear C3 architecture. Such agents could process satellite, radar, and SIGINT data in real time, cross-verifying outputs to discount falsified readings that concealed Stuxnet’s activity. They could monitor access patterns to flag insider-threat indicators and initiate containment before meaningful access is gained. These capabilities can be integrated with air-gapped systems through data diodes that preserve control network isolation.
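The cross-verification idea can be illustrated with a minimal sketch. The feed names, readings, and tolerance below are hypothetical; the point is that a falsified console feed, of the kind Stuxnet used to display normal conditions, stands out when checked against independent physical sensors reporting the same quantity:

```python
from statistics import median

# Hypothetical tolerance: relative deviation from the consensus of the
# other feeds beyond which a source is flagged as potentially falsified.
TOLERANCE = 0.05

def suspect_feeds(readings):
    """readings: dict mapping source name -> reported value of the same
    physical quantity. Flags sources that diverge from the median of
    the remaining sources by more than TOLERANCE."""
    suspects = []
    for name, value in readings.items():
        others = [v for n, v in readings.items() if n != name]
        baseline = median(others)
        if baseline and abs(value - baseline) / abs(baseline) > TOLERANCE:
            suspects.append(name)
    return suspects

# Toy readings of rotor speed (fabricated): the console telemetry has
# been tampered with, while independent sensors agree with each other.
readings = {
    "plc_telemetry": 1064.0,   # what the operator console displays
    "vibration_sensor": 1410.0,
    "acoustic_monitor": 1395.0,
    "power_draw_model": 1402.0,
}
print(suspect_feeds(readings))  # → ['plc_telemetry']
```

Real cross-verification would weigh sensor noise models and physical plausibility rather than a flat percentage, but the design choice it illustrates is the one the text describes: no single feed, including the operator console, is trusted as ground truth.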

The foundational principle governing any NuclearOS implementation is unambiguous: agentic AI performs only an advisory and detection function, with authority over all consequential decisions residing exclusively with designated human personnel.

Near-term threats include Chinese agentic cyber operations targeting power and financial infrastructure, AI-enabled ISR degradation along the LAC, and autonomous drone attacks on forward bases. Medium-term risks encompass AI-enabled information warfare and autonomous undersea threats to India’s second-strike capability. The longer horizon adds non-state actor access to commercial agentic AI, supply chain infiltration of India’s defence procurement, and “flash escalation” where autonomous systems interact faster than human decision-makers can intervene.

Deterrence, Strategic Dividends, and the Nuclear Paradox

Agentic AI offers India substantive deterrence dividends. Persistent autonomous surveillance along the LAC addresses the intelligence deficiency that has defined India’s strategic vulnerability since 1962. AI-enabled swarm systems offer cost-asymmetric denial deterrence without symmetric force expansion. Agentic attribution systems capable of rapidly identifying state-sponsored intrusions enable calibrated sub-conventional retaliation in the grey zone, where India currently lacks adequate instruments of response.

The nuclear dimension demands the most cautious policy design. India’s No First Use (NFU) doctrine depends on a credible second strike. Agentic AI in an early-warning architecture, governed along NuclearOS lines, can strengthen NFU credibility by ensuring reliable detection of incoming strikes. However, the same technology in adversarial hands may generate “use it or lose it” pressure on nuclear command structures, compressing political decision windows beyond what existing crisis frameworks were designed to manage.


A February 2026 preprint study placed three frontier AI models in 21 simulated nuclear crises. Nuclear escalation was near-universal: 95 percent of scenarios saw tactical nuclear deployment, and de-escalatory options went entirely unused. The central paradox India must plan around is that individual deterrence strength and collective strategic instability can rise simultaneously.

Therefore, India’s doctrinal position must be unambiguous: agentic AI may inform nuclear decision-making but must never substitute human authority within NC3 architecture. India should pursue Track 1.5 dialogue with China on behavioural norms for AI systems in proximity to nuclear decision processes – among the most consequential conversations absent from current diplomatic channels.

India needs to act fast to tackle this challenge. Potential pathways include expeditiously publishing a national military AI doctrine, building a coordinated defence grid to protect critical infrastructure from AI-enabled threats, and advocating the ‘meaningful human control’ standard at the diplomatic level.

Conclusion

The rapidly evolving agentic AI landscape and its impact on the battlefield and on critical nuclear infrastructure demand that India move beyond reactive posturing. India needs not just adaptation to emerging threats but also sustained investment in harnessing AI to transform defence from reactive to proactive and to build resilience. The cost of inaction is not merely strategic disadvantage; it is the erosion of deterrence itself.


Sameer Patil is the Director of the Centre for Security, Strategy, and Technology at the Observer Research Foundation.

Kavya Wadhwa is a nuclear energy advocate and policy analyst dedicated to promoting sustainable energy solutions and driving policy reforms. His research primarily focuses on nuclear energy, nuclear security, and climate change.

The views expressed above belong to the author(s).


