Artificial Intelligence (AI) is no longer simply a driver of innovation; it is a structural force reshaping global security, governance, and the conditions of human agency. As AI capabilities advance at unprecedented speed, the gap between technological power and political oversight is widening into a systemic risk (United Nations, 2024). For the disarmament and human security community, this is not a gradual evolution, but a strategic inflection point — one in which the preservation of human agency becomes inseparable from the preservation of strategic stability itself.
AI and the Erosion of Traditional Arms Control
Traditional arms control frameworks—such as the Treaty on the Non-Proliferation of Nuclear Weapons, the Biological Weapons Convention, and the Chemical Weapons Convention—were built on the assumption that dangerous capabilities depend on access to materials, infrastructure, and specialized expertise. AI disrupts this foundation by decoupling capability from physical constraints and embedding strategic power in software, data, and algorithms (Stockholm International Peace Research Institute).
Commercially developed AI systems are increasingly integrated into military functions, including intelligence analysis, surveillance, and target identification. In recent conflicts, algorithmic tools have accelerated targeting processes to speeds beyond meaningful human deliberation. This raises a critical legal and ethical question: who is accountable when a semi-autonomous system contributes to a lethal error? The emerging accountability gap challenges the applicability of existing international humanitarian law and underscores the urgency of reaffirming meaningful human control over the use of force (International Committee of the Red Cross, 2021). Current multilateral debates on Lethal Autonomous Weapons Systems at the United Nations illustrate both the recognition of this risk and the political difficulties of constraining it (United Nations Institute for Disarmament Research).
AI also lowers barriers to the generation and dissemination of sensitive knowledge. Advanced models can simulate chemical interactions or assist in designing molecular structures, capabilities that—while valuable for scientific progress—may also be misused. The risk is no longer limited to the possession of prohibited materials, but extends to access to algorithmically generated knowledge. Disarmament must therefore address not only weapons, but the cognitive and digital infrastructure that enables their development and proliferation (United Nations Institute for Disarmament Research).
Human Security Under Algorithmic Governance
At the level of human security, AI is transforming civilian life in ways that carry profound risks. The deployment of facial recognition technologies in public spaces has enabled unprecedented forms of mass surveillance that chill dissent and civic participation. Algorithmic decision-making systems in employment, finance, migration control, and public services can reproduce and amplify structural inequalities, often under a veneer of neutrality (Organisation for Economic Co-operation and Development, 2019).
Meanwhile, synthetic media technologies have demonstrated the capacity to fabricate highly realistic audio and video content, undermining the integrity of information ecosystems. Manipulated media involving public figures and electoral processes have circulated widely, illustrating how easily AI can erode the distinction between truth and fabrication (United Nations, 2024). These developments do not merely affect individual rights; they corrode the shared reality upon which democratic governance depends. Human security is not only about protection from physical harm—it is about preserving dignity, trust, and the capacity for informed participation in public life.
Environmental and Justice Dimensions of AI
AI also carries significant environmental implications. Training large-scale models requires substantial computational resources, consuming electricity on a scale comparable to that of small urban centers and placing growing demands on water resources for data center cooling (Organisation for Economic Co-operation and Development). These environmental costs are often borne disproportionately by regions with limited regulatory capacity or existing resource constraints. As a result, communities that contribute least to AI innovation may pay the highest price in water stress, pollution, and energy burdens, deepening global inequities and environmental injustice.
A Governance Agenda for Disarmament and Human Security
Despite these multidimensional risks, governance efforts remain fragmented and insufficient. Policy inertia is often justified by persistent narratives: that technological innovation cannot be effectively regulated, that markets will self-correct, and that geopolitical competition necessitates regulatory restraint. Such assumptions are not only misleading—they are dangerous. The trajectory of AI is not predetermined; it is the outcome of political and ethical choices (United Nations, 2024).
Addressing this challenge requires a coordinated and forward-looking policy response grounded in disarmament and human security principles. States should advance international discussions on Lethal Autonomous Weapons Systems with the aim of establishing clear norms that ensure meaningful human control over the use of force (United Nations Institute for Disarmament Research). Existing legal frameworks, including the Geneva Conventions and the Arms Trade Treaty, should be interpreted and, where necessary, adapted to address AI-enabled systems (International Committee of the Red Cross).
Governments must also implement pre-deployment risk assessment mechanisms for high-impact AI applications, particularly those with dual-use potential. Transparency requirements, independent auditing, and accountability across the AI lifecycle are essential to closing the accountability gap (Organisation for Economic Co-operation and Development).
Finally, the international community must strengthen multilateral coordination to prevent an unregulated race toward technological dominance. AI risks are inherently transboundary and cannot be effectively managed by any single state (Stockholm International Peace Research Institute).
Conclusion
The central challenge posed by AI is not that machines will act independently of humans, but that critical systems may operate independently of human values and public accountability. Delay in governance is no longer a neutral position; it is a strategic risk multiplier.
Preserving human agency is therefore not an abstract ethical goal; it is a core imperative for disarmament and human security in the age of artificial intelligence. AI must remain a tool that serves humanity—not a force that redefines it without consent.
Dr. Ghassan Shahrour, Coordinator of Arab Human Security Network, is a medical doctor, prolific writer, and human rights advocate specializing in health, disability, disarmament, and human security. He has contributed to global campaigns for peace, disarmament, and the rights of persons with disabilities.
