AI vs AI - A Battlefield Clash of Algorithms
Operation Midnight Hammer showcased the U.S. military’s sophisticated AI capabilities against Iran, but what would happen if the target, such as China, possessed equally advanced AI?
This scenario transforms warfare into a high-stakes contest of algorithms, where ‘knocking out’ an enemy’s AI becomes the ultimate strategic goal.
The Algorithm Arms Race
In conflicts between AI-powered peers (e.g. USA vs China), victory would increasingly hinge on disabling the other side’s AI systems through a range of advanced tactics:
Cyber Warfare: Infiltrating networks to corrupt data or implant malware that sabotages decision-making.
Electronic Warfare: Jamming sensors, spoofing communications, and blinding AI-dependent systems.
Adaptive Countermeasures: Deploying AI that learns and evolves to resist attacks, creating a real-time ‘arms race’ of algorithms, as the sketch below illustrates.
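To make that ‘arms race’ concrete, here is a minimal, hypothetical Python sketch of the attack-defence loop: a signature-based defender learns from each successful breach, while the attacker mutates its payload to evade detection. The classes, the signature blocklist, and the mutation step are illustrative stand-ins, not real intrusion-detection or evasion techniques.

```python
import random

# Hypothetical sketch of an adaptive attack/defence loop. The signature
# blocklist and the attacker's mutation step are illustrative stand-ins.

class Defender:
    def __init__(self):
        self.known_signatures = set()

    def detects(self, payload: str) -> bool:
        # Flag any payload whose signature matches a past attack.
        return payload in self.known_signatures

    def learn(self, payload: str) -> None:
        # Adapt: add the successful attack's signature to the blocklist.
        self.known_signatures.add(payload)

class Attacker:
    def mutate(self, payload: str) -> str:
        # Evolve the attack slightly to slip past signature matching.
        return payload + random.choice("abcdef")

defender, attacker = Defender(), Attacker()
payload = "exploit"
for round_no in range(1, 6):
    if defender.detects(payload):
        payload = attacker.mutate(payload)   # blocked -> attacker adapts
        print(f"round {round_no}: blocked, attacker mutates")
    else:
        defender.learn(payload)              # breach -> defender adapts
        print(f"round {round_no}: breach, defender learns signature")
```

Each side’s adaptation forces the other to adapt in turn; neither loop ever reaches a stable end state, which is precisely the dynamic the bullet describes.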
Unlike Iran, whose defences were limited, China is actively developing AI for ‘intelligentised warfare,’ including autonomous cyber counterstrikes and AI-driven deception tactics.
Its systems could autonomously re-route command structures, deploy sophisticated decoys, or launch counter-cyber strikes, turning a surgical strike into a prolonged battle of wits between machines.
The Autonomous Battlefield
If both sides deployed advanced AI, the nature of warfare would evolve unpredictably:
Reduced Human Oversight: AI systems could make microsecond decisions on targeting, defence, and escalation, often without human input. This risks accidental escalation if, for example, an algorithm misinterprets a radar anomaly as an attack.
Escalation Risks: AI misinterpretations or self-preservation behaviours could trigger unintended conflicts, echoing the rogue-AI scenarios of Mission: Impossible.
Invisible Wars: Battles would unfold in the cyber and electromagnetic-spectrum domains, where AI systems duel through deception, adaptation, and information dominance; a hypothetical sketch of one such defensive check follows this list.
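As one concrete illustration of a duel in the electromagnetic domain, the sketch below shows a simple plausibility check against sensor spoofing: a reported GPS fix is compared with a dead-reckoned estimate from inertial data, and rejected if the two diverge implausibly. The coordinates, velocities, and 50-metre error bound are invented for illustration, not drawn from any real system.

```python
import math

# Hypothetical anti-spoofing sanity check: accept a GPS fix only if it
# sits close to a position dead-reckoned from trusted inertial data.

def dead_reckon(last_pos, velocity, dt):
    """Predict position from the last trusted fix and measured velocity."""
    return (last_pos[0] + velocity[0] * dt, last_pos[1] + velocity[1] * dt)

def is_plausible(gps_fix, predicted, max_error_m=50.0):
    """Accept the GPS fix only if it lies near the inertial prediction."""
    dx, dy = gps_fix[0] - predicted[0], gps_fix[1] - predicted[1]
    return math.hypot(dx, dy) <= max_error_m

last_trusted = (0.0, 0.0)        # metres in a local frame (illustrative)
velocity = (10.0, 0.0)           # m/s from the inertial unit (illustrative)
predicted = dead_reckon(last_trusted, velocity, dt=1.0)

genuine_fix = (10.5, 0.3)        # close to prediction -> accepted
spoofed_fix = (950.0, -120.0)    # wildly off -> rejected as likely spoofing

print(is_plausible(genuine_fix, predicted))   # True
print(is_plausible(spoofed_fix, predicted))   # False
```

The deception-versus-adaptation dynamic lives in exactly this kind of cross-check: the spoofer must now fool two independent sensors consistently, and the defender must decide how tight a bound it can afford before rejecting genuine fixes.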
The Rogue AI Threat
Operation Midnight Hammer’s success involved human-controlled AI, but as autonomy increases, so do existential risks:
Goal Misalignment: AI could prioritise mission objectives over ethical considerations, such as avoiding civilian casualties.
Exploiting Vulnerabilities: Adversaries actively probe for weaknesses in AI systems, seeking ways to hack, hijack, and subvert them for their own purposes.
Accountability Gaps: If an AI-driven strike causes unintended collateral damage, legal and moral responsibility becomes blurred between programmers, operators, and the algorithms themselves.
New International Laws for AI Warfare?
Existing international laws, such as the Hague Conventions and International Humanitarian Law, were not designed for the age of autonomous algorithms. There’s arguably a need for new frameworks that address the use of AI in warfare, including:
Fail-Safes: Mandatory ‘kill switches’ and human override protocols for autonomous AI systems (see the sketch after this list).
International Frameworks: Treaties banning AI-driven first strikes and rogue autonomy.
Ethical AI Development: Ensuring values-aligned systems that prioritise global stability over short-term tactical advantage.
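What might a mandatory fail-safe look like in software? The sketch below is a deliberately simplified, hypothetical Python gate: the AI may only recommend an action, execution requires timely human approval, and an engaged kill switch denies everything. All class names, parameters, and thresholds here are assumptions for illustration, not a real protocol.

```python
import time

# Hypothetical human-override gate with a kill switch. The AI system can
# recommend an action, but execution requires explicit, recent human
# approval, and an engaged kill switch denies all actions unconditionally.

class OverrideGate:
    def __init__(self, approval_window_s: float = 30.0):
        self.approval_window_s = approval_window_s  # illustrative window
        self.kill_switch_engaged = False

    def engage_kill_switch(self) -> None:
        # Hard stop: once set, no action can pass the gate.
        self.kill_switch_engaged = True

    def authorise(self, recommended_at: float, human_approved: bool) -> bool:
        if self.kill_switch_engaged:
            return False   # fail-safe default: always deny
        if not human_approved:
            return False   # no human sign-off, no action
        if time.time() - recommended_at > self.approval_window_s:
            return False   # stale recommendation: must be re-evaluated
        return True

gate = OverrideGate()
now = time.time()
print(gate.authorise(recommended_at=now, human_approved=True))   # True
gate.engage_kill_switch()
print(gate.authorise(recommended_at=now, human_approved=True))   # False
```

The key design choice is that the gate fails closed: a missing approval, a stale recommendation, or an engaged kill switch all result in denial rather than action, which is what a mandatory fail-safe regime would demand.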
The challenge with any law governing the use of AI in warfare is enforcement. While there is always the risk that a rogue state (or a ‘Spectre-type Bond villain’) will do something evil with AI, at least in that scenario the adversary is known.
The real uncharted territory is the emergence of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) that could go rogue, acting beyond any state’s control, with no one having any idea how to stop it.