The swift and staggering growth of artificial intelligence has raised concerns worldwide, but perhaps none as pronounced as its spread into the military domain.
For pacifists and peaceniks, as well as the millions suffering in conflicts from Palestine to Ukraine, what is even more worrying is the kind of attention governments around the world are paying to AI.
At a conservative estimate, at least 20 countries have established military AI programmes.
The Pentagon alone is currently running more than 800 AI-related projects, ranging from autonomous vehicles to intelligence analysis and decision support.
China, too, is investing heavily in what it calls "intelligentised" warfare.
Hence, the conversation around artificial intelligence and warfare is no longer theoretical.
As AI becomes embedded deeper into military planning and operations, it is changing how wars are fought and reshaping the fundamental dynamics of conflict, such as escalation and restraint.
Already in use
When artificial intelligence enters the conversation about warfare, the first images are almost cinematic: autonomous drones striking targets, cyber units launching attacks, machines making life-and-death decisions without human involvement.
This is not the future. AI is no longer an experiment at the edges. It is already shaping how militaries plan, fight, and think about power.
The uses are wide and growing. AI surveillance systems can scan thousands of feeds at once and flag threats faster than any human team.
Autonomous vehicles are being trained to find targets with little direct oversight. AI-powered integrated air defence systems are capable of identifying and neutralising incoming aerial threats with a speed and precision that no human-operated system can match.
Both cyberattacks and cyber defence strategies increasingly rely on AI to outpace and outmanoeuvre each other in a constantly shifting digital battlefield.
Even actual battlefield decisions are increasingly guided by algorithms that can process information faster than traditional chains of command.
Some of these developments are already sparking global alarm, including Israel’s deployment of AI systems like Gospel and Lavender in its ongoing war on Gaza. These platforms were used to generate targeting recommendations for air strikes, often with minimal human oversight.
Available evidence points to their role in the high civilian death toll and the erosion of the distinction between combatants and civilians.
In these instances, the brutality witnessed is not merely a consequence of AI deployment.
The core issue lies in political decision-making: how leaders choose to employ these tools, the level of civilian risk they are willing to tolerate, and the extent to which they tether AI outputs to human judgement.
Here, AI serves as a mirror of those in power, rationalising political choices that push aside moral and legal responsibility under the guise of machine-driven efficiency.
However, a deeper shift is underway. AI is beginning to reshape the dynamics of conflict itself, such as initiation and escalation, in ways we are only starting to understand.
AI and conflict
International relations scholarship has long emphasised elements such as uncertainty, signalling, and credibility in shaping the dynamics of conflict. AI is reshaping each of these elements in ways that demand serious attention.
First, AI’s ability to make rapid decisions and take autonomous action may lower the threshold for engaging in conflict. When machines can assess threats and launch responses within milliseconds, the political and operational costs that once made leaders think twice about initiating hostilities are diminished.
This shift increases the risk of crises spiralling out of control before human decision-makers can intervene. Traditional barriers to escalation, such as deliberation time, the visible buildup of forces, and the friction of human decision-making, are breaking down.
Second, the integration of AI undermines traditional deterrence models. Deterrence relies on clear communication and predictable retaliatory frameworks.
Yet AI systems can introduce new ambiguities by creating opaque decision-making processes that make it harder for adversaries to read intentions or assess the credibility of threats.
If an AI-enabled military posture appears capable of launching rapid preemptive strikes, rivals may feel pressure to act even faster, which would fuel instability.
Third, AI increases the risk of novel forms of escalation.
Machine-to-machine interactions, such as autonomous drones reacting to autonomous missile defences, could produce feedback loops that humans struggle to anticipate or interrupt.
Escalation could become more difficult to manage precisely because the speed and complexity of AI systems outstrip human organisational capacities.
Taken together, these trends signal a shift, though scholars remain divided over where exactly this transformation is leading.
Nonetheless, the danger lies in the quiet transformation of the very structures that have historically governed conflict, such as the mechanisms of restraint, the processes of signalling, and the pathways for de-escalation.
Strategic stability, once grounded in assumptions about human judgement and deliberation, may be upended by a new reality.
When deliberation becomes a liability in war
The involvement of AI in warfare is not simply speeding up familiar dynamics. It risks forcing a world built on human timing, human ethics, and human risk calculations to contend with a battlespace increasingly governed by the logic of machines.
At the heart of this shift is the strategic compression of decision time.
With AI-driven systems analysing battlefield data in real time, the window between detection and response is collapsing.
In such a world, human deliberation, once considered the essence of prudent statecraft, becomes a liability. The faster a military can process and act, the greater its tactical advantage.
This implies a brutal logic. In a race where reaction time is decisive, militaries will feel compelled to automate more and deliberate less.
Traditional strategic cultures, which are built around cycles of reflection, communication, and calibrated response, may become misaligned with battlefield realities engineered by machines.
In this time regime, strategy risks becoming a residual artefact of automated interactions rather than a matter of conscious choice.
At the same time, AI will enable the weaponisation of autonomy gaps.
Wherever ethical or legal frameworks constrain how autonomous systems can act, adversaries will seek to exploit those self-imposed restrictions. As a result, wars will also be shaped by which side is willing to accept fewer constraints on machine agency.
Moreover, the level of autonomy allowed in military AI systems will become a form of coercive signalling.
By showing that their systems can operate without human-in-the-loop safeguards, actors can signal a greater willingness to escalate violence and pressure opponents to either match the risk or hold back.
Even before any conflict begins, demonstrating higher levels of autonomy could intensify this coercive pressure.
The integration of AI into military affairs is not just about new technologies; it is about the changing character of war itself. As machines assume a larger role in decisions of life and death, the risks of miscalculation, uncontrolled escalation, and moral erosion grow sharper.
The future of conflict will be shaped not only by who builds the most advanced systems, but by who chooses how, and how far, to unleash them.
(This is the second of a four-part series on how AI is changing the world. Next: The economic shift)