BIZTECH
The limits of global AI control and what it means for the world order
Nations are grappling to adapt to an environment shaped by this fast-growing technology. It’s now a race against time.
AI promises tangible gains in military effectiveness, and no serious power is willing to slow down. / AP

Last month, China unveiled plans for a global body to oversee artificial intelligence, premised on equal access to the technology, deeper cross-border collaboration, and steps to ease bottlenecks such as hardware restrictions and limits on talent exchange.

Six months earlier, 58 countries met in Paris to endorse a declaration on “inclusive and sustainable AI,” which the United States and the United Kingdom declined to sign. 

The approaches differ, but the pattern is the same. The race to control AI is moving far faster than any attempt to regulate it.

Artificial intelligence is a force multiplier for economies, security systems and geopolitical leverage. 

In the past decade, it has moved from tech labs into military targeting suites, government procurement, and core infrastructure. The question of whether it can be meaningfully regulated has moved from seminar rooms to cabinet tables.

When the same model can assist in passing a medical licensing exam one day and generate realistic deepfake videos the next, it becomes clear that today’s oversight tools do not match the speed, versatility, or reach of modern AI.

Existing regulatory frameworks were built for slow advances and narrowly defined applications. They were not designed for systems that can produce human-level text, analyse satellite imagery in seconds, or coordinate fleets of autonomous machines.

This gap is driving calls for new governance mechanisms that can respond to AI's unique scale, pace, and dual-use potential.

The international dimension of the governance debate is driven by how new technologies change the risk calculus for conflict and instability. Some advances can reduce those risks by improving verification, communication, or deterrence. 

Others, especially those offering a clear battlefield edge, tend to raise military spending and intensify arms races. The conventional wisdom is that AI falls into the latter group.

The success of AI in surveillance, targeting, and autonomous systems reinforces the perception that falling behind would mean a serious strategic disadvantage. 

In theory, arms-control frameworks or lighter cooperative arrangements could help all sides by setting limits that avoid destabilising competition. In practice, these deals are rare. 

Coordination failures, mutual distrust, and AI’s dual-use nature make the kind of internationally shared restraint seen in past arms agreements far harder to achieve today.

Within national borders, regulation is possible for states with the institutional and technical muscle to keep up. At the global level, the forces driving AI adoption make broad, enforceable rules almost impossible.


Domestic front

Domestically, governments hold the levers. Legislatures can pass statutes, regulators can issue rules, and agencies can demand compliance from developers operating in their jurisdiction. 

This is the arena where cultural, legal and economic context can be reflected in governance. A capable state can tailor rules to protect privacy, prevent algorithmic bias, and guard critical systems without suffocating innovation.

That ability, however, is unevenly distributed. Regulation is a capacity game. 

Effective oversight demands advanced technical infrastructure, regulators who understand the underlying systems, and institutions agile enough to adjust rules as technology changes. 

Without these, regulation risks becoming symbolic rather than effective.

Speed is the first constraint. AI systems can change significantly in a matter of months, while lawmaking usually moves in years. 

Rules locked into statute risk becoming outdated before they are applied. Countries that keep pace will be those able to use adaptive mechanisms such as rolling standards, regulatory sandboxes, and streamlined amendment processes to update oversight without legislative paralysis. 

States that fix rigid compliance rules in place risk failing at meaningful regulation.

The second constraint is expertise. In many countries, regulators, judges, and civil servants lack the technical knowledge to assess model architectures, training data risks, or system vulnerabilities. 

Without this capability, governments draft rules they cannot enforce. 

The weakness is most acute when AI underpins critical infrastructure, healthcare, or financial systems, where oversight failures can have immediate, high-impact consequences. 

Without skilled reviewers, even the best-written AI law remains little more than words on paper.

The third constraint is dependence. States that rely on foreign-owned AI platforms, cloud services, or semiconductor supply chains cannot fully enforce their own standards. 

If a country’s critical AI systems are trained and hosted abroad, regulators lose the power to audit or modify them. This is not an abstract sovereignty question.

Without independent capability, national regulators operate at the mercy of external suppliers. Europe’s experience with foreign cloud dominance offers a cautionary precedent.

Even for capable states, finding the right balance is difficult. Overly strict rules drive investment and talent elsewhere, as developers gravitate to jurisdictions where experimentation is easier. 

Overly lax rules erode privacy, enable misuse and weaken national security. Striking the right balance requires institutional strength, political will and continuous calibration. Some will manage it; many will not.

International governance

The international picture is harsher. The primary reason is strategic. AI promises tangible gains in military effectiveness, and no serious power is willing to slow down.

In an arms race dynamic, restraint is politically toxic. The fear of falling behind, and the strategic disadvantage it could confer, outweighs the perceived benefits of collective limits.

The money confirms the momentum. The global military AI market, worth around $10 billion in 2024, is projected to grow at more than 13 per cent annually well into the next decade. 

In the US alone, defence AI spending is estimated at about $2 billion a year, with billions more going into autonomous systems and decision-support platforms. These budgets do not signal preparation for a pause. They are blueprints for acceleration.
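To put the projection in perspective, here is a minimal back-of-the-envelope sketch of what the cited figures imply, assuming the roughly 13 per cent growth rate compounds annually from the $10 billion 2024 base (the exact rate and end year are illustrative assumptions, not figures from a specific forecast):

# Illustrative arithmetic only, based on the figures cited above.
# Assumptions: $10 billion base in 2024, 13 per cent compound annual growth.
base_bn = 10.0   # estimated 2024 market size, billions of USD
growth = 0.13    # assumed compound annual growth rate

for year in range(2024, 2035):
    value = base_bn * (1 + growth) ** (year - 2024)
    print(f"{year}: ~${value:.1f} bn")

# Under these assumptions the market more than triples in a decade,
# reaching roughly $34 billion by 2034.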

Even if governments wanted to slow the race, the verification problem would remain insurmountable. 

One reason why nuclear arms treaties worked was that warheads and delivery systems were large, hard to hide and easy to count. 

AI models are not. They can be trained in secret, stored on consumer hardware, embedded in civilian applications and copied globally. 

An inspection regime for AI would require an intrusive level of digital surveillance that no major power would accept. In such an environment, the incentive to cheat is overwhelming.

Dual-use technology blurs the picture further. The same computer vision system used for medical imaging can be adapted for drone targeting; the same large language model that writes customer service scripts can generate convincing propaganda or misinformation. 

That means a treaty aimed at ‘military AI’ would inevitably touch civilian technology, something most countries are unwilling to submit to foreign inspection.

The state of the international order compounds the problem. Multilateral diplomacy is struggling even on less contentious issues such as climate targets, WTO reform, and genocide prevention. 

Trust among major powers is low and declining. The UN’s discussions on “killer robots” have dragged on for years without resolution, despite broad agreement among smaller states. 

The powers with the most advanced AI capabilities, such as the US and China, see more risk than benefit in binding themselves to limits that rivals may ignore.

The likely result is a fragmented international environment. States will pursue AI-enabled weapons and intelligence systems without agreed-upon guardrails.

A split strategy

Against this backdrop, the most practical course is a split strategy. At home, governments should build governance capacity by training regulators, developing auditing tools, and securing AI supply chains. 

Independence in compute, model development and hosting infrastructure is essential for any serious regulator.

Internationally, the goal should be narrow, enforceable agreements in high-risk areas such as prohibiting AI in nuclear command-and-control systems, banning certain classes of lethal autonomous weapons, or requiring notification for AI incidents affecting critical infrastructure.

Such agreements will be limited in scope and fragile in enforcement, but they can at least slow the most destabilising uses. 

Track II diplomacy, using informal channels between technical experts and military planners, can help build the trust and understanding needed for later formal agreements. 

Export control coalitions, such as the one already restricting high-end AI chips to certain destinations, can also play a role in managing proliferation.

But policymakers should be clear-eyed that the dominant dynamic internationally will remain competitive. 

AI will continue to move fast, driven by both commercial opportunity and national security imperatives. Regulation will lag, and the gaps will be filled not by multilateral treaties but by unilateral advantage-seeking. 

The AI era will be governed unevenly. Domestically, a few capable states will develop systems that balance innovation with safety and sovereignty. 

Internationally, the race will continue largely ungoverned, with occasional narrow agreements in the most dangerous niches.

SOURCE: TRT World