Who will save the world from a US-China AI arms race?
2026 has begun with a worsening trust deficit, as geopolitical rivalry between the United States and China ruptures the international system. Much of this mistrust stems from an escalating technology race.
At centre stage is artificial intelligence – the foundational technology for virtually all industries, from hyper-scaled computer networks and data centres to self-learning “cognitive” machines and advanced semiconductor production.
The US and China now face a prisoner’s dilemma in military AI: both would be safer with restraint, yet each accelerates development to avoid falling victim to the other. Much of the fuel for this race flows directly from Silicon Valley, where OpenAI, Anthropic, Google DeepMind and other tech behemoths compete with each other at breakneck pace, flooding the market with powerful new AI systems that are largely unrestrained and unregulated.
All of this AI is “dual-use” – commercial technology that can be repurposed for military applications – making it subject to the geopolitical machinations of both AI superpowers.
We must now confront a series of AI-related developments with direct bearing on global stability and peace. Four inconvenient truths demand our attention.
First, responsible nations and non-state actors are failing to address the scale of the threat posed by unrestrained AI competition, in both commercial and geopolitical contexts.
Second, the world’s AI superpowers have elected to pursue self-interested, techno-nationalist priorities rather than cooperate on AI safety for the greater good.
Third, the task of building functional international AI safety frameworks now falls upon a group of AI “middle powers”, including countries in Europe, Australia, Canada and parts of Asia.
And fourth, these middle powers must work closely with the very Silicon Valley entities that dominate their national technology stacks – despite a strong desire to achieve AI sovereignty.
AI-related threats
An unrestrained AI arms race is increasingly likely not only to transform how wars are fought, but also to trigger or enlarge armed conflicts – even a world war – through faulty or compromised AI.
Both the US and China are ramping up production of autonomous weapons systems, including drone swarms and land-based combat robots. These “thinking machines” are purpose-built for complex, independent decision-making, including the decision to kill.
AI has also enabled the weaponisation of biotechnology and the development of “zero-day” cyberweapons: instruments of mass disruption capable of crippling the world’s digital infrastructure.
Looming above all of this is the existential threat of AI itself. As systems evolve towards artificial general intelligence – and potentially superintelligence – they pose a risk to humanity’s very survival.
As far back as 2014, renowned astrophysicist Stephen Hawking warned that uncontrolled AI could spell the end of the human race. The following year, he signed an open letter alongside hundreds of tech luminaries, including Bill Gates and Elon Musk, calling for greater focus on AI safety.
At Davos 2026, Anthropic CEO Dario Amodei echoed those concerns, sounding the alarm on AI’s rapid development and the dangers it poses for global security.
In a perfect world, primary responsibility for mitigating an AI arms race would fall squarely on the US and China. Last year, these two AI hegemons collectively accounted for more than 70 per cent of global AI investment, 61 per cent of AI talent and 80 per cent of breakthrough research.
Instead, both have pursued their own techno-nationalist agendas. The US’ AI Action Plan pays lip service to the dangers of unchecked AI while prioritising the dominance of the American tech stack worldwide. Washington’s tech diplomacy aims to pre-empt and crowd out China’s expanding global footprint through “deal-making” and strong-arm tactics.
China’s Global AI Governance Action Plan similarly invokes AI safety while advancing Beijing’s techno-nationalist agenda. Unlike Washington, which is sceptical of multilateral institutions, Beijing seeks to leverage bodies such as the United Nations to promote Chinese AI standards, open-source infrastructure and linkages to Chinese companies. Make no mistake: like Washington’s, Beijing’s geoeconomic interests override any save-the-planet commitment to AI safety.
The middle path
Responsibility for global AI safety has now shifted to our next best hope: a reluctant coalition of middle-tier countries. Europe’s leading tech players – the United Kingdom, Netherlands, Germany, France, Italy and the Nordic countries – form the core, with Australia, Canada and Japan also playing important roles. India has potential, though its influence remains to be seen.
Most of these nations are deeply reliant on American technology companies to build and operate their critical infrastructure, yet they must mobilise public resources, international organisations, academia and civil society around AI transparency protocols, shared evaluation systems and common rules.
Some of this groundwork has already been laid. In 2024, the Council of Europe’s Framework Convention on AI became the first legally binding international instrument in this domain, with early signatories including Canada, Japan and the UK. In 2023, Japan led the G7 Hiroshima AI Process, producing voluntary safety principles and a code of conduct for advanced AI systems. That same year, the Bletchley Declaration aligned the US, China, the UK, Australia and Canada on frontier-AI risk mitigation.
A useful governance model is CERN – the Switzerland-based international organisation that operates the Large Hadron Collider near Geneva. Debates on AI safety increasingly invoke the “CERN Model” as shorthand for a treaty-backed, multinational consortium that pools funding, shares infrastructure and establishes common oversight of high-risk technologies.
For middle powers, mitigating the risks of a US-China AI arms race comes down to two imperatives.
First, they must build new, functional institutions, led by qualified, well-compensated humans with deep knowledge of AI and frontier technologies. These bodies must adapt to the 21st-century landscape the same way governments in the 20th century built specialised agencies to manage nuclear energy, aerospace and pharmaceutical sciences.
In 2023, the US established the AI Safety Institute – then scrapped it in 2025, replacing it with the Center for AI Standards and Innovation, reflecting a clear shift away from safety towards “winning” the AI race against China.
Second, middle powers must engage directly with Silicon Valley’s technology giants – the true ground zero of the geopolitical AI arms race. Tech CEOs including Anthropic’s Amodei, OpenAI’s Sam Altman and Google DeepMind’s Demis Hassabis have all called for government leadership on AI safety regulation.
Middle powers should also seek channels – academic exchanges, non-governmental organisations and the digital commons – to encourage AI safety cooperation with both China and the US. The actions of these two AI superpowers will continue to have a disproportionate impact on AI risks, so middle powers must not give up on efforts to pull them in.
Long-standing Western alliances and deep mistrust of Beijing, however, dictate alignment with Silicon Valley first. Even so, middle powers – like countries diversifying their free trade agreements – may be able to sign on to as many AI safety groupings as possible.
Not everyone agrees. Alex Karp, CEO of Palantir Technologies – valued at US$330 billion – rejects the middle powers framework. Palantir’s business model, and that of others, is built to exploit US-China rivalry. For Karp, AI safety simply means America and its allies prevailing in the innovation race.
The AI safety window for action is closing fast. The world’s middle powers must move, and they must move now.