Sovereign AI and the Geopolitics of Digital Fragmentation: How Middle Powers Navigate Between Washington and Beijing in the Most Important Technological Race of the Century


Sovereign AI has ceased to be merely a technological issue. It has become a matter of sovereignty, national security, and positioning within the global order. On March 1, 2026, Iranian drones directly struck two Amazon data centers in the United Arab Emirates and damaged a third in Bahrain — the first confirmed military attack on a hyperscale cloud provider in history. Overnight, what had been an abstract debate about digital autonomy acquired an entirely concrete dimension: AI infrastructure can be destroyed, and who controls it matters more than ever.

On November 18, 2025, at the European Summit for Digital Sovereignty in Berlin, President Emmanuel Macron issued a warning with unusual clarity for European diplomacy: "Our goal is to design our own solutions, to preserve our sovereignty, and to refuse to become a vassal." Europe, Macron said, does not want to become the client of major entrepreneurs or of major solutions supplied either by the United States or by China.

Four months after that summit, following the Iranian attack on cloud infrastructure in the Gulf, Macron’s warning no longer sounds rhetorical. It sounds prophetic.

Middle powers that fail to secure influence over the development, deployment, and governance of artificial intelligence will likely cede control over their economies, societies, political systems, and positions in the global economy. This is not a long-term prediction. It is a description of a process already underway.


Absolute Dominance and Its Paradox

The global balance of power in AI is brutally concentrated. The United States and China dominate the global race by an overwhelming margin — with the US accounting for approximately 75% of global AI supercomputing performance, compared with 15% for China. Traditional technological powers such as Germany, Japan, and France have been pushed into a subordinate role, in a dramatic redistribution of technological influence.

The fundamental paradox confronting all other states today is that global dependencies on American and Chinese technology are unavoidable — yet greater sovereignty over AI deployment will allow smaller countries to develop their own technological pathways, capable of prioritizing the needs of their populations. In other words, full independence is an illusion, but total dependence is an abdication. Between these two extremes, real sovereignty is either built — or it is not.

Sovereign AI capabilities are becoming as fundamental to national power as military strength or economic policy. This equivalence — sovereign AI alongside the military and the economy as pillars of state power — represents in itself a major conceptual shift in international relations theory.

Four Strategies, Three Bets

Chatham House identifies four pragmatic paths that middle powers can follow in order to secure real influence over the future of AI. In practice, the three cases that are already defining the global landscape illustrate three of these strategies with remarkable clarity: alignment with one of the AI superpowers in exchange for access and protection; specialization in a specific segment of the global AI supply chain; and alignment combined with risk hedging, through the simultaneous development of sovereign capabilities.

Each strategy entails real trade-offs. None guarantees full sovereignty. But the difference between deliberately choosing one of them and choosing nothing at all is the difference between being an actor and being a playing field.

Alignment: the United Arab Emirates and the Stargate bet. Abu Dhabi chose the American camp with deliberate clarity. G42, the semi-governmental Emirati technology company, agreed to sever ties with Huawei under pressure from Washington, amid US national security concerns, and divested all of its investments in China. Abu Dhabi subsequently made the strategic decision to place its full bet on American technology in order to fulfill its ambitions in AI.

As a result, Stargate UAE — a 1-gigawatt compute cluster built by G42 and operated in partnership with OpenAI, Oracle, Nvidia, Cisco, and SoftBank — is set to deliver its first 200 operational megawatts in 2026, with the potential to provide AI infrastructure and compute capacity within a 2,000-mile radius, reaching up to half of the world’s population. The UAE-US campus that will host Stargate UAE will span 10 square miles and provide 5 gigawatts of total capacity — the largest AI infrastructure project outside the United States.

The cost of that alignment is visible and deliberately assumed. The United Arab Emirates has traded part of its strategic autonomy in exchange for access and protection — a form of calculated sovereignty, not surrendered sovereignty.

Specialization: India and the bet on human talent. India has launched its own sovereign language model and positioned itself as a voice of the Global South in AI governance. New Delhi is betting on the hardest asset in the AI chain to replicate: specialized human capital. It is a long-term strategy, with uncertain short-term results, but with genuine structural potential.

Alignment with hedging: Saudi Arabia and HUMAIN. Riyadh has chosen a more complex path than Abu Dhabi. HUMAIN — a company launched by the Public Investment Fund — has signed partnerships with Nvidia and with Elon Musk’s xAI. Saudi Arabia is building capabilities of its own, but remains heavily dependent on American infrastructure. Riyadh’s stated objective is to become the world’s third global AI player after the United States and China — an ambition that, if realized, would transform the balance of power in the global AI industry more fundamentally than any geopolitical agreement of the past twenty years.

The First Military Attack on a Hyperscale Data Center: What It Changes

On March 1, 2026, Iranian drones directly struck two Amazon Web Services data centers in the United Arab Emirates, knocking out two of the three availability zones in the ME-CENTRAL-1 region and triggering disruptions across dozens of cloud services. A third data center, in Bahrain, sustained damage as a result of a nearby explosion. AWS confirmed that the strikes caused structural damage, interrupted power supply, and, in some cases, triggered fire suppression systems, producing additional damage. The Uptime Institute described the incident as the first confirmed military attack on a hyperscale cloud provider in the history of the industry.

The Islamic Revolutionary Guard Corps stated that it had specifically targeted the facility in Bahrain, citing the use of commercial cloud infrastructure in support of enemy military operations. This reality of dual use — commercial infrastructure supporting, directly or indirectly, military operations — raises a question to which international humanitarian law still has no clear answer: can data centers be considered legitimate military targets?

The disruptions affected the transport and delivery platform Careem, the payments companies Alaan and Hubpay, the banks Abu Dhabi Commercial Bank and Emirates NBD, the data management company Snowflake, and numerous other enterprise clients in the region. AWS warned customers that "the broader operating environment in the Middle East remains unpredictable" and recommended the immediate migration of data to alternative regions.

What the Crisis Exposed: Sovereignty as Physical Vulnerability

The Iranian attack on cloud infrastructure in the Gulf brought to the surface a structural contradiction that sovereign AI strategies had systematically overlooked: the geographic concentration of AI infrastructure, even when legally „sovereign,” creates physical vulnerabilities that no data policy can fully mitigate.

Sam Winter-Levy, a fellow at the Carnegie Endowment for International Peace, warned that such physical attacks will become increasingly frequent as AI grows in significance. „Suddenly, protecting data centers becomes similar to protecting high-security government offices,” Winter-Levy said.

The international legal architecture is only beginning to process the implications. Every multinational company with data in the Gulf is now conducting urgent assessments of geographic risk exposure. Every insurance underwriter is reassessing war risk for technology assets.

Doug Madory, Director of Internet Analysis at Kentik, summarized the systemic risk: seventeen submarine cables cross the Red Sea, carrying the majority of data traffic between Europe, Asia, and Africa. With the Strait of Hormuz blocked and persistent Houthi threats in the Red Sea, both critical data chokepoints are simultaneously located in zones of active conflict. „A simultaneous blockage of both chokepoints would constitute a global disruption event. I do not know of that ever having happened,” Madory said.

Partial Sovereignty Is the Only Possible Sovereignty

The real challenge for middle powers is not to replicate the „full-stack” strategies of the superpowers, but to build resilient and sustainable positions within an interdependent global AI order. By recognizing that AI sovereignty is multi-layered, middle powers can move beyond the false dichotomy of winning or losing and focus instead on strategic positioning.

The Iranian attack of March 1 brutally simplified this equation. This is no longer a debate about data architectures or governance protocols. It is a debate about where one physically places the infrastructure that underpins the economy, security, and society — and about what happens when someone decides that infrastructure is a legitimate target.

States that do not have a sovereign AI strategy in 2026 are not losing an abstract technological race. They are losing the ability to control their economy, protect their society, and project their interests in an international order in which data, chips, and data centers have become the new oil — with one essential difference from oil: they can be destroyed by a drone.


This analysis was prepared on the basis of the Chatham House report „The Case for Sovereign AI Strategies” (February 16, 2026), as well as reporting by France 24, AFP, the official Élysée website, Fortune, Reuters, Tom’s Hardware, CNBC, The Register, Rest of World, TechPolicy.Press, Data Center Dynamics, Epoch AI, the Carnegie Endowment for International Peace, Kentik, G42, and OpenAI, together with public statements by President Emmanuel Macron, Sam Winter-Levy (Carnegie Endowment), and Doug Madory (Kentik).
