Sovereign AI: $608 billion, four playbooks, and the Chinese state stack

Disclosed government AI funding across 14 countries totals $608 billion as of April 2026, exceeding the $490 billion in disclosed private AI funding. The dollar figure is the easy part. The hard part is that the four sovereign-AI playbooks producing it look so different that calling them all "sovereign AI" obscures more than it reveals, and the largest single program (China's) is also the one Western press covers least carefully.

The Nextomoro Atlas tracks 21 countries with material disclosed AI activity. Fourteen of them have a publicly committed government AI investment line. Adding those lines together yields $608 billion. By contrast, the disclosed private capital deployed across all 209 labs in the dataset sits at $490 billion. For the first time, the public sector globally has committed more capital to AI than the venture and strategic-investor base.

Almost half the government total ($280 billion) is the United States, where the figure rolls up the CHIPS and Science Act, NAIRR, NSF AI Institutes, DARPA, ARPA-H, and IARPA into a single line that funds 107 distinct labs. China is another $150 billion. That leaves $178 billion across the remaining 12 countries that have published a meaningful AI investment commitment. The $178 billion lives in patterns no one would have predicted five years ago.
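The headline arithmetic can be checked in a few lines. Figures are as stated above, with the 12 remaining countries rolled into a single line, as the text does:

```python
# Disclosed government AI commitments, USD billions, as given in the text.
gov = {
    "United States": 280,
    "China": 150,
    "remaining 12 countries": 178,
}
private_total = 490  # disclosed private AI funding, USD billions

gov_total = sum(gov.values())
print(gov_total)                  # 608
print(gov_total > private_total)  # True: public commitment now exceeds private
print(round(100 * gov["United States"] / gov_total))  # 46 ("almost half")
```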

The four-playbook structure underneath the $608 billion matters. So does an asymmetry inside it: the Chinese AI state is the largest, fastest-moving, and most architecturally coherent of the sovereign-AI programs in the dataset, and it is the one Western analysis treats most superficially. This essay walks through the four playbooks, then takes a deep look at the architecture of the Chinese AI state across capital, compute, and regulation. For the funding-mechanics view of how individual Chinese frontier labs raised their rounds in 2024-2026 (the BAT-led era, the state-capital arrival, the Hong Kong listing window), see the companion essay How China funds its AI labs. This piece treats China at the state-architecture level rather than the venture-cycle level.

The four playbooks

Sovereign-AI programs around the world cluster into four distinct shapes. They differ in capital concentration, governance, exit strategy, and what kind of national capability they are trying to produce.

Playbook 1: petrostate national champion. One government-owned or government-anchored entity receives the bulk of national AI capital. Vertical integration is explicit: compute, models, applications, chips. Lab count is tiny relative to capital deployed. Operating partnerships with US technology companies are central to the strategy and are explicitly built into the cap-table architecture. The two clean examples are the United Arab Emirates (G42, MBZUAI, TII, Core42, Inception, all state-anchored, all inside the Mubadala/MGX sovereign-wealth perimeter) and Saudi Arabia (HUMAIN as the Public Investment Fund's national AI champion, with KAUST AI Initiative as the academic anchor).

Playbook 2: state-architectural integration. The state coordinates capital, compute, regulation, and procurement as a single integrated stack. The strategy treats AI as critical national infrastructure on the same plane as electricity grids, telecommunications networks, or rail. Multiple commercial labs operate within an architecture that the state defines through industrial-policy capital, technology procurement, and regulatory gatekeeping. The clean example is China. This playbook is the most consequential of the four and gets a deep treatment in its own section below.

Playbook 3: chaebol-coordinated industrial program. State capital coordinates parallel investment across major industrial conglomerates, supplemented by foundation-model funding rounds and a state-anchored research backbone. The corporate research labs of the conglomerates absorb most of the talent. A small number of independent insurgents get state subsidy support. South Korea's K-Sovereign AI initiative and Japan's GENIAC accelerator are the clean examples.

Playbook 4: public-private champion or distributed ecosystem. The remaining countries cluster into two related approaches. The public-private champion model rallies state funding plus private patronage behind one or two named national-champion companies (France with Mistral is the unambiguous case). The distributed-ecosystem model spreads capital across many labs through academic grants, agency programs, and private venture capital with no single national champion (the US, Canada, Germany, Switzerland). Both share a common feature: the state's role is convening capacity and selective subsidy, with the operational AI work happening primarily in the private or academic sector.

A fifth pattern shows up in smaller economies: the mission-oriented program focused on language sovereignty or sector-specific applications. India's IndiaAI Mission anchors three Indic-language-focused labs. Singapore's AI Singapore programme runs SEA-LION as the Southeast-Asian-language flagship. Vietnam's National AI Strategy points VinAI Research at the Vietnamese-language and consumer-data wedge.

The cleanest cases sit at the corners of these playbooks. Most countries occupy positions in the interior. Korea is principally chaebol-coordinated but borrows from the distributed model. France is a public-private champion that runs distributed elements. The US is principally distributed but the CHIPS Act and recent national-security AI programs introduce elements that look more state-architectural than the historical US model has been. The pattern is real even where the specific country fits awkwardly.

The MENA petrostate model

The most aggressive sovereign-AI play among the smaller national-champion programs is the MENA petrostate model.

MENA has 11 labs in the dataset and $151 billion in disclosed government AI funding, which works out to $13.7 billion of government funding per lab. North America's per-lab average is $2.5 billion. Saudi Arabia alone has 2 labs and $100 billion committed, which is $50 billion per lab. The United Arab Emirates has 5 labs and $50 billion committed, which is $10 billion per lab. These are oil-revenue capital allocations being routed through sovereign wealth funds into AI as a strategic-industry bet.
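The per-lab figures follow directly from the funding totals and lab counts just given; a quick sketch:

```python
# Disclosed government funding (USD billions) and lab counts from the text.
regions = {
    "MENA (all)":   (151, 11),
    "Saudi Arabia": (100, 2),
    "UAE":          (50, 5),
}
for name, (funding, labs) in regions.items():
    print(f"{name}: ${funding / labs:.1f}B of government funding per lab")
# MENA works out to $13.7B per lab, Saudi Arabia to $50.0B, the UAE to $10.0B
```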

The UAE was first and is furthest along. G42 was founded in 2018, anchored in Abu Dhabi, backed by Mubadala. In April 2024, Microsoft made a $1.5 billion strategic investment in G42 that did three things at once: it provided G42 with privileged access to Azure compute, it routed G42's geopolitical exposure through a US partner that satisfied CFIUS-style concerns, and it embedded a US technology-export framework into the UAE's AI build-out. MBZUAI, founded in 2019, became the world's first dedicated AI graduate university and now operates as the talent pipeline for the broader G42 system. The Technology Innovation Institute (TII) released the Falcon model family. Core42 and Inception sit underneath as application-layer subsidiaries. The five entities operate as a single coordinated capability stack. The state capital backing them runs through Mubadala, ADQ, and (most recently) MGX, the explicitly AI-mandated sovereign-wealth vehicle.

Saudi Arabia is following the same template at larger scale and with later timing. HUMAIN was launched in 2025 with explicit positioning as the Public Investment Fund's national AI champion, with Crown Prince Mohammed bin Salman serving as chairman. The mandate covers compute infrastructure, foundation-model development (the ALLaM Arabic LLM is the publicly visible flagship), application development for Vision 2030 priority sectors, and chip procurement at sovereign scale. KAUST AI Initiative anchors the academic side. Like the UAE, Saudi Arabia has anchored its AI play with US-technology partnerships: the Microsoft and Nvidia announcements during the May 2025 Riyadh summit included roughly $40 billion of combined commitment to the HUMAIN ecosystem.

Israel is the MENA outlier with the largest disclosed private AI funding base in the region ($15.8 billion across AI21, Decart, and Mobileye) and the smallest disclosed government commitment ($1 billion). Qatar runs a smaller research-and-applications focus through QCRI at Hamad Bin Khalifa University.

What this combination produces is consequential. The UAE and Saudi Arabia together hold $150 billion in disclosed government AI commitment across 7 labs. That capital base is structurally similar in scale to OpenAI plus Anthropic plus xAI's combined disclosed private funding. The MENA petrostates are operating frontier-AI capability with capital pools that match the largest US frontier labs, with national-security-grade governance, and with explicit US-partner integration. The model is replicable elsewhere only by countries with comparable sovereign-wealth depth.

The Chinese state-AI architecture

China is the largest single sovereign-AI program in the dataset and the one whose internal architecture deserves the most careful treatment. The headline figure of $150 billion in disclosed government AI commitment is misleading on its own. The actual state-AI capital base is the sum of three overlapping layers (industrial-policy funds, provincial and municipal capital, in-house SOE AI investment) that together dwarf the $150 billion headline figure. Underneath the capital sits a parallel compute-supply-chain build-out, designed from the start to be operable under the most aggressive plausible US export-control regime. Above the capital sits a regulatory architecture (CAC approval, foundation-model registration, MIIT industrial AI rules) that gates which models can be deployed and how. The three layers function as one coordinated state stack.

The capital architecture

China's state-AI capital runs through three nested vehicles.

The first and largest is the National IC Industry Investment Fund (国家集成电路产业投资基金), commonly called the Big Fund (大基金). The Big Fund is the state vehicle for semiconductor industrial policy and has run three phases. Phase 1, established in September 2014, raised approximately RMB 138.7 billion (roughly $21 billion at the time). Phase 2, established in October 2019, raised approximately RMB 204 billion (roughly $29 billion). Phase 3, established on May 24, 2024, raised approximately RMB 344 billion (approximately $47-48 billion). Cumulative across the three phases: approximately $97-98 billion in committed state capital. The Big Fund's mandate is semiconductor industry broadly (logic fabrication, memory, equipment, materials, advanced packaging). AI-specific accelerators are one priority category among several. Phase 3's announced focus tilts more heavily toward advanced packaging (HBM), EUV-substitution lithography research, and AI-specific silicon than Phases 1 and 2 did. The Big Fund is the structural reason Chinese frontier-LLM training can be funded with state capital that the disclosed-rounds dataset does not capture: most of the relevant capital flows into chip suppliers (SMIC, YMTC, CXMT, Cambricon) and from there into the cost structure of the labs that use them.
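The cumulative Big Fund arithmetic, reproduced from the phase figures above (the Phase 3 dollar value is taken as the midpoint of the $47-48 billion range):

```python
# Big Fund phases: (label, RMB billions, approx. USD billions at the time).
phases = [
    ("Phase 1, Sep 2014", 138.7, 21.0),
    ("Phase 2, Oct 2019", 204.0, 29.0),
    ("Phase 3, May 2024", 344.0, 47.5),  # midpoint of the text's $47-48B range
]
rmb_total = sum(rmb for _, rmb, _ in phases)
usd_total = sum(usd for _, _, usd in phases)
print(round(rmb_total, 1))  # 686.7 (RMB billions)
print(usd_total)            # 97.5 -> the text's "approximately $97-98 billion"
```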

The second layer is provincial and municipal AI capital. The disclosed-rounds dataset captures Shanghai State-owned Capital Investment Co. (Shanghai SOIC) leading two rounds in 2025-2026 (MiniMax in July 2025, StepFun's Series B+ at approximately $717 million in January 2026). Shanghai SOIC is one of at least four major municipal vehicles operating at the AI-frontier scale. Beijing announced the Beijing AI Industry Investment Fund (北京市人工智能产业投资基金) in May 2025 with a target of RMB 100 billion (approximately $14 billion) over ten years. Shenzhen Capital Group (深圳市创新投资集团), one of the longest-running municipal venture vehicles in China, has accelerated AI-specific allocations since 2023. The Anhui Provincial AI Industry Fund, announced in 2024, targets approximately RMB 10 billion. Hangzhou-affiliated funds support the Alibaba ecosystem. Suzhou Industrial Park venture vehicles serve the Yangtze Delta cluster. The aggregate municipal-and-provincial AI capital commitment, summed across these vehicles and the dozen smaller programs, sits in the multi-tens-of-billions-of-dollars range and is poorly tracked in any single source.

The third layer is state-owned-enterprise in-house AI investment. China's central SOEs (the SASAC-controlled corporations under direct national-government ownership) have all received explicit guidance to accelerate AI integration since 2023. The result is an "invisible" AI capital base that runs through ordinary R&D and capex line items rather than venture rounds. China Mobile (中国移动) operates the Jiutian (九天) foundation model series and runs annual AI-specific capex estimated in the multi-billion-dollar range; the company's overall 2024 capex was approximately RMB 165 billion, with AI compute and China Mobile Cloud absorbing a growing share. China Telecom (中国电信) operates the TeleChat (星辰) foundation model. China Unicom (中国联通) operates Yuanjing (元景). The four major state-owned banks (ICBC, CCB, BoC, ABC) deploy AI internally and partner with both Alibaba Qwen and DeepSeek for customer-facing applications. CETC (中国电科) is the state defense electronics conglomerate; its AI work is substantial and largely opaque. CEC (China Electronics Corporation) holds the state IT brief. State Grid runs smart-grid AI applications. Sinopec and China National Petroleum deploy industrial AI in refining and exploration. Conservative estimates put the aggregate central-SOE AI capex for 2024-2026 above $100 billion, which is more than ten times the disclosed independent-lab funding tracked in the Atlas, and roughly comparable to all three Big Fund phases combined. None of this appears as venture rounds.

The three layers are coordinated, not parallel. The Big Fund underwrites the chip suppliers. Provincial and municipal funds underwrite the commercial frontier labs. SOE capex absorbs the resulting models and compute into productive deployment across the broader Chinese economy. The total disclosed-or-estimable Chinese state-AI capital base for the 2024-2026 cycle sits comfortably in the $250-400 billion range, with the wide range reflecting the imprecision of editorial estimates on the SOE and provincial layers. The single $150 billion headline figure captures at most 60% of the actual scale, and probably much less.
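The $250-400 billion band can be rebuilt from per-layer bounds. The low/high figures for the provincial and SOE layers below are illustrative assumptions consistent with the qualitative descriptions above ("multi-tens-of-billions", "above $100 billion"), not disclosed numbers:

```python
# Illustrative low/high bounds per layer, USD billions.
layers = {
    "Big Fund, Phases 1-3":           (97, 98),    # disclosed, per the text
    "provincial and municipal funds":  (50, 100),  # assumed bounds
    "central-SOE AI capex 2024-2026":  (100, 200), # assumed bounds
}
low = sum(lo for lo, _ in layers.values())
high = sum(hi for _, hi in layers.values())
print(low, high)  # 247 398 -> roughly the $250-400B band cited in the text
```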

The compute architecture

China's state-AI capital architecture exists on top of a compute supply chain that has been deliberately remodelled to operate under hostile US export controls. The export-control regime has tightened in four major iterations since 2022, and the Chinese response has tracked each one.

October 7, 2022. The US Bureau of Industry and Security (BIS) implements the foundational chip restriction. Nvidia's H100 and A100 are barred from China at full performance. The immediate Chinese response is the legal-export workaround: Nvidia ships H800 and A800 variants designed specifically to fall under the licence threshold.

October 17, 2023. BIS expands the rule. The H800 and A800 paths are closed. Nvidia introduces the H20, a further-down-spec variant that remains legally exportable. Chinese hyperscalers (ByteDance, Alibaba, Tencent, Baidu) shift to the H20 as the workhorse imported chip.

December 2024. BIS adds high-bandwidth memory (HBM) to the licence regime for shipments to Huawei, closing the Korean and Micron supply paths that Huawei had been using to source the HBM3 needed for Ascend 910C. The Korean memory makers (Samsung, SK Hynix) are required to apply for licences for any Huawei-bound shipment.

April 2025. BIS adds the H20 to the licence requirement. The legal-export workaround is closed. Chinese frontier labs operating on imported Nvidia silicon face a choice between accepting reduced performance (the L40S, certain inference-optimised chips that remain exportable), running at smaller scale, or pivoting fully to domestic alternatives.

The domestic alternatives have been accelerating in parallel. Huawei Ascend is the dominant domestic line. The 910B, shipping since 2023, fabricates on SMIC's N+2 process (nominally 7nm-equivalent) and benchmarks as a rough peer to the Nvidia A100 80GB at training tasks. The 910C, shipping since 2025, uses a dual-die package and approaches H100 throughput at FP16 while trailing on memory bandwidth. The 910D was announced in Q1 2026 with limited public specifications. Cambricon (寒武纪) ships the Siyuan/MLU 290, 370, 590, and 690 lines and is publicly listed on Shanghai's STAR Market. Biren (壁仞) was founded by ex-AMD and ex-Tencent executives and ships the BR100 and BR104. Moore Threads (摩尔线程) was founded by the former head of Nvidia China and ships the MTT S4000 and S5000. Iluvatar CoreX (天数智芯) ships the Tiangai line. Enflame (燧原) ships the DTU. Hygon (海光信息) ships the DCU line, with an AMD-licensed lineage that has drawn US regulatory attention. Alibaba T-Head (平头哥) ships the Hanguang line. The aggregate domestic-Chinese AI accelerator output is now in the millions of units annually, dominated by Huawei and Cambricon, with the remainder distributed across the smaller specialist makers.

The architecture has two structural constraints. The first is fabrication: SMIC, the primary domestic foundry, lacks EUV lithography (ASML's monopoly product, export-restricted to China since 2019) and is therefore constrained to multi-patterning DUV processes for advanced nodes. SMIC's N+2 process is the leading-edge for Ascend production and is fundamentally limited in yield and density relative to TSMC's leading-edge nodes. The second is high-bandwidth memory: the December 2024 BIS rule closed the Korean HBM3 supply to Huawei, forcing a pivot to domestic HBM production by Changxin Memory Technologies (CXMT) and Wuhan Xinxin (XMC). Domestic Chinese HBM at HBM3-equivalent specifications is shipping at low volumes as of mid-2026 and is the single tightest constraint on Chinese frontier compute capacity.

The strategic logic is consistent: build a complete domestic stack (logic, memory, packaging, accelerator design, system integration) that can operate at frontier scale even if every external supply path is closed. The stack is not yet operationally equivalent to the US-Korean-Taiwanese-Dutch alternative on a per-watt or per-dollar basis. It is operationally adequate to keep Chinese frontier-LLM training going at a scale that produced DeepSeek V3, Moonshot's Series F-stage models, and the 2026 Alibaba Qwen and Tencent Hunyuan releases. That adequacy is the geopolitically consequential fact; the efficiency gap is the secondary one.

The regulatory architecture

The third layer of the Chinese AI state stack is the regulatory architecture. The state's role is not exclusively a capital-and-compute one. It is also a gatekeeping one that determines which models can be deployed publicly, on what terms, and with what content restrictions.

The foundational document is the Generative AI Services Provisional Measures (生成式人工智能服务管理暂行办法), published by the Cyberspace Administration of China (CAC) on August 15, 2023 and effective the same day. The Measures require: registration with the CAC before public deployment, content moderation aligned with "socialist core values," traceable training data sources, periodic security evaluation, and compliance with personal-data protection rules. Models are subject to licensing and approval at multiple layers (national CAC, provincial and municipal CAC offices, the Ministry of Industry and Information Technology where industrial deployment is involved). The approval process requires submitting model architecture details, training-data provenance, content-moderation pipeline specifications, security-evaluation results, and (for non-domestic-language models) a Chinese-language deployment configuration.

The CAC has published successive batches of approved generative-AI services since the Measures took effect. By mid-2024 the approved-services list had cleared 190 entries. By late 2025 the list reportedly exceeded 300. The approval lists include both the major frontier labs (DeepSeek, Moonshot, Z.AI, MiniMax, StepFun, Alibaba Qwen, Tencent Hunyuan, Baidu Ernie, ByteDance Doubao, iFlytek Spark) and a long tail of vertical and application-layer services. The approval is not a single gate but a continuing relationship; updated model versions require re-registration, and major capability upgrades require additional security evaluation.

Parallel to CAC approval, the MIIT industrial-AI rules govern AI deployment in critical-infrastructure sectors (energy, finance, transport, manufacturing, healthcare, telecommunications). MIIT operates an industrial AI integrated pilot zone framework that grants regulatory privilege to designated cities and industrial zones. The MIIT framework is what enables the rapid SOE adoption of foundation models described in the capital section: state-owned banks, telecom operators, and energy companies can deploy AI capability under MIIT supervision without the full cycle of CAC approval that consumer-facing services require.

The 2024-2025 update cycle has tightened deep-fake content rules, formalised the foundation-model registration regime as a recurring (rather than one-time) obligation, introduced specific provisions for AI-generated content labelling, and added security-review requirements for cross-border AI-service operation. The April 2026 round of CAC notices included guidance on integrating large language models with state-procurement systems and on the security evaluation of agent-style AI deployments. The regulatory architecture is in continuous evolution and the cost of compliance has grown materially since 2023.

The strategic effect of the regulatory layer is to make Chinese AI deployment selectively legible to the state in ways that no other major sovereign-AI program achieves. The CAC and MIIT know which models are deployed where, what data they were trained on, what capabilities they have, and what content controls are in place. This visibility is what enables the integrated planning across capital, compute, and procurement that distinguishes the Chinese model from the US distributed-ecosystem model. The Chinese state functions as the primary system integrator for the national AI deployment, with funding being only one of the levers it operates.

What ties the three layers together

The Big Fund underwrites the chip suppliers that make Ascend, Cambricon, and the domestic HBM makers possible. The provincial and municipal funds underwrite the commercial frontier labs that deploy onto those chips. SOE capex absorbs the resulting models and compute into productive deployment. CAC approval gates which models can be deployed where. MIIT supervision enables the SOE absorption to happen at scale. The Generative AI Measures keep the entire stack legible to central authority.

This is a coherent national capability architecture in a way that the US distributed-ecosystem model is not, the chaebol-coordinated model is only partially, and the petrostate national-champion model is at smaller scale. The Chinese state-AI architecture is the single most ambitious sovereign-AI program in the dataset. It is also the most opaque from outside, the most rapidly evolving, and the program whose long-run capability ceiling is hardest to estimate. The combination of capital scale, compute substitution, and regulatory integration produces national AI capability at a system level that is meaningfully different from what any other country in the dataset is currently building.

The chaebol-coordinated model

South Korea's sovereign-AI strategy cannot be described without the chaebol structure.

The dataset records 8 Korean labs: LG AI Research, Naver Cloud, SK Telecom, Samsung Research SAIT, Kakao Brain, Upstage, KAIST, and ETRI. Four of those are corporate research labs inside the major chaebol groups (LG, Naver, SK, Samsung). Total disclosed government AI funding for Korea sits at approximately $4 billion, which is small by MENA standards but understates the actual sovereign-AI deployment, because most of that deployment runs as in-house chaebol R&D rather than through state programs.

The Korean strategy is the K-Sovereign AI initiative coordinated by the Ministry of Science and ICT. The headline programme is the Proprietary AI Foundation Model Project, which committed KRW 530 billion (approximately $400 million) through 2027 with K-EXAONE (LG's foundation model) as a principal funded model. The state acts as coordinator and procurement guarantor rather than primary capital provider. The actual capital comes from the chaebol balance sheets.
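The KRW-to-USD conversion above depends on the exchange rate; the rate below is an assumption roughly in line with 2024-2025 levels, not a figure from the text:

```python
# K-Sovereign Proprietary AI Foundation Model Project budget.
krw_committed = 530e9  # KRW 530 billion, from the text
krw_per_usd = 1325     # assumed exchange rate (roughly 2024-25 levels)
usd_millions = krw_committed / krw_per_usd / 1e6
print(round(usd_millions))  # 400 -> the text's "approximately $400 million"
```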

The advantage of the chaebol model is that the capital is patient and the customer integration is built-in. LG AI Research's K-EXAONE deploys directly into LG Electronics products. Naver Cloud's HyperCLOVA X serves Naver's domestic Korean services. Samsung SAIT's research filters into Samsung's mobile and semiconductor businesses. The disadvantage is that the model produces AI capability tied to specific corporate strategies rather than independent commercial frontier labs. Upstage at $1 billion post-money is the closest Korean independent insurgent and is several orders of magnitude smaller than the US frontier seven.

Japan runs an adjacent version through METI's GENIAC programme (compute subsidies of approximately $200 million directly to model-developing startups including Preferred Networks and Sakana AI) and corporate-research incumbents (Sony AI, Honda Research, Fujitsu Research). Sakana AI is Japan's most-watched independent insurgent and is the closest Japanese analogue to the kind of independent frontier-adjacent lab the country has historically not produced.

The chaebol-coordinated model produces durable AI capability inside major industrial groups. It does not produce a frontier-LLM independent that competes globally. South Korea and Japan have so far accepted this trade-off.

The French public-private champion and the distributed alternatives

France is the only European country that has explicitly chosen the public-private champion model and committed to it across multiple administrations and capital sources.

Mistral AI was founded in May 2023 by Arthur Mensch (ex-Google DeepMind), Guillaume Lample (ex-Meta AI / FAIR), and Timothée Lacroix (ex-Meta AI / FAIR). By September 2025 it had closed a $2 billion Series C at $14 billion led by ASML. France 2030 committed approximately €7 billion to AI through 2030 across all programs. Xavier Niel's Iliad personal-fortune funding bridged gaps that pure venture capital would not have filled. Around Mistral, France runs a denser public-private AI ecosystem than its $7.5 billion government commitment suggests: Kyutai is the nonprofit research arm jointly funded by Niel, Saadé, and Eric Schmidt; AMI is Yann LeCun's post-Meta Paris lab, which closed a $1 billion seed in March 2026; Hugging Face retains a substantial French research footprint; Inria provides the state research base. The 1.3-to-1 government-to-private ratio is the highest among large Western European economies.

The UK is the partial case where Google DeepMind's London headquarters and the AI Safety Institute substitute for an independent national champion that the country has not yet identified.

The distributed-ecosystem model is the dominant alternative in the West. The US runs it at extreme scale: 107 labs, $280 billion in disclosed government AI commitment spread across CHIPS, NAIRR, NSF AI Institutes, DARPA, ARPA-H, IARPA, and the national labs. The federal government is one funder among many; the universities and academic labs train most of the talent; the private capital base does most of the commercial-product work. There is no national champion in any meaningful sense. Canada runs a smaller version through the CIFAR-coordinated Vector / MILA / Amii triangle, with Cohere as the commercial frontier flagship. Germany combines academic-research depth (Max Planck, Tübingen, DFKI), open-source contributions (LAION published the LAION-5B dataset that anchored Stable Diffusion), and a small commercial-frontier layer (Aleph Alpha, Black Forest Labs, DeepL). Switzerland anchors the EPFL-ETH joint Swiss AI Initiative around the ALPS sovereign supercomputer.

The distributed model produces depth, breadth, and durability. It does not produce concentrated frontier-scale state-coordinated capability. The US wins both the distributed and concentrated games because of the size of its private market; the smaller distributed-model countries (Canada, Germany, Switzerland) win the academic and ecosystem game while ceding most commercial frontier capability to the US.

The mission-oriented programs

A separate pattern shows up in countries pursuing AI sovereignty through narrowly defined missions: language coverage, regional positioning, or specific applications. India's IndiaAI Mission was approved in March 2024 with INR 10,372 crore (approximately USD 1.2 billion) over five years, anchored by Krutrim, Sarvam AI, and AI4Bharat. Singapore runs the AI Singapore programme through the National Research Foundation, with SEA-LION as the Southeast-Asian-language model and Reka AI as the most prominent commercial insurgent. Vietnam runs a smaller version anchored by VinAI Research.

The mission-oriented programs produce regional language and applications capability that is structurally underprovided by the global frontier. India's strategic upside is large precisely because the language-coverage gap is large.

The supply chain underneath

Every sovereign-AI program in the dataset depends on a supply chain that is concentrated in a small number of upstream suppliers: Nvidia GPUs, TSMC fabrication, ASML lithography, Korean memory (Samsung HBM, SK Hynix HBM). The MENA petrostate champions, the Korean chaebol, the French public-private champion, the US distributed ecosystem, and the Indian mission-oriented programs all depend on this stack.

Three "non-AI" countries are structurally critical to global AI capability: Taiwan (TSMC fabrication monopoly on advanced-node logic), the Netherlands (ASML EUV-lithography monopoly), and Korea separately (Samsung and SK Hynix HBM manufacturing leadership). Every Nvidia H100, H200, B100, and B200 contains Korean HBM. The supply-chain concentration creates a paradox at the heart of every sovereign-AI program except China's: national AI sovereignty is being pursued by national strategies that all depend on the same handful of companies operating in jurisdictions outside the country claiming sovereignty.

China is the exception. The Chinese state-AI architecture is the only sovereign-AI program in the dataset that has built (or is in advanced stages of building) a parallel domestic supply chain across logic fabrication (SMIC), accelerator design (Huawei, Cambricon, Biren, Moore Threads, et al.), high-bandwidth memory (CXMT, XMC), packaging, and system integration. The parallel stack is not yet operationally equivalent to the global stack on a per-watt or per-dollar basis. It is operationally adequate to maintain frontier-LLM training capacity even if every external supply path is closed. That asymmetry is consequential: if the US-China decoupling continues to deepen, China is the only major economy whose AI program would survive a worst-case supply-chain disruption.

The remaining sovereign-AI programs face structural choices. Saudi Arabia and the UAE are negotiating direct access guarantees with Nvidia and Microsoft. France, Korea, Japan, and the EU are pursuing partial domestic-fab build-out (the European Chips Act is the largest single response). India is in earlier stages of a domestic-compute strategy. None of these efforts is currently on track to produce supply-chain independence comparable to what China has built. The next phase of sovereign-AI strategy globally is the question of whether the China precedent gets emulated or whether the rest of the world doubles down on shared dependence on the existing US-Korean-Taiwanese-Dutch supply chain.

Why now

Three things converged between 2022 and 2025 to produce the current sovereign-AI moment.

First, ChatGPT's November 2022 launch made AI capability legible to government leaders, giving AI a political-economic visibility that no previous capability advance had produced.

Second, the US-China decoupling reshaped what other countries had to think about. Once Washington's export controls explicitly targeted Chinese AI capability and Beijing began organising domestic alternatives, every other country had to choose how to position itself in a world where the two largest AI capital pools were structurally competing rather than cooperating. The MENA petrostates chose explicit US alignment with strategic hedging through scale. France chose European positioning through Mistral. India chose language-coverage neutrality. Korea and Japan chose chaebol-anchored alignment with US compute supply.

Third, sovereign wealth funds in the Gulf and East Asia matured into asset managers explicitly looking for 21st-century allocations to replace 20th-century commodity exposures. PIF, Mubadala, MGX, ADQ, GIC, Temasek, KIA, the Korean National Pension Service, and the Norwegian Government Pension Fund all needed strategic allocations to AI for sovereign-relevance reasons on top of straight financial-return reasons. The capital was waiting; the AI category arrived; the deployment followed.

The combination of legibility, decoupling, and capital availability produced the current pattern. None of the three is reversing in 2026.

What to watch

Six concrete signals over the next twelve months.

1. Big Fund Phase 3 deployment patterns. The $47-48 billion Phase 3 commitment was made in May 2024. The deployment flows through 2026-2027 will reveal how heavily the Phase 3 capital actually tilts toward AI accelerators (Cambricon, Biren, the domestic HBM makers) versus traditional fab capacity (SMIC). A Phase 3 weighted heavily toward AI silicon validates the integrated-state-stack thesis. A Phase 3 that follows the historical Big Fund pattern of fab-and-equipment focus signals that AI is being treated as a downstream beneficiary rather than a primary mandate.

2. The next major US export-control update. The April 2025 H20 restriction was the most recent major BIS action. The expected next iteration in 2026 will target HBM-equivalent capability and possibly attempt to close diversion paths through Singapore, Vietnam, and the Gulf. The scope of the next update will determine whether the Chinese parallel-supply-chain build-out remains marginal or becomes the only viable Chinese frontier path.

3. HUMAIN's first publicly visible technical output. Saudi Arabia has committed an enormous capital base to HUMAIN with relatively little visible technical output to date. The first credible HUMAIN model release, training-infrastructure announcement, or commercial product launch will show whether the petrostate model produces frontier capability or remains primarily a capital-allocation story.

4. Mistral's next round and competitive position. The next Mistral round (expected within twelve months given the company's revenue trajectory and its strategic centrality to the French sovereign-AI thesis) will reveal whether the European public-private champion model is converging with or diverging from the US frontier valuation regime.

5. The chaebol-AI IPO question. Upstage's April 2026 unicorn round made it the leading Korean independent insurgent. A 2026-2027 Korean or Japanese frontier-LLM IPO would be the first independent commercial outcome at scale from the chaebol-coordinated model. Without one, the model continues to produce in-house corporate AI capability without globally competitive independent labs.

6. The next CAC regulatory turn in China. The Generative AI Measures have been in continuous evolution since August 2023. The expected 2026-2027 updates will address agent-style AI deployment, cross-border AI service operation, and the integration of AI into state-procurement systems. The direction and pace of that evolution will reveal whether the regulatory layer of the Chinese state-AI architecture is becoming more permissive (to encourage commercial outcome at frontier scale) or more restrictive (to consolidate state control over deployment).

The honest summary

Sovereign AI in 2026 covers four overlapping playbooks producing different kinds of national capability under different governance structures. The MENA petrostates have built a national-champion model with capital scale that rivals the largest US frontier labs and US-partner integration that resolves the geopolitical exposure. France has built a public-private champion model around Mistral, the only credible European frontier challenger. Korea and Japan have built chaebol-coordinated models that produce durable corporate AI capability without independent frontier insurgents. The US, Canada, Germany, and Switzerland have built distributed-ecosystem models that produce depth and breadth. India, Singapore, and Vietnam have built mission-oriented programs that target specific gaps in language coverage and applications.

The fifth model, China's state-architectural integration, sits in its own category. The Chinese AI state combines three coordinated layers: industrial-policy capital (Big Fund, provincial and municipal vehicles, SOE in-house investment), a parallel compute supply chain designed from the start to operate under hostile US export controls (Huawei Ascend, Cambricon, SMIC, domestic HBM), and a regulatory architecture (CAC approval, MIIT industrial-AI rules, foundation-model registration) that gates deployment and keeps the entire stack legible to central authority. The combined state-AI capital base is in the $250-400 billion range, two-to-three times the disclosed editorial figure of $150 billion. The compute architecture is operationally adequate to maintain frontier-LLM training even under worst-case supply-chain disruption. The regulatory architecture enables system-level integration that no other country in the dataset has achieved.

The aggregate disclosed government AI commitment of $608 billion, exceeding the $490 billion in disclosed private AI funding, is the headline event. The four-playbook structure is what gives that headline its shape. The Chinese state stack is the single most consequential program inside that structure, and the one Western analysis treats most superficially.
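The arithmetic behind the headline number is simple enough to sketch. The three component figures below are the essay's own disclosed totals; the dictionary structure is illustrative, not the actual Atlas schema:

```python
# Editorial rollup of disclosed government AI commitments (US$ billions).
# Component figures are the essay's disclosed totals; the structure is a
# hypothetical sketch, not the sovereign-map.json schema.
disclosed_gov_commitments_bn = {
    "United States": 280,          # CHIPS Act, NAIRR, NSF, DARPA, ARPA-H, IARPA rollup
    "China": 150,                  # disclosed editorial figure (state total may be 2-3x)
    "remaining 12 countries": 178, # all other countries with a published commitment
}

total_gov_bn = sum(disclosed_gov_commitments_bn.values())
total_private_bn = 490  # disclosed private capital across the 209 labs in the dataset

print(total_gov_bn)                     # 608
print(total_gov_bn > total_private_bn)  # True: public commitments now exceed private
```

The point of the sketch is only that the crossover claim is a sum of three disclosed lines, each with its own error bars; the Chinese line alone could plausibly double the government total if the editorial $250-400 billion range were used instead of the disclosed $150 billion.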

Underneath all of it sits the same supply chain for everyone except China: Nvidia compute, TSMC fabrication, ASML lithography, Korean memory. National AI sovereignty in 2026 is sovereignty over policy, capital allocation, and commercial outcome, conditional on a small number of suppliers operating in jurisdictions other than the country claiming sovereignty. China is the only major economy whose sovereign-AI program is structurally insulated from that condition. Whether the next phase of sovereign-AI strategy globally moves to address that asymmetry, by emulation (parallel domestic supply chains) or by alliance (formal compute-access guarantees), is the question that determines how durable the current configuration is over a 5-10 year horizon.

The Atlas dataset captures the present-tense picture as of April 2026. The next refresh in late July will tell us whether the trends in this essay are accelerating, plateauing, or reorganising. The data is the spine. The shape of the next two years of sovereign-AI strategy is the open question.


Sources used in this piece:

  • The Nextomoro Atlas dataset (sovereign-map.json, sovereign-map-by-region.json, labs.json, rounds.json) extracted 2026-04-30, covering 21 countries and 14 with disclosed government AI commitment.
  • Country-level dossiers in the Atlas for each country cited.
  • US Bureau of Industry and Security (BIS) Federal Register notices for the October 2022, October 2023, December 2024, and April 2025 export-control updates.
  • The Cyberspace Administration of China (CAC) publications for the August 2023 Generative AI Services Provisional Measures and subsequent batches of approved generative-AI services through 2025.
  • Public reporting on the National IC Industry Investment Fund (Big Fund) Phases 1-3 from State Council and Ministry of Industry and Information Technology announcements; phase-by-phase capital figures from the original founding documents.
  • The Microsoft-G42 April 2024 strategic investment announcement and the May 2025 Saudi-US summit announcements covering Microsoft and Nvidia commitments to HUMAIN.
  • The Sénat April 2024 report on France's AI sovereignty.
  • The IndiaAI Mission March 2024 cabinet approval documents (INR 10,372 crore commitment).
  • South Korea's K-Sovereign AI initiative announcements and the Proprietary AI Foundation Model Project filings.
  • China Mobile, China Telecom, China Unicom 2024 and 2025 annual reports for the SOE in-house AI capex picture.
  • The companion essays "Vintages: how AI's funding cycle decoupled from reality" and "How China funds its AI labs" for the venture-cycle and funding-mechanics frames this piece extends.

Last updated: April 30, 2026. Government AI funding figures are editorial rollups of disclosed national programs, sovereign-wealth commitments, and agency line items where consistently reported. Chinese SOE AI capex estimates are derived from disclosed total capex figures with assumed AI-share allocations; the actual figures are not separately reported in any single primary source. The estimated $250-400 billion range for total Chinese state-AI capital is editorial and is intended to capture the order of magnitude of the underlying commitment rather than to claim precision. Send corrections.

About the author
Nextomoro

nextomoro tracks progress for AI research labs, models, and what's next.

AI Research Lab Intelligence