Global AI Governance Regime Complex: Carnegie Endowment’s Vision for Multilateral AI Regulation

📌 Key Takeaways

  • No single institution can govern AI: The Carnegie Endowment argues that a “regime complex” of overlapping multilateral arrangements is the only viable path for global AI governance.
  • Four core governance functions: The report identifies building scientific understanding, setting standards, sharing access and benefits, and promoting collective security as the pillars of AI governance.
  • Geopolitical fragmentation is real: The US, EU, and China pursue fundamentally different AI regulatory philosophies — market-driven, rights-driven, and state-driven — making universal consensus unlikely.
  • IAEA model has serious limitations: Unlike nuclear materials, AI technologies are widely distributed and general-purpose, making inspection-based verification impractical.
  • Developing nations risk being left behind: With 2.6 billion people still unconnected to the internet, benefit-sharing and capacity-building must be central to any global AI governance framework.

Why a Single Global AI Institution Cannot Work

When OpenAI co-founders proposed an “IAEA for superintelligence” in the spring of 2023, the idea captured imaginations worldwide. United Nations Secretary-General António Guterres quickly endorsed the concept, and for a brief moment, the international community seemed to coalesce around the vision of a single, authoritative body that could oversee the development and deployment of artificial intelligence globally. Less than a year later, that vision had already faded.

The Carnegie Endowment for International Peace, in its landmark report Envisioning a Global Regime Complex to Govern Artificial Intelligence authored by Emma Klein and Stewart Patrick, makes a compelling case for why no single global institution can effectively govern AI. The challenges are simply too multifaceted, the relevant actors too varied, and the geopolitical landscape too fractured for any one-size-fits-all solution. Instead, the report argues that the world is heading toward — and should actively cultivate — a “regime complex” for AI governance.

This analysis matters because the stakes are extraordinarily high. Artificial intelligence promises to transform medicine, combat climate change, alleviate poverty, and enhance worker productivity. Simultaneously, it threatens to enable political interference through misinformation, entrench algorithmic bias, facilitate mass surveillance, displace millions of workers, and even lower barriers to developing biological, chemical, and nuclear weapons. Understanding how the international community plans to navigate these competing pressures is essential for policymakers, business leaders, and citizens alike.

The UN High-Level Advisory Body on AI (HLAB), comprising 39 expert members, released its preliminary report in December 2023. Notably, it already presumed that multiple institutions — not one — would be needed. This recognition underscores the fundamental complexity of governing a technology that touches virtually every sector of the global economy and every dimension of human life.

Understanding the AI Regime Complex Framework

The concept of a “regime complex” is not new to international relations theory, but applying it to artificial intelligence governance represents a significant analytical contribution. Carnegie defines a regime complex as “a collage of overlapping multilateral arrangements involving different actors, functions, and principles that facilitate international cooperation.” The climate change governance ecosystem provides perhaps the most instructive precedent.

Consider the sprawling architecture of climate governance: the UN Framework Convention on Climate Change, the Montreal Protocol, the Intergovernmental Panel on Climate Change (IPCC), the Green Climate Fund, the World Meteorological Organization, the International Energy Agency, the G20, the Major Economies Forum, the C40 Cities coalition, and the Glasgow Financial Alliance for Net Zero all operate simultaneously, each with different memberships, mandates, and mechanisms. Together, they constitute a regime complex that, while imperfect, has proven far more adaptable and resilient than any single institution could be.

Carnegie identifies three key advantages of a decentralized approach for AI governance. First, AI governance must make simultaneous progress on several fronts, making a division of labor not just appropriate but essential. Second, the variety of forums permits selectivity in membership depending on the specific issue being addressed — some discussions require universal participation while others benefit from smaller coalitions of like-minded states. Third, regime complexes can advance governance without the laborious process of negotiating multilateral treaties or establishing formal organizations.

However, the report is candid about the disadvantages as well. Fragmentation can lead to incoherence, gaps, and redundancy across overlapping institutions. Competing forums can exacerbate competitive dynamics and enable “forum shopping,” where states choose the venue most favorable to their preferred outcomes. Perhaps most critically, no authoritative institution exists to orchestrate the various actors and ensure they complement rather than contradict one another.

The report consolidates the many functions of AI governance into four broad categories: building scientific understanding, setting standards and harmonizing regulations, sharing access and benefits, and promoting collective security. Each category draws on distinct institutional analogies and faces unique challenges shaped by geopolitical realities.

Building Scientific Understanding of AI Risks and Benefits

Perhaps the most foundational pillar of any global AI governance framework is the establishment of an authoritative intergovernmental mechanism for synthesizing and sharing the latest scientific and technological breakthroughs related to artificial intelligence. Without a shared understanding of what AI can and cannot do, the risks it poses, and the pace of its advancement, meaningful governance becomes impossible.

The UK AI Safety Summit in November 2023 represented an important step forward, with 28 nations agreeing to commission an international State of the Science Report led by computer scientist Yoshua Bengio. This initiative reflects a growing consensus that the international community needs something analogous to what the IPCC has provided for climate science — regular, authoritative assessments that can inform policy decisions.

Carnegie examines several institutional models for building scientific understanding. The IPCC, established in 1988 with 195 member states, produces comprehensive scientific assessments every six to seven years involving thousands of scientists worldwide. Its policy-neutral stance and rigorous peer-review process have made it the gold standard for science-policy interfaces. However, the IPCC model has significant limitations when applied to AI.

The most critical limitation is speed. The IPCC’s multiyear assessment cycles were designed for a field where the fundamental science evolves over decades. Artificial intelligence, by contrast, undergoes transformative breakthroughs in months. A governance mechanism that takes six years to produce an assessment would be perpetually outdated. The report suggests that a more agile approach is needed, perhaps drawing on the Montreal Protocol’s three specialized assessment panels that can provide targeted, rapid analyses.

Another crucial question is governance structure. Should an AI science body be intergovernmental, like the IPCC, or multistakeholder? Since the private sector leads AI research and development, any credible scientific assessment mechanism needs standing arrangements with major technology platforms. The Global Partnership on AI (GPAI), launched in June 2020 with 15 founding members and since expanded to 29, was initially envisioned as an IPCC-like body for AI but has not achieved an authoritative role, partly because its membership requirement of endorsing the OECD AI Principles limits its expansion to countries like China.

Carnegie also proposes that an AI science body should serve as a clearinghouse for up-to-date information about advanced civilian AI research and development, analogous to the Biosafety Clearing-House under the Cartagena Protocol. This registry function would help governments and researchers maintain situational awareness of rapidly evolving capabilities.
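
To make this registry function concrete, here is a minimal sketch of what a clearinghouse record and query might look like. The schema is entirely illustrative (the report prescribes no fields or thresholds), but it shows how structured disclosure records could support the situational awareness Carnegie describes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClearinghouseEntry:
    """One disclosure record for an advanced civilian AI system.

    Field choices are illustrative assumptions, not a schema
    proposed in the Carnegie report.
    """
    developer: str
    system_name: str
    disclosed_on: date
    training_compute_flops: float  # self-reported estimate
    capability_evaluations: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)

class AIClearinghouse:
    """In-memory stand-in for a Biosafety Clearing-House-style registry."""

    def __init__(self) -> None:
        self._entries: list[ClearinghouseEntry] = []

    def register(self, entry: ClearinghouseEntry) -> None:
        self._entries.append(entry)

    def systems_above(self, flops_threshold: float) -> list[ClearinghouseEntry]:
        """Situational-awareness query: which disclosed systems
        exceed a given training-compute threshold?"""
        return [e for e in self._entries
                if e.training_compute_flops >= flops_threshold]

# Usage: a regulator scans for frontier-scale disclosures.
registry = AIClearinghouse()
registry.register(ClearinghouseEntry(
    developer="ExampleLab",            # hypothetical developer
    system_name="DemoModel-1",         # hypothetical system
    disclosed_on=date(2024, 3, 1),
    training_compute_flops=3e25,
    capability_evaluations=["bio-risk eval", "cyber eval"],
))
frontier_systems = registry.systems_above(1e25)
```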


Setting International AI Standards and Harmonizing Regulations

One of the most striking findings in the Carnegie report is the surface-level unanimity in international AI declarations combined with deep substantive fragmentation. Recent pronouncements from the G7, OECD, UNESCO, G20, China, and the UK AI Safety Summit all use remarkably similar language — “ethical,” “responsible,” “trustworthy,” “human-centered,” “transparent,” “safe,” “accountable,” and “fair.” Yet this rhetorical convergence masks profoundly different domestic regulatory approaches.

Carnegie examines several institutional models for AI standard-setting, each with instructive lessons. The International Civil Aviation Organization (ICAO), established in 1947 with 193 member states, sets technical standards for international civil aviation and conducts safety and security audits. The International Maritime Organization (IMO), with 175 members, performs a similar function for international shipping. The International Labour Organization (ILO), with its unique tripartite governance structure involving governments, workers, and employers, offers yet another approach to standard-setting.

However, the report identifies fundamental limitations in applying these sector-specific models to AI. Unlike civil aviation or maritime shipping, AI is a general-purpose technology that pervades virtually every domain of human activity. This means multiple sets of standards will be needed simultaneously, and existing sectoral institutions must become “AI literate” as quickly as possible. The International Organization for Standardization (170 national standards bodies) and the International Electrotechnical Commission are already working on AI-related technical standards, but their reach remains limited.

Perhaps the most compelling institutional analogy for AI standard-setting is the Financial Action Task Force (FATF), established in 1989 by the G7 to combat money laundering and terrorist financing. FATF classifies countries as cooperating or non-cooperating, and its designations have real consequences — financial institutions systematically reduce exposure to blacklisted jurisdictions. Crucially, both the IMF and UN Security Council eventually accepted FATF standards, demonstrating how a club of like-minded nations can elevate norms to a near-universal level. A similar dynamic could emerge for AI standards if a coalition of leading AI nations establishes sufficiently robust standards with meaningful enforcement mechanisms.

Carnegie anticipates that AI standard-setting will proceed along two parallel tracks. Universal approaches might involve the UN General Assembly negotiating a declaration of principles analogous to the Universal Declaration of Human Rights, establishing baseline norms that apply to all nations. Simultaneously, minilateral approaches — such as the G7’s Hiroshima Process Code of Conduct — would pursue higher standards among like-minded states, treating universal norms as a floor rather than a ceiling.

Competing Regulatory Philosophies: How the EU, US, and China Shape Global AI Governance

No analysis of global AI governance would be complete without examining the three dominant regulatory philosophies that are shaping — and constraining — international cooperation. Carnegie identifies three distinct orientations toward the global digital order: the United States’ market-driven approach, the EU’s rights-driven framework, and China’s state-driven model.

The EU AI Act, provisionally approved in December 2023 and passed in March 2024, represents the most comprehensive attempt at legally binding AI regulation to date. It employs a risk-based approach that categorizes AI applications by the level of risk they pose to fundamental rights, democracy, the rule of law, and environmental sustainability. The Act bans outright certain practices deemed unacceptably risky, including social scoring systems, emotion recognition in workplaces and schools, and biometric categorization based on sensitive characteristics. Violations carry significant financial penalties.
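
To illustrate the Act's risk-based logic, the sketch below models its tiering in a deliberately simplified form. The tiers and the classification rule are a paraphrase for illustration only; actual classification under the Act turns on detailed legal criteria, and only the banned practice names are drawn from the text above.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency duties"
    MINIMAL = "largely unregulated"

# Practices the Act bans outright (as summarized above).
PROHIBITED_PRACTICES = {
    "social scoring",
    "emotion recognition in workplaces and schools",
    "biometric categorization on sensitive characteristics",
}

def classify(practice: str, affects_fundamental_rights: bool) -> RiskTier:
    """Toy classifier paraphrasing the risk-based approach: ban the
    unacceptable, regulate the rights-affecting, leave the rest light."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if affects_fundamental_rights:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("social scoring", affects_fundamental_rights=True))
# RiskTier.UNACCEPTABLE -> banned, with significant financial penalties
```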

China’s approach, exemplified by its Generative AI Measures effective since August 2023, is equally binding but driven by fundamentally different priorities. Beijing’s regulations focus heavily on information control, explicitly restricting AI-generated content related to “subversion of state power.” Interestingly, some regulations have been relaxed to avoid stifling innovation in a sector China views as strategically vital, revealing the tension between control and competitiveness that all governments face.

The United States, by contrast, has relied primarily on voluntary industry commitments. In July 2023, seven major AI companies agreed to voluntary safeguards, followed by President Biden’s Executive Order on Safe, Secure, and Trustworthy AI in October 2023. This approach privileges innovation and market dynamism but leaves significant governance gaps that would require congressional legislation to fill. The lack of comprehensive federal AI legislation creates uncertainty for both domestic companies and international partners attempting to align standards.

India has positioned itself as a voice for the Global South in AI governance discussions, having released its national AI strategy in 2018 and proposed its Digital India Act in 2023. The country’s stance reflects the reality that many developing nations are simultaneously eager to adopt AI for development and wary of governance frameworks that might entrench the dominance of existing technology leaders.

These divergent approaches create what Carnegie describes as a fundamental structural challenge for global AI governance. Unlike climate change — where the science is shared even if the policy responses differ — AI governance involves disagreements that extend to basic values, priorities, and the role of the state in regulating technology. Any viable regime complex must find ways to bridge these divides or, at minimum, manage them productively.

Sharing AI Access and Benefits with Developing Nations

One of the most urgent dimensions of global AI governance — and one that receives insufficient attention in many policy discussions — is ensuring that AI’s transformative benefits reach the developing world. The statistics are stark: approximately 2.6 billion people, roughly one-third of the global population, remain unconnected to the internet. The world is badly off track in meeting the UN Sustainable Development Goals by 2030, and AI could either accelerate progress or deepen existing inequalities.

Carnegie documents impressive existing efforts in AI for development. The UN Development Programme has deployed AI tools for hate speech identification in Sudan, electoral misinformation detection in Zambia and Honduras, policy evaluation in Mexico, and cash-transfer optimization in Togo and Bangladesh. The UN Office of the High Commissioner for Refugees uses predictive analytics for displaced people in Somalia and for refugees from Venezuela arriving in Brazil. The UK-led AI for Development Programme has mobilized $100 million from the United States, Canada, and the Bill and Melinda Gates Foundation to focus on the African continent.

The report examines two institutional models for benefit-sharing, each with different implications. The first model, drawn from global health partnerships, emphasizes relatively unconditional access. Gavi, established in 2000 by the Gates Foundation, WHO, UNICEF, and the World Bank, expands immunization coverage in low-income countries by negotiating prices with manufacturers and sharing costs with governments. The Global Fund to Fight AIDS, Tuberculosis, and Malaria, established in 2002, allocates funding based on country proposals through multistakeholder consultations.

The second model, drawn from the Nuclear Non-Proliferation Treaty (NPT), makes access conditional on compliance. Article IV of the NPT establishes an “inalienable right” of all parties to develop peaceful nuclear energy, but this right is contingent on accepting IAEA safeguards. Applied to AI, this model would offer developing countries access to AI tools, training data, and infrastructure in exchange for commitments to responsible use and governance standards.

Carnegie raises a critical concern about data extraction that echoes broader debates about digital colonialism. AI models are not always trained on globally representative data, and the process of accessing diverse datasets from developing countries is “inherently extractive.” The report points to the Nagoya Protocol on Access and Benefit-sharing, which governs the fair sharing of benefits from genetic resources, as a potential model for ensuring that developing countries receive equitable returns when their data contributes to AI systems developed elsewhere.


AI Collective Security Threats and Military Applications

Perhaps the most alarming section of the Carnegie report addresses the collective security implications of artificial intelligence. The authors identify three ways AI threatens collective security that extend well beyond the widely discussed issue of lethal autonomous weapons systems (LAWS).

First, AI increases the availability and lethality of all weapons, including weapons of mass destruction. By lowering the technical barriers to developing novel pathogens, chemical weapons agents, and sophisticated malware, AI could enable actors — both state and non-state — to develop capabilities that were previously beyond their reach. The democratization of destructive capacity represents a qualitative shift in the threat landscape that existing arms control frameworks were not designed to address.

Second, AI exacerbates geopolitical competition and accelerates arms race dynamics. The combination of automated warfare systems and AI-enabled disinformation capabilities increases the risk of conflict through escalation, miscalculation, and loss of command and control. When decision-making cycles compress from days to milliseconds, the opportunities for human judgment and de-escalation shrink proportionally.

Third, the report acknowledges the potential existential risks from superintelligent AI, though it treats these as longer-term concerns. Such worries are hardly fringe: in March 2023, over 1,000 technologists signed an open letter calling for a six-month pause in training systems more powerful than GPT-4. By March 2024, that letter had attracted more than 33,000 signatures, reflecting widespread anxiety within the AI research community itself that the pace of development is outstripping safety measures.

The fundamental barrier to cooperation on AI security is zero-sum thinking. Nations perceive AI superiority as a strategic imperative, and commercial competition between technology companies creates selection pressures that sacrifice safety for innovation speed. Carnegie’s analysis suggests that this dynamic will be extraordinarily difficult to overcome without significant shifts in how both governments and corporations perceive the risks of uncontrolled AI development.

Lethal Autonomous Weapons and International Humanitarian Law

Lethal autonomous weapons systems have already reached modern battlefields, with deployments documented in both Ukraine and Gaza. These systems are increasingly contributing to use-of-force decisions, raising urgent questions about accountability, proportionality, and compliance with international humanitarian law.

Carnegie’s assessment of prospects for a comprehensive LAWS treaty is notably pessimistic. The United States and other major military powers have consistently resisted binding restrictions on autonomous weapons development, viewing these systems as essential to maintaining strategic advantage. However, the report identifies more promising avenues in efforts to establish principles of use and clarify how existing international humanitarian law applies to AI-enabled military systems.

The report proposes confidence-building measures (CBMs) as the most realistic near-term approach to managing military AI risks. Drawing on Cold War precedents — including the Moscow-Washington hotline, voluntary observations of military exercises, and information exchanges — Carnegie suggests several AI-specific CBMs: shared testing and evaluation standards for autonomous systems, information exchanges about AI-enabled system deployments, clarification of the expected behavior of autonomous systems in various scenarios, and dedicated communication channels between major AI military powers.

The UN Office for Disarmament Affairs identifies five categories of confidence-building measures: communication and coordination, observation and verification, military constraints, training and education, and cooperation and integration. Applying these categories to military AI could create a foundation for deeper cooperation over time, even in the absence of binding treaties. The history of nuclear arms control suggests that confidence-building measures, while insufficient on their own, can create the trust necessary for more ambitious agreements.

AI Export Controls and the Semiconductor Arms Race

The concentration of AI capabilities provides a potential lever for governance that does not exist in many other technology domains. The United States, United Kingdom, EU, and China dominate the development of significant machine learning systems. Even more concentrated is the semiconductor supply chain: over 90 percent of specialized hardware chips are designed or produced in the United States, China, Japan, South Korea, and Taiwan.

Carnegie examines the existing architecture of multilateral export control regimes as models for AI governance. The Nuclear Suppliers Group (48 members), the Australia Group for biological and chemical weapons (42 members plus the EU), the Missile Technology Control Regime (35 countries), and the Wassenaar Arrangement for conventional arms and dual-use technologies (42 nations) all provide precedents for controlling the spread of sensitive technologies among like-minded nations.

The US Commerce Department’s October 2022 export controls limiting China’s access to advanced computing chips, supercomputers, and semiconductor manufacturing equipment represented a watershed moment in technology governance. Updated in October 2023 and extended internationally through parallel agreements with Japan and the Netherlands, these controls demonstrate the potential power of technology denial as a governance tool. Some analysts have called for updating or replacing the Wassenaar Arrangement entirely to better address AI-specific challenges.

However, Carnegie identifies several dilemmas in the export control approach. How can the international community constrain China’s AI capabilities while simultaneously incentivizing responsible behavior? Overly aggressive controls could push China toward building alternative supply chains and forming alliances with irresponsible actors. Furthermore, while controlling access to physical chips and specialized hardware is feasible, restricting the spread of AI models, algorithms, and training methodologies is far more difficult. Software can be copied, shared, and distributed in ways that physical materials cannot.

Perhaps most significantly, effective AI export controls require unprecedented government oversight of major AI industry players — a level of involvement that sits uncomfortably with the market-driven philosophy that has characterized US technology policy for decades. The tension between maintaining technological leadership and imposing meaningful controls on an industry that is simultaneously the engine of economic growth and a potential source of strategic risk remains unresolved.

Crisis Preparedness and Emergency Response for AI Risks

The final governance function Carnegie examines is crisis preparedness and emergency response — the ability of the international community to detect, warn about, and respond to AI-related emergencies before they spiral out of control.

The Financial Stability Board (FSB), established by the G20 in April 2009 in the aftermath of the global financial crisis, offers an intriguing model. The FSB develops standards for systemically important cross-border financial institutions, works with the IMF on early warning exercises, and operates as an informal arrangement without the bureaucratic constraints of a formal international organization. The HLAB specifically invoked the FSB’s “macro-prudential framework” as a prototype for what it called a “techno-prudential model” for AI governance.

In the health domain, the WHO Hub for Pandemic and Epidemic Intelligence, launched in September 2021, conducts collaborative monitoring in more than 150 countries. The WHO Global Outbreak Alert and Response Network, comprising over 250 technical institutions and networks, demonstrates how distributed monitoring systems can provide early warning of emerging threats.

The OECD currently tracks broadly defined AI incidents and is working on a direct reporting framework. This nascent monitoring capacity could evolve into a more comprehensive system for tracking AI failures, near-misses, and emerging risks across sectors and geographies.
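
As an illustration of what a direct reporting framework might require, the sketch below defines a hypothetical common incident record and a triage step. The field names and severity scale are assumptions, not the OECD's actual schema; the point is that early warning depends on comparable records across sectors and borders.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Severity(Enum):
    NEAR_MISS = 1
    HARM = 2
    SYSTEMIC = 3  # cross-border or sector-wide consequences

@dataclass(frozen=True)
class AIIncidentReport:
    """Hypothetical common reporting format for an AI incident.

    Field choices are illustrative; the OECD's framework is still
    under development and may look quite different.
    """
    reported_on: date
    jurisdiction: str
    sector: str            # e.g. "finance", "health", "elections"
    system_description: str
    severity: Severity
    summary: str

def triage(reports: list[AIIncidentReport]) -> list[AIIncidentReport]:
    """Surface systemic incidents first, mimicking an early-warning desk."""
    return sorted(reports, key=lambda r: r.severity.value, reverse=True)

# Usage: two reports from different jurisdictions become comparable.
queue = triage([
    AIIncidentReport(date(2024, 1, 5), "EU", "finance",
                     "credit-scoring model", Severity.HARM,
                     "discriminatory denials detected"),
    AIIncidentReport(date(2024, 1, 9), "US", "elections",
                     "generative media tool", Severity.SYSTEMIC,
                     "large-scale synthetic robocalls"),
])
```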

Carnegie also draws an unusual but illuminating analogy to planetary defense — the international cooperation around detecting and responding to threats from near-Earth objects. According to US government data, approximately 1,000 near-Earth objects larger than one kilometer exist (95 percent of which have been found, with none on a collision trajectory), along with roughly 25,000 objects larger than 140 meters (of which only 42 percent have been identified). This framework of systematic search, threat assessment, information sharing, and scenario planning could inform how the international community approaches low-probability but high-consequence AI risks.

The common thread across all these models is that effective crisis preparedness requires investment before a crisis occurs. The international community must develop shared vocabularies, communication protocols, response plans, and institutional relationships while there is still time — not in the midst of an AI-related emergency. Carnegie’s regime complex framework provides a roadmap for building these capabilities incrementally across multiple institutions, rather than waiting for the perfect single institution that will never arrive.


Frequently Asked Questions

What is a regime complex for global AI governance?

A regime complex for global AI governance is a collage of overlapping multilateral arrangements involving different actors, functions, and principles that facilitate international cooperation on artificial intelligence. Rather than relying on a single institution, this approach distributes governance across multiple bodies, each addressing specific aspects such as scientific understanding, standard-setting, benefit-sharing, and collective security.

Why can’t a single global institution govern AI effectively?

A single institution cannot govern AI effectively because the challenges are too multifaceted, the relevant actors too varied, and geopolitical dynamics too complex. AI is a general-purpose technology affecting virtually every sector. The United States, EU, and China each pursue fundamentally different regulatory philosophies — market-driven, rights-driven, and state-driven respectively — making universal consensus on a single framework unrealistic.

How does the EU AI Act compare to US and China AI regulations?

The EU AI Act uses a risk-based approach with legally binding rules and fines, banning practices like social scoring. China’s Generative AI Measures impose strict binding regulations focused on information control and state power. The United States relies primarily on voluntary industry commitments and executive orders, prioritizing innovation over binding regulation. Congressional legislation would be required for comprehensive US AI regulation.

What role could an IAEA-style organization play in AI governance?

An IAEA-style organization for AI was proposed by OpenAI and endorsed by the UN Secretary-General to inspect and verify compliance with AI safety standards. However, Carnegie’s analysis shows significant limitations: AI technologies are widely distributed general-purpose tools unlike hard-to-procure nuclear materials, existential AI risk remains theoretical without the deterrent dynamic that drove nuclear cooperation, and the US-China strategic rivalry limits treaty prospects.

How can developing nations benefit from global AI governance frameworks?

Developing nations can benefit through capacity-building programs, joint research initiatives, skills training for workers, enhanced data collection for representative training datasets, and regulatory support. Models like Gavi and the Global Fund demonstrate how multilateral partnerships can share technology and resources. The report also highlights the Nagoya Protocol as a model for ensuring fair sharing of benefits from data resources extracted from developing countries.

What are confidence-building measures for military AI applications?

Confidence-building measures for military AI include shared testing and evaluation standards, information exchanges on AI-enabled system deployments, clarifying expected behavior of autonomous systems, communication hotlines between major powers, and voluntary observations of military AI exercises. These measures draw on Cold War precedents like the Moscow-Washington hotline and aim to reduce risks of miscalculation and escalation in AI-enabled warfare.
