Artificial Intelligence and Global Governance: Chatham House Policy Framework for Responsible AI
Table of Contents
- Why AI Global Governance Demands Urgent International Cooperation
- The Technopolar World: How Tech Companies Reshape Global Power
- Building a CERN for AI: International Research Infrastructure
- Council of Europe AI Treaty: First Binding Legal Framework
- Open-Source AI and the Democratization of Technology
- Community-Based AI: Solving the Global Language Crisis
- AI Ethics and Responsible Deployment Frameworks
- Public Sector AI: The British AI Corporation Model
- Geopolitical Rivalry and the Future of AI Regulation
- Multi-Stakeholder Approaches to Responsible AI Governance
📌 Key Takeaways
- Governance gap: Current AI governance is insufficiently incentivized, resourced, coordinated, and representative — new agreements, treaties, and institutions are urgently needed
- CERN for AI: An international research facility modeled on CERN could pool resources, attract public-interest researchers, and reduce dependency on private labs that shape governance of their own technologies
- First binding treaty: The Council of Europe Framework Convention on AI, adopted May 2024, establishes the world’s first legally binding AI treaty with participation from the US, Canada, Japan, and Australia
- Language inequality: ChatGPT 3.5 achieved a BLEU score of zero for Zulu translation; community-based AI initiatives in Africa and beyond are building solutions for underrepresented languages
- Technopolar power shift: Tech companies now wield geopolitical influence through control of computing capacity, algorithms, and data, with their terms of service setting norms for billions worldwide
Why AI Global Governance Demands Urgent International Cooperation
Governance of emerging technologies may be one of the defining challenges for international relations in the 21st century. According to a landmark Chatham House report published in June 2024, the rapid advancement of artificial intelligence has created novel governance challenges that existing institutions are fundamentally ill-equipped to handle. The report, a collection of nine expert essays edited by Alex Krasodomski with a foreword by Chatham House Director Bronwen Maddox, argues that competition for technological hegemony promises economic advantage, entrenchment of values and norms, and military edge — making AI governance a geopolitical imperative rather than a mere technical exercise.
The stakes have never been higher. Even skeptics acknowledge that AI will disrupt economies, societies, and security in ways that reach far deeper into everyday life than previous technological revolutions. The report’s contributors, spanning experts from Africa, Europe, Latin America, and North America, emphasize that AI governance is usually retroactive: institutions need considerable time to adapt while the technology advances at exponential speed. Companies often set global standards on fundamental rights, political norms, and social behaviors, filling governance vacuums that national governments and multilateral institutions have been too slow to address.
The current state of AI governance, the report concludes, is “insufficiently incentivized, insufficiently resourced, insufficiently coordinated and insufficiently representative.” This assessment underscores the need for new agreements, treaties, and institutions that can match the pace and complexity of AI-driven transformation across every sector. With half the world going to the polls in 2024 and concerns about an electoral “post-reality” shaped by AI-enabled misinformation, the urgency for coordinated governance could not be clearer.
The Technopolar World: How Tech Companies Reshape Global Power
At the UK AI Safety Summit in November 2023, Prime Minister Rishi Sunak interviewed Elon Musk — a scene that Chatham House Director Bronwen Maddox highlights as emblematic of a fundamental power shift. Scholars Ian Bremmer and Mustafa Suleyman have argued that we now live in a “technopolar” world where power is wielded through control of computing capacity, algorithms, and data rather than through traditional state instruments. The Chatham House report examines this dynamic extensively, revealing how tech companies have become geopolitical actors in their own right.
The evidence is striking. During the Ukraine conflict in 2022-2023, Starlink provided critical military communication capability — yet this essential wartime infrastructure depended entirely on a private company’s decisions, creating profound uncertainty over who truly controlled strategic outcomes. More broadly, tech companies’ terms of service now set norms on privacy, access to information, and freedom of expression for billions of people worldwide, often with less democratic accountability than the governments they effectively supersede.
The power imbalance between regulators and the regulated has tilted decisively in favor of the latter. Companies like OpenAI, Google DeepMind, and Anthropic dominate technical AI development, creating information asymmetries that make effective regulation extraordinarily difficult. The report warns that the concept of a tech company as entirely separate from the state “is not a reality everywhere”, pointing to India’s frequent internet shutdowns, China’s challenge to Western internet architecture, and the US Biden administration’s export controls, which shape where advanced chips are manufactured and sold globally.
Building a CERN for AI: International Research Infrastructure
One of the report’s most ambitious proposals is the creation of a “CERN for AI” — an international research facility modeled on the European Organization for Nuclear Research. Elliot Jones, the essay’s author, argues that such an institution could advance AI safety research beyond the capacity of any single firm or nation, grounded in scientific openness and pluralist human values aligned with the UN 2030 Agenda.
The original CERN, founded in 1954 under the auspices of UNESCO and today counting 23 member states, provides a compelling template. It delivered the Higgs boson discovery and gave birth to the World Wide Web, achievements that demonstrate how publicly funded, internationally governed research can produce transformational breakthroughs. A CERN for AI would require physical infrastructure, including data centers and high-performance computing; social and organizational infrastructure for research collaboration; and, critically, privileged structured access to state-of-the-art AI models and their underlying training datasets.
The potential benefits are substantial. Such a facility could provide vital global public goods, including safety benchmarks, auditing tools, and datasets for bias assessment. It could attract researchers pursuing public benefit over commercial potential, serving as what Professor Holger Hoos describes as a “beacon” for talent. The UK AI Safety Institute has already demonstrated this pull, attracting senior staff from OpenAI and Google DeepMind. However, the report acknowledges significant hurdles: rising geopolitical tensions between the US and China make international institutions harder to establish than in the post-WWII era, and there is a genuine risk of capture by the same big tech firms the institution is meant to counterbalance.
Council of Europe AI Treaty: First Binding Legal Framework
The Council of Europe Framework Convention on Artificial Intelligence, adopted on May 17, 2024, in Strasbourg, represents a watershed moment in AI governance. As Thomas Schneider details in his essay, negotiations were concluded in just 19 months by the Committee on AI (CAI), producing the world’s first legally binding international treaty on artificial intelligence. Remarkably, the negotiations included non-European states: Argentina, Australia, Canada, Costa Rica, Israel, Japan, Mexico, Peru, the United States, and Uruguay all participated.
Schneider draws an illuminating historical parallel. Just as Kaiser Wilhelm II declared “the car has no future, I believe in the horse,” today’s reactions to AI range from euphoria to panic. The treaty’s approach recognizes that AI governance must be “just as dynamic and agile as the technology itself.” Rather than imposing rigid regulations, it establishes a flexible framework that can be supplemented through further norms, with regulatory “updates” or “releases” published like software — and potentially even using AI systems themselves to develop regulatory frameworks that cope with AI’s pace of change.
The Convention establishes legally binding obligations around human rights, democracy, and the rule of law in AI deployment, using a graduated and differentiated approach proportionate to context-specific risks. It includes periodic reporting and follow-up mechanisms for cooperation with states that have not ratified the treaty. This flexibility is deliberate: a more rigid, European-style regulatory framework would limit global adoption. The Council of Europe’s track record provides confidence: its Convention on Cybercrime (2001) now involves approximately 100 cooperating states, and Convention 108 on data protection has become a binding standard for democratic states worldwide.
Open-Source AI and the Democratization of Technology
The tension between open-source and proprietary approaches to AI development represents one of the most consequential debates in technology governance. The Chatham House report documents how early AI was largely built on open-source principles, with shared research, publicly available datasets, and collaborative development across academic institutions worldwide. However, as commercial potential became apparent, major players increasingly moved to closed, proprietary approaches — a shift that critics argue protects market share more than it improves safety.
The BLOOM project stands as a powerful counterexample. Launched in 2022, BLOOM is a large language model covering 46 languages, built by over 1,000 researchers using “justly sourced” data through an open-source methodology. It demonstrates that transparent, community-driven AI development can produce powerful systems while maintaining ethical standards around data sourcing and cultural representation. Projects like these challenge the narrative that only well-funded private labs can produce frontier AI systems and raise important questions about whether responsible innovation requires proprietary control.
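As a concrete illustration of that openness, the sketch below generates text from a small open BLOOM checkpoint through the Hugging Face transformers library. The checkpoint name bigscience/bloom-560m refers to the smallest published variant of the model; the French prompt and generation settings are illustrative choices, not anything prescribed by the BLOOM project.

```python
# Minimal sketch: generating text with an open BLOOM checkpoint via
# Hugging Face transformers (pip install transformers torch).
# "bigscience/bloom-560m" is the smallest published BLOOM variant,
# chosen so the example runs on modest hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A French prompt, one of the 46 languages in BLOOM's training corpus;
# greedy decoding (do_sample=False) keeps the output deterministic.
inputs = tokenizer("La gouvernance de l'IA exige", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights, tokenizer, and training-data documentation are all public, this kind of end-to-end inspection is possible in a way it is not for closed commercial models.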
The implications for governance are profound. If AI development remains concentrated among a handful of companies, regulatory frameworks will inevitably be shaped by those companies’ interests and worldviews. Open-source alternatives provide not only technological diversity but governance diversity — ensuring that multiple approaches to safety, ethics, and deployment can be tested, compared, and refined through transparent community processes rather than behind corporate walls.
Community-Based AI: Solving the Global Language Crisis
Perhaps the most sobering revelation in the Chatham House report concerns AI’s language crisis. Kathleen Siminyu’s essay demonstrates that global AI tools are fundamentally broken for the majority of the world’s languages. ChatGPT is far more useful to English speakers than to anyone else, and for at least 15 poorly represented languages the training data is wholly deficient. Non-English-speaking communities face a higher likelihood that AI will produce gibberish that merely resembles their language, with minimal factual knowledge of local contexts.
The data is devastating. When the South African AI company Lelapa AI evaluated ChatGPT’s performance in Zulu, one of South Africa’s most widely spoken languages, ChatGPT 3.5 achieved a BLEU score of zero for machine translation. BLEU measures n-gram overlap between a system’s output and human reference translations, so a score of zero means the model’s Zulu output shared essentially nothing with competent human translations. By contrast, LLMs built by teams of native speakers performed significantly better at named entity recognition and translation tasks. This finding underscores what the report calls “the huge value of context-specific AI work” and shows that global AI tools are “still not achieving the accuracy of low-resource and Africa-centric language models, on simple tasks.”
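To make the metric concrete, here is a minimal sketch of a BLEU computation using the open-source sacreBLEU library. The Zulu and English sentences are hypothetical stand-ins, not data from Lelapa AI’s evaluation; they simply show how a system output that shares no n-grams with the reference translation collapses to a score of zero.

```python
# Minimal BLEU sketch using sacreBLEU (pip install sacrebleu).
# Sentences are hypothetical illustrations, not Lelapa AI's data.
import sacrebleu

# One human reference translation per source sentence.
references = [["Ngiyabonga kakhulu ngosizo lwakho."]]

# A fluent-looking system output that shares no n-grams with the
# reference: the zero-score failure mode described above.
hypotheses = ["Thank you very much for your help."]

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.1f}")  # 0.0: no n-gram overlap at all
```

Because corpus BLEU is a geometric mean of 1- to 4-gram precisions, a score of exactly zero means the system never reproduced even one matching word sequence from any reference, which is why the result signals complete translation failure rather than mere inaccuracy.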
The risks extend beyond mere inconvenience. The globalization of English-centric AI could facilitate what the report terms “colonial AI” — an insistence on English with little regard for local cultures and languages. International AI conferences are typically held in the Global North, with low African attendance due to distance and costs. Data annotation is usually outsourced and poorly paid, with workers exposed to toxic content affecting their mental health. Yet grassroots movements are pushing back: organizations like Masakhane Research Foundation, GhanaNLP, EthioNLP, and HausaNLP are building AI tools that serve African languages and communities. When the prestigious ICLR conference was held in Kigali, Rwanda in 2023, African attendance grew by over 1,000 percent — demonstrating the transformative impact of geographic accessibility.
AI Ethics and Responsible Deployment Frameworks
The Chatham House report makes a crucial distinction that many governance frameworks overlook: a movie recommendation algorithm and a parole decision system require fundamentally different safeguards. This context-dependent approach to AI ethics recognizes that risk assessment for the United Kingdom should not mirror assessments for Bangladesh or Kenya, and that one-size-fits-all governance frameworks will inevitably fail to protect the most vulnerable populations.
AI “hallucinations” — where systems generate plausible but entirely fabricated information — represent a particularly challenging ethical frontier. The report notes that these hallucinations challenge our ability to distinguish fiction from reality, with potentially catastrophic consequences in domains like healthcare, legal proceedings, and democratic discourse. Combined with biased datasets that make automated decisions unreliable, the current generation of AI systems requires robust ethical guardrails that most deployment contexts simply do not have.
The Kaitiakitanga Licence developed by Te Hiku Media in New Zealand offers an innovative model for ethical AI governance. Te Hiku Media, a collectively owned charitable media organization, gathered data for Te Reo — the Māori language at risk of extinction following British colonial policies. Recognizing the risk that foreign tech companies could develop products from this data and sell them back to Māori communities, Te Hiku developed a licence ensuring that access aligns with Māori customs, protocols, and values. This approach demonstrates how communities can assert digital self-determination within AI governance frameworks.
Public Sector AI: The British AI Corporation Model
Among the report’s most creative proposals is the concept of a “British AI Corporation” (BAIC), modeled on the BBC’s founding in 1922 as a response to radio — then a revolutionary technology. Just as the BBC was established to ensure that broadcasting served the public interest rather than solely commercial imperatives, a BAIC would develop AI of genuine public utility, operating under a charter with a usage-based financial model that ensures sustainable independence from both government and corporate pressures.
The BAIC model addresses several critical governance gaps. It would pay for training datasets rather than “scraping” the internet, a practice that has generated significant legal and ethical controversy across the AI industry. By establishing a public-sector alternative to commercially driven AI development, it could ensure that AI systems are designed with public benefit, rather than profit maximization, as their primary objective. The model could extend to many countries, each establishing its own public AI corporation adapted to local needs, languages, and cultural contexts.
This proposal directly confronts the report’s broader concern that liberal democracies have retreated into “comfortable roles as regulators and rule-makers” while ceding the actual development of AI technology to private companies and, increasingly, to autocratic and authoritarian states. Without state-backed AI development capacity, democratic nations risk finding themselves unable to demand that technology meets their normative standards. The BAIC concept represents one pathway toward ensuring that democratic governance and AI development can reinforce rather than undermine each other.
Geopolitical Rivalry and the Future of AI Regulation
The geopolitical dimension of AI governance permeates every essay in the Chatham House report. China and the United States are the most significant AI investors globally, locked in an increasingly tense technological rivalry that the report characterizes as a “new cold war.” How AI will transform the world is fundamentally a geopolitical question: AI shaped in a cutthroat marketplace produces different outcomes than AI shaped by monopoly power, just as AI led by universities differs from AI led by states, militaries, philanthropies, or tech companies.
The EU AI Act, while groundbreaking as comprehensive product safety legislation for artificial intelligence, illustrates the limitations of regional approaches. Intended for the European single market, it lacks legal force elsewhere and reflects a distinctively European approach that is at odds with both US and Chinese governance philosophies. This fragmentation creates regulatory arbitrage opportunities and complicates the development of global standards that the interconnected nature of AI technology demands.
Yet the report identifies promising developments. National AI safety institutes have been established in the UK, US, Japan, and Canada, attracting new talent and creating institutional capacity for AI governance. The Council of Europe AI treaty’s inclusion of non-European negotiating partners suggests that international cooperation on AI governance remains possible despite geopolitical tensions. The Deep Learning Indaba’s leishmaniasis drug discovery challenge — where 350 community volunteers identified promising treatments for a neglected tropical disease — demonstrates how AI governance can advance both technological progress and human welfare when structured around genuine global participation.
Multi-Stakeholder Approaches to Responsible AI Governance
The overarching conclusion of the Chatham House report is that responsible AI governance cannot occur in silos. Every essay points toward the necessity of multi-stakeholder approaches that bring together governments, tech companies, civil society, academia, and affected communities in genuine dialogue. The Council of Europe treaty negotiations exemplified this principle, with observers from civil society, academia, business, and technical communities all contributing to the final text.
The report identifies several best practice elements for future governance frameworks: interdisciplinary multi-stakeholder processes that ensure diverse perspectives; sector-specific and application-specific regulatory priorities that avoid one-size-fits-all approaches; and dynamic, agile legislative processes that can keep pace with technological change. The concept of regulatory “releases” — governance updates published like software — reflects a fundamental rethinking of how legal frameworks must evolve to remain relevant in an era of rapid AI advancement.
Twenty years of digital technology have proven that power through technology rarely maps to geographies, markets, or existing rules. Access remains wildly uneven across internet connectivity, digital infrastructure investment, and mobile services. The race for market share by US and Chinese firms carries hard lessons about the often questionable effectiveness of regulation in anticipating future developments. Yet the report remains cautiously optimistic, pointing to the growing ecosystem of AI safety institutes, community-based AI initiatives, and international legal frameworks as evidence that coordinated global governance of artificial intelligence is not only necessary but achievable — if nations, companies, and communities can summon the political will to act before it is too late.
Frequently Asked Questions
What is the Chatham House report on AI and global governance about?
The Chatham House report, published in June 2024, is a collection of nine essays examining how artificial intelligence creates novel governance challenges that existing institutions are not equipped to handle. It covers proposals including a CERN for AI, the Council of Europe AI treaty, open-source democratization, community-based AI in Africa, ethical frameworks, and the growing power imbalance between tech companies and governments.
What is the CERN for AI proposal?
The CERN for AI proposal envisions an international AI research facility modeled on CERN (the European Organization for Nuclear Research). It would pool resources beyond individual countries, support very large computing infrastructure, stimulate public sector funding, insulate research from national political agendas, and redistribute talent from private AI labs to reduce the moral hazard of private firms shaping governance of the technologies they create.
How does the Council of Europe AI treaty work?
The Council of Europe Framework Convention on AI, adopted on May 17, 2024, is the first legally binding international treaty on artificial intelligence. It establishes obligations around human rights, democracy, and rule of law in AI deployment. The treaty uses a flexible framework format allowing continuous development, with regulatory updates published like software releases. Non-European countries including the US, Canada, Japan, and Australia participated in negotiations.
Why is community-based AI important for global governance?
Community-based AI is critical because global AI tools predominantly serve English speakers and work poorly in other languages. For example, ChatGPT 3.5 achieved a BLEU score of zero when translating Zulu. Community-driven initiatives like Masakhane, GhanaNLP, and Te Hiku Media create AI tools that serve local languages and cultural contexts, providing a route to digital self-determination and ensuring AI benefits reach underrepresented populations.
What role does open-source AI play in responsible governance?
Open-source AI plays a vital democratization role by making AI technology accessible beyond a handful of large corporations. Projects like BLOOM, a large language model covering 46 languages built by over 1,000 researchers using justly sourced data, demonstrate that transparent, community-driven AI development can produce powerful systems while maintaining ethical standards. However, big players have increasingly moved to closed proprietary approaches, threatening the open-source ecosystem.
How are tech companies influencing AI governance globally?
Tech companies wield enormous influence over AI governance through what scholars call a technopolar world, where power derives from control of computing capacity, algorithms, and data. Companies like OpenAI, Google DeepMind, and Anthropic dominate technical development, creating a significant power imbalance where regulators depend on the regulated entities for technical understanding. Their terms of service effectively set norms on privacy, access to information, and freedom of expression for billions worldwide.