Geopolitics of AI: European Governance and Power

📌 Key Takeaways

  • EU AI Act leads globally: Europe’s risk-based AI regulation is the world’s first comprehensive framework, with fines up to €35 million or 7% of global turnover for violations.
  • Massive investment gap: US private AI investment reached $47.4 billion in 2022, while the EU invested an estimated $14–17 billion as of 2020, creating a significant competitive disadvantage.
  • Technopolitics matters: AI systems are not neutral technologies but social constructs shaped by power structures, narratives, and economic interests of their creators.
  • Arms race risk: The mere perception of an AI arms race may push governments and corporations to cut corners on safety research and responsible AI practices.
  • Fragmented governance: Over 60 countries now have national AI strategies, creating a crowded regime complex that complicates coordinated global AI governance.

Understanding AI Technopolitics and Geopolitical Power

Artificial intelligence has become one of the most consequential technologies shaping the global order in the 21st century. As Carnegie Europe Fellow Raluca Csernatoni argues in her landmark analysis, understanding the geopolitics of AI requires moving beyond simplistic narratives of technological competition. Instead, we must examine the concept of technopolitics — the complex interplay between AI technology, geopolitical structures, power dynamics, and the narratives that shape policy decisions across nations.

At its core, technopolitics recognizes that AI systems are not neutral instruments. They are social constructs shaped by the discourses, values, economic interests, and power structures of their creators, funders, deployers, and users. The creation, benefits, and control of AI are unevenly distributed across geographies and populations. This insight is critical for understanding why different nations pursue radically different approaches to AI governance and regulation.

The European Union, the United States, and China each bring distinct philosophical and strategic frameworks to AI governance. Where the US emphasizes market-led innovation and voluntary frameworks, and China pursues state-directed technological supremacy integrated with surveillance capabilities, Europe has positioned itself as a champion of human-centric AI governed by democratic values, fundamental rights, and the rule of law. This positioning, however, comes with significant trade-offs and challenges that Csernatoni’s research illuminates with compelling detail.

Narratives of AI Power Shaping Global Policy

Two dominant narrative categories drive global AI policy decisions, and understanding them is essential for anyone seeking to navigate the geopolitics of artificial intelligence. The first is what Csernatoni calls narratives of AI power, which frame AI as a crucial instrument of statecraft, power projection, and economic prowess. These narratives suggest that mastery of AI will determine the rise and fall of nations, directly influencing funding decisions, regulatory approaches, and expectations about AI’s role in national security.

The second category consists of narratives of AI disruption, which frame artificial intelligence as a technology capable of inducing paradigm shifts across every domain of human activity. These range from utopian visions of unprecedented economic growth and scientific breakthroughs to dystopian fears of mass unemployment, autonomous weapons, and even existential threats to humanity. As Carnegie’s Matt O’Shaughnessy has warned, the “hype over AI superintelligence could lead policy astray,” distracting attention from more pressing risks, diverting resources, and shaping policy actions that primarily serve the interests of powerful technology companies.

These narratives function as performative acts — they do not merely describe potential futures but actively shape the conditions that make certain futures more likely. When European Commission President Ursula von der Leyen quoted, in her 2023 State of the Union address, the AI developers’ warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” this framing directly influenced how European institutions allocated resources and prioritized regulation. Understanding who benefits from specific AI narratives is therefore crucial for evaluating any governance proposal.

The EU AI Act: A Landmark in AI Regulation

The EU AI Act represents the most ambitious attempt by any jurisdiction to create comprehensive, binding rules for artificial intelligence systems. First proposed by the European Commission in April 2021, the regulation establishes a risk-based framework with four tiers that determine the obligations placed on AI developers and deployers. This approach — regulating applications rather than the underlying technology — has been hailed as both pragmatic and innovative by EU digital policy experts.

At the highest tier, the AI Act outright bans certain practices deemed incompatible with EU fundamental rights. These include biometric categorization systems using sensitive characteristics, untargeted scraping of facial images from the internet or surveillance footage, emotion recognition systems in workplaces and educational institutions, social scoring by governments, and AI systems designed to manipulate human behavior or exploit vulnerabilities. Violations of these bans carry the most severe penalties — up to €35 million or 7% of global annual turnover, whichever is higher.
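
To make the penalty arithmetic concrete, here is a minimal sketch of the “whichever is higher” rule — an illustration, not legal guidance; the turnover figure in the usage example is hypothetical:

```python
def max_prohibited_practice_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for banned AI practices under the EU AI Act:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    FLAT_CEILING_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FLAT_CEILING_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Hypothetical example: a firm with EUR 2 billion in global annual turnover
# faces a ceiling of EUR 140 million, since 7% of turnover exceeds EUR 35 million.
print(f"EUR {max_prohibited_practice_fine(2e9):,.0f}")  # EUR 140,000,000
```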

High-risk AI systems — those used in critical infrastructure, education, employment, law enforcement, and border management — must meet stringent requirements including risk management, data quality standards, technical documentation, human oversight provisions, and accuracy benchmarks. The Act also introduces transparency obligations for general-purpose AI models, requiring providers to disclose technical documentation, copyright compliance measures, and summaries of training data content. This layered approach demonstrates how regulatory innovation can complement technological innovation.
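
As a rough sketch of how this layered approach maps risk levels to obligations — a simplification for illustration, where the tier names follow the Act’s four-level structure but the obligation lists are abridged paraphrases, not legal text:

```python
# Simplified sketch of the AI Act's four risk tiers; the obligation lists
# are abridged paraphrases for illustration, not the regulation's wording.
RISK_TIERS = {
    "unacceptable": ["banned outright (e.g. social scoring, manipulative systems)"],
    "high": [
        "risk management system",
        "data quality standards",
        "technical documentation",
        "human oversight provisions",
        "accuracy benchmarks",
    ],
    "limited": ["transparency duties (e.g. disclosing that users face an AI system)"],
    "minimal": ["no mandatory obligations; voluntary codes of conduct"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the abridged obligations attached to a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))
```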

However, the Act’s journey through the legislative process revealed deep tensions within Europe. France, home to prominent AI startups Mistral AI and LightOn (Hugging Face, though headquartered in the United States, was likewise founded by French entrepreneurs), initially pushed back against provisions it feared would hamper European AI companies. Germany and Italy also raised concerns before ultimately supporting the final text. Civil society organizations have warned that eleventh-hour compromises weakened biometric surveillance restrictions, particularly regarding law enforcement exceptions for real-time biometric identification in public spaces.


Europe’s AI Investment Gap and Strategic Autonomy

One of the most striking findings in Csernatoni’s analysis is the enormous gap between Europe’s AI governance ambitions and its investment reality. In 2022, US private investment in artificial intelligence reached a staggering $47.4 billion, while China invested approximately $13.4 billion. The European Union, by contrast, invested between $14 and $17 billion in 2020, with a target of reaching €20 billion annually by 2030 — a figure that still falls far short of US spending levels.

The data becomes even more concerning when examining the venture capital landscape. Between 2015 and 2022, the United States captured 40% of global AI venture capital and private equity, while Europe secured just 12%. Asia, including China, attracted 32%. The US also produced almost twice as many newly funded AI companies as the EU and UK combined, and 3.4 times more than China. This investment disparity has profound implications for Europe’s ability to develop indigenous AI capabilities and maintain technological sovereignty.

European funding mechanisms, while growing, remain modest in comparison. The Horizon 2020 program allocated €1.5 billion to AI between 2018 and 2020. The Digital Europe Programme budgeted €2.5 billion for AI from 2021 to 2027. The European Investment Bank launched a €150 million AI investment facility in 2020, supplemented by a €100 million European Investment Fund pilot for AI and blockchain. While these figures represent meaningful commitments, they highlight a structural challenge: Europe’s per-capita AI R&D spending trails the United States by a factor of 2.7.
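
As a back-of-the-envelope illustration of how a per-capita spending ratio such as that 2.7 factor is computed — the spending and population inputs below are hypothetical placeholders, not the paper’s sourced data:

```python
def per_capita_ratio(spend_a: float, pop_a: float,
                     spend_b: float, pop_b: float) -> float:
    """Ratio of side A's per-capita spending to side B's."""
    return (spend_a / pop_a) / (spend_b / pop_b)

# Placeholder inputs, not sourced figures: $50bn across ~335m people versus
# $17bn across ~448m people yields a per-capita lead of roughly 3.9x.
# The factor the text reports for per-capita AI R&D spending is 2.7.
print(round(per_capita_ratio(50e9, 335e6, 17e9, 448e6), 1))  # 3.9
```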

This investment gap creates a fundamental tension in European AI strategy. Von der Leyen declared in 2019 that Europe must have “mastery and ownership of key technologies,” including artificial intelligence. Yet without matching this rhetoric with commensurate funding, Europe risks becoming primarily a regulatory power in AI — setting rules for technologies largely developed elsewhere. Several EU member states have recognized this challenge, with Belgium, Finland, the Netherlands, Portugal, and Slovakia issuing a joint nonpaper advocating for “open strategic autonomy” that balances protectionist impulses with the need for international collaboration.

Transatlantic AI Governance and US-EU Relations

The transatlantic relationship remains the most important bilateral axis for AI governance, yet it is marked by both deep alignment and significant friction. Through institutions like the EU-US Trade and Technology Council (TTC) and the Global Partnership on AI (GPAI), the two blocs have demonstrated a shared commitment to responsible and trustworthy AI development. Both sides endorse principles of transparency, accountability, and human oversight — values that contrast sharply with China’s approach to AI governance.

However, beneath this surface alignment lie substantial differences. The United States relies primarily on executive orders, voluntary frameworks, and guidance through institutions like NIST’s AI Risk Management Framework. The EU, by contrast, has pursued binding hard law through the AI Act. This regulatory divergence creates compliance challenges for companies operating across both jurisdictions and raises questions about interoperability between different governance models.

European policymakers are also increasingly concerned about US dominance in AI research and development conducted within Europe. American technology companies employ significant numbers of European AI researchers and have established major research laboratories across EU member states. While this brings investment and jobs, it raises questions about whether European AI talent is primarily serving American strategic interests. The European economic security strategy proposed in June 2023 explicitly addressed these concerns, seeking to balance openness to foreign investment with protection of critical capabilities.

Tensions also emerged within the Council of Europe negotiations on an AI framework convention. Reports indicated that the European Commission was preparing to push back against US-led attempts to exempt the private sector from binding obligations — a position that would significantly weaken the convention’s effectiveness. These institutional disagreements reveal that transatlantic AI cooperation, while essential, requires constant negotiation and cannot be taken for granted.

The US-China AI Rivalry and European Positioning

The US-China AI rivalry represents the dominant geopolitical axis around which much of the global AI governance debate revolves. This competition extends far beyond technology into norms, standards, supply chains, economic security, and ideological narratives about the role of AI in society. For Europe, navigating this rivalry without being forced into a simplistic binary choice has become one of the most consequential foreign policy challenges of the decade.

Critically, Csernatoni notes that Europeans do not share the same urgency as Americans regarding the perceived threat from China. While Washington frames AI competition with Beijing through a national security lens that demands rapid technological advancement, European capitals tend to view AI primarily through economic, social, and regulatory dimensions. French President Emmanuel Macron’s repeated calls for European strategic autonomy reflect a broader continental desire to chart an independent course rather than simply align with American priorities.

The challenge of engaging China in global AI governance presents a genuine dilemma. Any governance framework that excludes the world’s second-largest AI power yields only marginal results. Yet including China in governance discussions inevitably reshapes the agenda, particularly regarding human rights standards and surveillance applications. China has been simultaneously a regulatory pioneer — publishing governance principles for new-generation AI as early as 2019 — and a practitioner of AI-powered mass surveillance that contradicts European fundamental rights frameworks.

Research collaborations further complicate the picture. US-China AI research partnerships quadrupled between 2010 and 2021, demonstrating that academic and scientific engagement continues even as geopolitical tensions escalate. This paradox — deep scientific interconnection alongside strategic rivalry — makes simplistic decoupling narratives unrealistic and highlights the need for nuanced governance approaches that Europe is uniquely positioned to develop.


Global AI Governance Regime Complex

The proliferation of AI governance initiatives has created what scholars describe as a regime complex — a fragmented network of overlapping international agreements, institutions, and normative frameworks. By 2023, more than 60 countries had adopted national AI strategies, and 37 AI-related bills were passed globally in 2022 alone. This regulatory explosion reflects both the urgency of AI governance and the difficulty of achieving coordinated international action.

Key pillars of this emerging regime complex include the OECD AI Principles, which have been adopted by over 40 countries and provide a foundation for responsible AI development. The G7 Hiroshima AI Process established international guiding principles and a code of conduct for AI systems. The UNESCO Recommendation on the Ethics of AI represents the broadest multilateral agreement, while the Council of Europe’s Framework Convention on AI aims to be the first legally binding international treaty on artificial intelligence.

The United Kingdom carved out a distinctive role by hosting the AI Safety Summit at Bletchley Park in November 2023, which produced the Bletchley Declaration, signed by 28 governments and the European Union. This event was significant not only for its substance but for signaling the UK’s post-Brexit ambition to position itself as a global AI governance hub, bridging American innovation-first approaches and European regulatory frameworks. The Large AI Grand Challenge announced in the same period was, by contrast, a European Commission initiative to support EU AI startups, a reminder that Brussels was contesting the same ground.

For the EU, navigating this crowded governance landscape presents both opportunities and risks. Europe’s regulatory expertise and established institutional frameworks provide advantages in shaping international norms. However, the proliferation of governance forums also risks diluting EU influence and creating contradictions between different frameworks to which member states are simultaneously committed. The establishment of the European AI Office within the Commission represents an attempt to centralize EU coordination, but its effectiveness will depend on securing adequate resources and maintaining coherence across the bloc’s 27 member states.

Corporate Concentration and AI Power Dynamics

One of the most significant shifts documented in Csernatoni’s analysis is the dramatic transfer of AI development from academia to the private sector. Stanford University’s AI Index reported a striking statistic for 2022: industry produced 32 significant machine learning models that year, compared with just three from academia. This represents a fundamental rebalancing of AI power, with profound implications for governance, accountability, and the public interest.

The concentration of AI capabilities within a small number of technology corporations — primarily American — creates new challenges for democratic governance. Companies like OpenAI, Google DeepMind, Microsoft, Anthropic, and Meta control the infrastructure, data, computational resources, and talent necessary for cutting-edge AI development. These companies have become independent actors in world politics, wielding influence that rivals or exceeds that of many nation-states.

The launch of ChatGPT in November 2022 dramatically accelerated public awareness of AI capabilities and triggered a scramble among governments to develop regulatory responses. Microsoft researchers’ claim that GPT-4 showed “sparks” of artificial general intelligence — a characterization disputed by many AI scientists — further fueled both investment and anxiety. This corporate-driven narrative cycle, where each capability announcement generates both excitement and regulatory urgency, demonstrates how private sector actors shape the governance agenda in ways that often serve their commercial interests.

For European policymakers, corporate concentration in AI poses a dual challenge. On one hand, regulating American technology giants requires significant institutional capacity and political will. On the other, nurturing European AI companies — exemplified by startups like Mistral AI and the French-founded Hugging Face — demands a regulatory environment that does not inadvertently advantage established players through compliance costs that smaller firms cannot bear. The EU AI Act attempts to balance these competing demands, but its effectiveness in preventing further corporate concentration remains to be tested.

Military AI and the Arms Race Perception

The military dimension of AI competition adds urgency and complexity to governance debates. Ukraine has emerged as what Time magazine described as an “AI war lab” — a testing ground for military AI technologies ranging from autonomous drones to intelligence analysis systems. Companies like Palantir and Clearview AI have deployed their technologies in the conflict, providing unprecedented real-world validation of military AI applications.

The perception of an AI arms race between major powers carries dangers that Csernatoni’s analysis underscores with particular clarity. As the paper notes, “whoever is winning the AI arms race is not the key issue. Rather, the mere perception of an arms race may push governments and tech giants to eschew trustworthy and responsible AI and cut corners in safety research and regulation.” This insight highlights a paradox at the heart of AI governance: the very urgency that motivates governance efforts also creates pressure to weaken the safeguards being put in place.

The US Department of Defense launched a generative AI task force in August 2023, signaling American intent to integrate frontier AI capabilities into military operations. The European Defence Fund has supported emerging and disruptive technology development, though Europe’s approach to military AI remains more cautious than Washington’s. Von der Leyen herself has acknowledged the dual-use nature of AI, noting that it “is a general technology that is accessible, powerful and adaptable for a vast range of uses — both civilian and military.”

The arms race framing also represents what Csernatoni calls a “failure of imagination” — a reliance on age-old international relations tropes of great power competition and existential threats that may not adequately capture the novel challenges posed by AI. Moving beyond these narratives toward more nuanced governance frameworks that address actual, documented harms rather than speculative scenarios remains one of the most important tasks facing policymakers on both sides of the Atlantic.

Future of European AI Governance and Policy

Looking ahead, European AI governance faces a series of critical challenges that will determine whether the continent’s regulatory leadership translates into genuine influence over the global AI landscape. The most immediate challenge is the successful operationalization of the EU AI Act across 27 member states with varying levels of institutional capacity, technical expertise, and political commitment to enforcement.

The Act’s implementation requires establishing effective national market surveillance authorities, building technical competence to evaluate AI systems, and developing consistent interpretive guidance that prevents fragmentation across member states. The new European AI Office must coordinate these efforts while also engaging with international partners and monitoring a rapidly evolving technological landscape. Given that full enforcement of the AI Act is not expected until approximately 2026, there is a significant window during which implementation capacity must be built.

Beyond the Act itself, Europe must develop a more coherent common foreign policy on AI — a capacity that currently remains underdeveloped. As the regime complex of international governance forums continues to grow, the EU needs coordinated positions that reflect both the diversity of member state interests and the bloc’s collective aspiration to shape global norms. Diversifying strategic partnerships beyond the traditional transatlantic axis — engaging more deeply with India, Japan, South Korea, and other democratic technology powers — will be essential for amplifying European influence.

Perhaps most fundamentally, Europe must resolve the tension between its regulatory ambitions and its investment reality. The question of whether innovation and regulation are complementary or conflicting — what Csernatoni calls a “misleading question” — will be answered not by theoretical arguments but by practical outcomes. If European AI companies can thrive within the AI Act’s framework, demonstrating that responsible innovation is commercially viable, the Brussels effect may indeed establish a global standard. If, however, compliance costs drive innovation elsewhere, Europe risks becoming a rule-maker for technologies it does not develop — a scenario that would undermine both its economic competitiveness and its geopolitical influence.

The inclusion of civil society in governance processes remains another critical priority. Despite the AI Act’s extensive consultation process, persistent barriers — including limited time, financial resources, and technical expertise — continue to exclude many stakeholders whose perspectives are essential for ensuring that AI governance serves broad public interests rather than narrow commercial ones. Translating the Act’s complex legal provisions into accessible language and creating meaningful participation mechanisms will determine whether European AI governance lives up to its human-centric aspirations.


Frequently Asked Questions

What is the EU AI Act and why does it matter for global AI governance?

The EU AI Act is the world’s first comprehensive, horizontal, risk-based regulation of artificial intelligence systems. It establishes four risk tiers, bans certain AI practices like social scoring, and imposes fines up to €35 million or 7% of global turnover. It matters because it could set a global regulatory standard through the Brussels effect, influencing how AI is governed worldwide.

How does the geopolitics of AI affect European technology policy?

Geopolitical competition among the US, China, and the EU shapes European AI policy by creating pressure to balance innovation with regulation. The EU’s estimated $14–17 billion in AI investment trails US private investment of $47.4 billion (2022) by a wide margin and only narrowly exceeds China’s $13.4 billion, pushing Europe toward regulatory leadership rather than technological dominance, while narratives of AI power drive funding and strategic decisions.

What is technopolitics in the context of artificial intelligence?

Technopolitics refers to the complex interplay between AI technology and geopolitical structures, power dynamics, narratives, norms, and economic influences. It recognizes that AI systems are not neutral tools but social constructs shaped by the values, interests, and power structures of their creators, funders, deployers, and users.

How does the US-China AI rivalry impact European AI governance?

The US-China AI rivalry pressures Europe to choose sides while pursuing strategic autonomy. US private AI investment reached $47.4 billion in 2022 compared to the EU’s estimated $14–17 billion. This disparity forces Europe to rely on regulatory power rather than technological competition, while navigating transatlantic partnerships through forums like the EU-US Trade and Technology Council.

What are the main challenges facing global AI governance frameworks?

Key challenges include a fragmented regime complex of overlapping governance frameworks, the tension between innovation and regulation, corporate concentration in AI development shifting power from academia to industry, the difficulty of including China in governance without undermining human rights standards, and the risk that arms race narratives push governments to cut corners on AI safety.
