AI Policy Governance 2025: Key Findings From the Stanford AI Index Report
Table of Contents
- The Global AI Policy Landscape in 2025
- Stanford AI Index 2025: Methodology and Scope
- Global AI Legislation Trends and Statistics
- The EU AI Act: A Landmark Regulatory Framework
- US Federal and State AI Policy Developments
- Deepfake Regulation and Election Integrity
- Government AI Investment and Public Spending
- AI Safety Institutes and International Cooperation
- Country-by-Country AI Policy Comparisons
- The Future of AI Governance: What Comes Next
📌 Key Takeaways
- 204 AI laws passed globally: Since 2016, 39 countries have enacted AI-related legislation, with 40 new laws in 2024 alone — the second-highest year on record.
- US states lead the charge: State-level AI legislation exploded from 49 laws in 2023 to 131 in 2024, while only 4 federal AI bills were passed despite 221 being proposed.
- $250+ billion in government commitments: Countries worldwide announced massive AI investment packages in 2024 and early 2025, from Saudi Arabia’s $100 billion to France’s €109 billion pledge.
- EU AI Act sets the global standard: The world’s first comprehensive AI regulation introduces risk-based rules, with most provisions taking effect in 2026.
- AI safety institutes go global: From 2 institutes in late 2023 to 11+ countries by end of 2024, with a formal international network and $11 million in research funding.
The Global AI Policy Landscape in 2025
Artificial intelligence has moved from the margins of policy debate to the center of government agendas worldwide. The Stanford AI Index Report 2025 — one of the most comprehensive annual assessments of AI progress and governance — paints a vivid picture of how rapidly the AI policy governance landscape is evolving in 2025. From sweeping legislation in the European Union to an explosion of state-level regulation in the United States, policymakers are racing to keep pace with technology that is advancing faster than any regulatory framework can adapt.
The numbers tell a compelling story. In 2016, just one AI-related law was passed anywhere in the world. By 2024, that figure had skyrocketed to 40 — and a total of 204 AI-related laws have now been enacted globally across 39 countries. Mentions of artificial intelligence in legislative proceedings across 75 geographic areas grew by 21.3% in a single year, reaching 1,889 total mentions. These are not abstract statistics; they represent a fundamental shift in how governments view their responsibility to regulate, fund, and shape the development of AI technologies.
For organizations seeking to understand and navigate these regulatory shifts, having access to authoritative source material is essential. Interactive platforms like Libertify’s Interactive Library make complex policy reports accessible and engaging, transforming dense PDFs into experiences that teams can actually explore and discuss.
Stanford AI Index 2025: Methodology and Scope
The AI Index Report, produced by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), has been published annually since 2017, making it one of the most trusted and widely cited sources for tracking global AI trends. Chapter 6 of the 2025 edition focuses specifically on policy and governance, covering three major areas: global AI policy events in 2024, AI and policymaking (legislative and regulatory analysis), and public investment in AI.
The methodology is rigorous. The report analyzes legislative records from 114 countries for AI-related legislation passed between 2016 and 2024. It examines legislative proceedings from 75 geographic areas to track how frequently AI appears in parliamentary debates. For public investment analysis, the researchers compiled government contract and grant data from the United States, United Kingdom, and 25 European countries spanning 2013 to 2023.
What makes this chapter particularly valuable is its comparative approach. Rather than focusing on a single jurisdiction, the Stanford AI Index provides a global panorama that reveals which countries are leading in AI governance, where regulatory gaps persist, and how different regions are allocating public resources to artificial intelligence. This breadth of analysis is what distinguishes it from narrower policy reviews and makes it an essential reference for anyone working in AI policy governance in 2025.
Global AI Legislation Trends and Statistics
The acceleration of global AI legislation is perhaps the most striking finding in the Stanford AI Index 2025. From a single AI-related law passed worldwide in 2016, the global count rose to 40 in 2024 — the second-highest year on record after 2022. Cumulatively, 204 AI-related laws have been enacted since 2016, spanning 39 countries and signaling a truly global legislative movement.
The countries leading in cumulative AI legislation may surprise some observers. The United States tops the list with 27 laws, followed by Portugal (20), Russia (20), Belgium (18), and South Korea (13). Spain, Italy, and the United Kingdom round out the top ten. In 2024 specifically, Russia led with 7 new AI laws, while Belgium and Portugal each passed 5.
An interesting correlation emerges when comparing legislative activity with parliamentary discussion. The report found that greater parliamentary discussion of AI generally correlates with more AI legislation, though some countries deviate. Belgium, Portugal, and Russia passed large volumes of legislation despite relatively few AI mentions in their proceedings, suggesting these countries may have taken more executive-driven approaches to AI regulation. Meanwhile, Spain led all countries in AI mentions within legislative proceedings in 2024, with 314 references — more than double the next country, Ireland, with 145.
The ninefold growth in AI mentions across global legislative proceedings since 2016 demonstrates that AI governance is no longer a niche concern. It has become a mainstream policy priority across continents and political systems, from established democracies to developing economies. The Libertify Interactive Library offers a growing collection of reports that track these trends in detail.
Transform complex policy reports into interactive experiences your team will actually read.
The EU AI Act: A Landmark Regulatory Framework
On March 13, 2024, the European Parliament passed the EU AI Act — the world’s first comprehensive AI regulation. This landmark legislation introduces a risk-based framework for governing artificial intelligence, establishing transparency and reporting obligations that will reshape how AI systems are developed, deployed, and monitored across the European Union.
The Act categorizes AI applications by risk level, from minimal risk (no specific obligations) to unacceptable risk (outright bans). Prohibited practices include social scoring systems, AI designed for human manipulation, and biometric categorization using sensitive characteristics. High-risk applications — such as AI used in critical infrastructure, education, law enforcement, and border management — face stringent requirements for documentation, human oversight, and accuracy standards.
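To make the tiering easier to scan, here is a minimal illustrative sketch (in Python, purely for readability) mapping each risk level to the kind of obligation it carries. The labels and examples are paraphrased summaries rather than legal text, and the limited-risk transparency tier is included based on the Act’s disclosure requirements rather than on the Stanford report’s summary.

```python
# Illustrative only: a simplified view of the EU AI Act's risk tiers as
# summarized above. Descriptions are paraphrased, not legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring, manipulative AI, biometric categorization on sensitive traits)"
    HIGH = "strict obligations: documentation, human oversight, accuracy standards"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content (tier assumed here, not detailed in the report summary)"
    MINIMAL = "no specific obligations"

# Print the tiers from most to least restrictive.
for tier in RiskTier:
    print(f"{tier.name:>12}: {tier.value}")
```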
To enforce this sweeping legislation, the European Commission established the AI Office on May 28, 2024, staffed with over 140 members across five dedicated units. The office’s mandate includes implementing the Act, enforcing standards for general-purpose AI models, coordinating codes of practice, and applying sanctions for non-compliance. On November 14, 2024, the AI Office released its first draft of the Code of Practice for General-Purpose AI, signaling the beginning of the practical implementation phase.
Most provisions of the EU AI Act take effect in 2026 after a two-year implementation period. The legislation has drawn both praise for its comprehensive approach and criticism from industry leaders who argue it imposes excessive compliance burdens that could stifle innovation. Regardless of perspective, the EU AI Act has established a regulatory benchmark that is influencing AI governance discussions worldwide, much as GDPR did for data privacy.
US Federal and State AI Policy Developments
The United States presents a fascinating study in contrasts when it comes to AI policy governance in 2025. At the federal level, there is an enormous gap between legislative ambition and actual law. In 2024, 221 AI-related bills were proposed in Congress — nearly triple the number from 2022 — yet only 4 were actually passed. This legislative gridlock stands in stark contrast to the explosive activity at the state level.
US states have emerged as the true engines of AI regulation. State-level AI laws more than doubled from 49 in 2023 to 131 in 2024, marking a tectonic shift in the US regulatory landscape. California led the nation with 22 new AI laws in 2024, followed by Utah (12) and Maryland (8). Cumulatively from 2016 to 2024, California has enacted 42 AI-related bills, far ahead of Maryland, Virginia, and Utah, each with 17.
Federal regulatory activity, while not resulting in many new laws, has been substantial through executive action. The Stanford report documents 59 AI-related federal regulations introduced in 2024, more than double the 25 issued in 2023. These regulations came from 42 unique agencies (up from 21 the previous year), with the Department of Health and Human Services leading at 14 regulations. Notable regulatory actions included restrictions on semiconductor exports to China, Consumer Financial Protection Bureau limits on the use of AI-driven algorithmic scores in hiring decisions, and executive orders on preventing foreign access to bulk sensitive data.
One particularly significant moment came on September 29, 2024, when California Governor Gavin Newsom vetoed SB 1047, a bill that would have mandated safety testing for frontier AI models. Newsom argued the bill imposed excessive standards and could hinder innovation. This veto highlighted the ongoing tension between safety-oriented regulation and innovation-friendly policy that characterizes the American approach to AI governance.
Deepfake Regulation and Election Integrity
Among the most rapidly evolving areas of AI policy governance in 2025 is the regulation of deepfakes, particularly regarding elections and intimate imagery. The Stanford AI Index 2025 reveals a dramatic acceleration in legislative activity on this front, driven by growing public concern about AI-generated misinformation and non-consensual imagery.
Before 2024, only five US states — California, Michigan, Washington, Texas, and Minnesota — had enacted laws regulating deepfakes in the context of elections. In a single year, 15 additional states enacted similar measures, bringing the total to 20 state-level laws addressing election-related deepfakes. This surge was undoubtedly fueled by concerns about the 2024 US presidential election and the demonstrated capability of AI to generate convincing fake audio, video, and images of political figures.
The regulation of intimate imagery deepfakes has been even more extensive. By the end of 2024, 36 state-level laws addressed non-consensual intimate deepfakes, with 25 states enacting protections covering all individuals and 5 additional states providing protection specifically for minors. Only Wyoming and Ohio remained without any form of intimate deepfake regulation, making them notable outliers in an otherwise comprehensive national response.
At the federal level, the Federal Election Commission issued guidance clarifying that the Federal Election Campaign Act is “technology neutral” with respect to AI deepfakes, meaning existing prohibitions on fraudulent campaign communications apply regardless of whether the content was generated by AI. This approach — applying existing legal frameworks to new AI-generated threats rather than creating entirely new legislation — represents one strategy for addressing the pace of technological change.
Share regulatory intelligence with your team through interactive document experiences.
Government AI Investment and Public Spending
Perhaps nowhere is the global commitment to artificial intelligence more visible than in government spending figures. The Stanford AI Index 2025 provides detailed analysis of public AI investment through both contracts and grants, revealing massive and growing financial commitments from governments worldwide.
The United States leads global government AI spending by a wide margin. From 2013 to 2023, the US allocated $5.2 billion across 2,678 AI contracts, dwarfing the United Kingdom ($568 million, 555 contracts), Germany ($278 million, 409 contracts), and France ($190 million, 139 contracts). When factoring in grants, the picture becomes even more dramatic: the US government allocated $19.7 billion in AI-related grants across 18,399 awards over the same period, with annual grant funding growing nearly nineteenfold from $230 million in 2013 to $4.49 billion in 2023.
The investment commitments announced in 2024 and early 2025 reached unprecedented levels. Saudi Arabia unveiled Project Transcendence, a $100 billion AI initiative. Abu Dhabi launched the MGX Fund, also valued at $100 billion. France committed €109 billion to AI infrastructure. Canada announced a CA$2.4 billion AI infrastructure package. India launched the $1.25 billion IndiaAI Mission. China injected $47.5 billion into its third-phase semiconductor fund. Singapore pledged $1 billion over five years. Combined, these announcements represent well over $250 billion in planned government AI spending.
A critical divergence exists between US and European spending priorities. In the United States, the Department of Defense accounts for a staggering 75% of all federal AI contract spending, reflecting the country’s defense-first orientation. In contrast, European countries allocate 64% of their AI spending to general public services, 12.3% to education, and 7.4% to health — with defense receiving just 0.84%. This fundamental difference in priorities shapes not only the types of AI systems being developed but also the broader societal impact of public AI investment.
On a per capita basis, the picture shifts somewhat. While the US leads with $1.58 million per 100,000 inhabitants in AI contracts (2013–2023), Finland ($1.29 million) and Denmark ($1.27 million) are close behind, suggesting that smaller European nations are punching above their weight in AI investment. Europe overall has been closing the gap with the US since 2020, with total European AI investment in 2023 approximately 67 times higher than in 2013 — a far steeper growth trajectory than the US’s fifteenfold increase over the same period.
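For readers who want to sanity-check the per-capita and growth figures above, the short Python snippet below reproduces the back-of-the-envelope arithmetic. The population baseline is an assumption used purely for illustration; the AI Index derives its figures from its own underlying data.

```python
# Back-of-the-envelope checks of the spending figures cited above.

def per_100k(total_usd: float, population: int) -> float:
    """Spending per 100,000 inhabitants."""
    return total_usd / (population / 100_000)

us_contracts = 5.2e9          # US AI contract spending, 2013-2023 (cited above)
us_population = 334_000_000   # assumed population baseline, for illustration only

print(f"US contract spending per 100k inhabitants: ${per_100k(us_contracts, us_population) / 1e6:.2f}M")
# -> about $1.56M, close to the ~$1.58M per-capita figure cited above

grants_2013, grants_2023 = 230e6, 4.49e9   # US federal AI grant funding (cited above)
print(f"US grant funding growth, 2013 to 2023: {grants_2023 / grants_2013:.1f}x")
# -> roughly 19.5x, the near-nineteenfold increase described earlier
```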
AI Safety Institutes and International Cooperation
The rapid proliferation of AI safety institutes represents one of the most significant developments in AI policy governance in 2025. What began with just two institutes — in the United States and United Kingdom — following the inaugural AI Safety Summit in November 2023 has expanded into a global network spanning multiple continents.
At the AI Seoul Summit in May 2024, additional AI safety institutes were pledged by Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union. This expansion reflects a growing consensus that AI safety is not a problem any single country can solve alone, and that coordinated international approaches are essential for managing the risks posed by increasingly capable AI systems.
In November 2024, the United States took a further step by launching the International Network of AI Safety Institutes in San Francisco. Initial members include Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, and the United Kingdom, alongside the United States. The network secured over $11 million in global research funding commitments — a modest sum relative to the hundreds of billions flowing into AI development, but a symbolically important foundation for coordinated safety research.
The United Nations has also stepped into the AI governance arena. On March 21, 2024, the UN General Assembly adopted a US-led resolution on “safe, secure, and trustworthy” AI by consensus of all 193 member states, co-sponsored by more than 120 countries including China. In September, the UN adopted the Global Digital Compact during the Summit of the Future, and in December, the UN Security Council debated AI in military contexts for the first time, with Secretary-General Guterres calling for “international guardrails.”
Country-by-Country AI Policy Comparisons
The Stanford AI Index 2025 enables rich comparisons across countries, revealing distinct national strategies for AI governance. These comparisons illuminate how different political systems, economic priorities, and cultural values shape approaches to AI regulation.
In legislative output, the United States leads cumulatively with 27 AI-related laws, but this figure is distributed across federal and state levels, with most activity occurring at the state level. Portugal and Russia, each with 20 laws, have been surprisingly prolific — particularly Portugal, whose AI legislative activity outpaces its relatively modest profile in global AI development. Belgium (18 laws) and South Korea (13) round out the top five.
Investment patterns reveal different strategic emphases. The US approach is heavily defense-oriented, while European nations emphasize public services and social applications. The Gulf states — with Saudi Arabia and the UAE each committing $100 billion — are positioning themselves as major AI hubs through sheer financial muscle. India’s $1.25 billion IndiaAI Mission focuses on developing domestic capabilities and ensuring AI benefits reach its vast population. China continues to invest heavily in semiconductor self-sufficiency through its $47.5 billion Big Fund.
On AI mentions in legislative proceedings, Spain leads globally with a cumulative 1,200 mentions from 2016 to 2024, followed by the United Kingdom (710) and Ireland (659). In the US Congress specifically, the 118th Congress (2023–24) recorded 136 AI-related mentions — the highest ever and an 83.8% increase from the previous Congress. These figures indicate that parliamentary engagement with AI is deepening globally, creating the political foundation for more substantive legislation in the years ahead.
The contrasting approaches are also visible in regulation philosophy. The EU has opted for comprehensive, binding regulation through the AI Act. The US favors a combination of voluntary frameworks (like the NIST AI Risk Management Framework) and targeted regulations from individual agencies. The UK has pursued a pro-innovation, sector-specific approach, while China has implemented targeted rules addressing specific AI applications like generative AI and algorithmic recommendations. If you want to explore how different sectors are adapting to these policy frameworks, check out insights on AI applications across industries in the Libertify Interactive Library.
The Future of AI Governance: What Comes Next
The Stanford AI Index 2025 provides not just a snapshot of current AI policy governance but also a roadmap for where this rapidly evolving field is headed. Several trends emerge from the data that point toward the likely trajectory of AI regulation in the coming years.
First, the gap between AI legislative proposals and enacted laws — particularly visible at the US federal level — suggests that political consensus on AI regulation remains elusive. With 221 bills proposed but only 4 passed in 2024, the challenge is not a lack of policy ideas but rather disagreement on the right approach. This legislative bottleneck is likely to persist, pushing more regulatory authority to executive agencies and state governments.
Second, the explosive growth of state-level AI legislation in the US (from 1 law in 2016 to 131 in 2024) is creating a patchwork of regulations that may eventually force federal action. Companies operating across multiple states face an increasingly complex compliance landscape, which could generate demand for unified federal standards — much as the varied state privacy laws created pressure that led to federal privacy discussions.
Third, the international coordination infrastructure is rapidly maturing. The proliferation of AI safety institutes, the UN’s engagement, and bilateral cooperation agreements suggest that AI governance will increasingly operate at the multilateral level. The challenge will be ensuring that these international frameworks keep pace with AI capabilities that are advancing at an unprecedented rate.
Fourth, government AI spending is likely to continue its dramatic upward trajectory. The $250+ billion in commitments announced in 2024 alone dwarfs previous years, and the strategic competition between nations — particularly the US, China, and the EU — shows no signs of abating. The question is not whether governments will spend more on AI, but whether this investment will be directed toward applications that serve broad public interests or narrower strategic objectives.
Finally, the convergence of AI safety concerns, deepfake regulation, and election integrity issues points toward a future where AI governance is deeply integrated into democratic processes. As AI becomes more capable and more pervasive, the governance frameworks established today will shape whether this technology amplifies or undermines democratic values. The Stanford AI Index makes clear that the stakes could not be higher — and that the window for establishing effective governance is narrowing rapidly.
Make every research report count. Turn PDFs into interactive experiences with Libertify.
Frequently Asked Questions
What are the key findings of the Stanford AI Index 2025 on AI policy governance?
The Stanford AI Index 2025 reveals that 204 AI-related laws have been passed globally since 2016 across 39 countries. In 2024, US states passed 131 AI-related laws (up from 49 in 2023), AI mentions in legislative proceedings grew 21.3%, and 59 federal AI regulations were introduced. The report also documents over $250 billion in announced government AI investments worldwide.
How does the EU AI Act impact global AI regulation in 2025?
The EU AI Act, passed in March 2024, is the world’s first comprehensive AI law. It introduces risk-based regulations, transparency requirements, and bans on social scoring and certain biometric practices. Most provisions take effect in 2026. The European Commission also established an AI Office with over 140 staff members to enforce the Act and coordinate codes of practice for general-purpose AI models.
How much are governments investing in AI in 2025?
Government AI investment is surging globally. The US has allocated $5.2 billion in AI contracts and $19.7 billion in AI-related grants from 2013 to 2023. Recent major commitments include Saudi Arabia’s $100 billion Project Transcendence, the UAE’s $100 billion MGX Fund, France’s €109 billion AI infrastructure pledge, Canada’s CA$2.4 billion package, and India’s $1.25 billion IndiaAI Mission.
What is the difference between US and European approaches to AI policy?
The US and Europe diverge sharply in AI spending priorities. The US Department of Defense accounts for 75% of federal AI contract spending, reflecting a defense-first approach. In contrast, European AI spending focuses on general public services (64%), education (12.3%), and health (7.4%), with defense receiving only 0.84%. The US favors voluntary frameworks while the EU has enacted comprehensive binding regulation.
How are governments addressing deepfake regulation in 2025?
Deepfake regulation has accelerated dramatically. Before 2024, only 5 US states had laws regulating deepfakes in elections. By the end of 2024, 15 additional states enacted similar measures. For intimate imagery deepfakes, 25 states now have laws covering all individuals, with 36 total state-level laws enacted. Only Wyoming and Ohio lack any intimate deepfake regulation.
What role do AI safety institutes play in global AI governance?
AI safety institutes are becoming a cornerstone of global AI governance. Starting with just the US and UK in November 2023, by the end of 2024, institutes were established or pledged in Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the EU. In November 2024, the US launched the International Network of AI Safety Institutes with over $11 million in global research funding commitments.