Governing with AI: OECD Government AI Adoption Guide 2025
Table of Contents
- The State of Government AI Adoption in 2025
- What 200 AI Use Cases Reveal About Public Sector AI
- AI Benefits for Government: Beyond Routine Automation
- The Pilot Trap: Why Government AI Struggles to Scale
- Risks and Challenges of AI in Public Services
- Seven Enablers for Trustworthy Government AI
- Guardrails That Enable Rather Than Block Innovation
- Stakeholder Engagement and International Cooperation
- Future-Proofing Government AI Strategy
📌 Key Takeaways
- Service automation dominates: 57% of 200 government AI use cases focus on automating, streamlining, or tailoring public services and processes
- Analytical surprise: Contrary to expectations, government AI serves analytical tasks more than routine automation, challenging common narratives about AI replacing clerical work
- Investment gap: Only 15% of governments had an AI investments framework in 2023, creating a vicious cycle of underinvestment and stalled pilots
- Proportionate guardrails: Overly rigid guardrails cause as much harm as insufficient ones—the OECD advocates context-appropriate, risk-based governance
- Seven enablers: Governance, data, digital infrastructure, skills, investment, procurement, and partnerships form the foundational framework for effective government AI
The State of Government AI Adoption in 2025
The OECD’s September 2025 report, “Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions,” delivers the most comprehensive empirical assessment yet of how governments worldwide are adopting artificial intelligence. Drawing on analysis of 200 AI use cases collected across OECD and partner countries, the report maps not just what governments are doing with AI but—perhaps more importantly—what they are failing to do, and why.
The findings challenge comfortable assumptions on multiple fronts. Government AI adoption began relatively recently compared with the private sector, and the gap is widening rather than narrowing. Most European Union AI use cases remain in the pilot or development phase, far from full operational deployment. National AI strategies are proliferating, yet a persistent lack of concrete, actionable implementation guidance prevents these strategies from translating into real-world impact. The result is a landscape characterised by ambition without execution, experimentation without scale, and investment without measurable returns.
For policymakers, public sector technology leaders, and citizens concerned about the quality and efficiency of government services, this report provides both a diagnostic framework and a practical roadmap. The OECD’s analysis goes beyond cataloguing applications to identify the structural barriers that prevent governments from realising AI’s potential—and the specific interventions needed to overcome them. Understanding these dynamics is essential for anyone engaged in public sector modernisation, digital government strategy, or the broader governance of AI technology. The parallel challenge in financial regulation is explored in our analysis of the FSB’s framework for monitoring AI in the financial sector.
What 200 AI Use Cases Reveal About Public Sector AI
The empirical backbone of the OECD report is its analysis of 200 AI use cases across government functions. This dataset—collected in 2025 from OECD and partner countries—provides the statistical foundation for every recommendation and observation in the report. The use cases span a wide range of government activities, from public service delivery to policy evaluation, from civic participation to justice administration.
The distribution across government functions reveals significant unevenness. Areas with high transaction volumes and direct citizen interaction—public services, civic participation, and justice—show higher AI adoption rates. By contrast, functions like policy evaluation, tax administration, and civil service management show markedly lower adoption, suggesting that AI’s penetration into the analytical and strategic layers of government remains nascent despite the technology’s obvious applicability.
The maturity data from the European Commission’s Joint Research Centre reinforces this picture. Tracked across five stages—planned, in development, pilot, implemented, and not in use—the data shows that the majority of EU government AI use cases are concentrated in the pilot or development phase. Full implementation remains the exception rather than the rule. This “pilot trap” has significant implications: it means governments are absorbing the costs and organisational disruption of AI experimentation without capturing the efficiency gains and service improvements that only come with scaled deployment.
One particularly striking data point concerns who benefits from government AI. The OECD categorised use cases by their primary beneficiary and found that only 4% of use cases allow external actors—citizens and businesses—to use government AI for their own purposes. Greece’s DidaktorikaAI, an AI-powered library of 50,000 publications, stands out as a rare example. The vast majority of government AI is internally focused, aimed at improving government processes rather than directly empowering citizens. This internal orientation represents both a limitation and an untapped opportunity.
AI Benefits for Government: Beyond Routine Automation
The OECD’s benefit categorisation reveals a more nuanced picture than the popular narrative of “AI automating boring tasks” suggests. The report identifies four non-mutually exclusive benefit categories, meaning individual use cases can deliver multiple types of value simultaneously.
The dominant category—representing 57% of use cases—is automating, streamlining, or tailoring government processes and services. This encompasses everything from processing permit applications to personalising citizen communications to automating document routing. At first glance, this confirms the automation narrative. But the second most common category challenges it directly: 45% of use cases enhance decision-making, sense-making, or forecasting. This analytical function—supporting human judgment with AI-powered insights, pattern recognition, and predictive modelling—represents a fundamentally different value proposition than simple task automation.
The third category, improving accountability and anomaly detection at 30% of use cases, reveals AI’s emerging role as a governance tool in itself. Fraud detection, compliance monitoring, audit support, and anomaly identification in government operations all fall under this umbrella. The potential for AI to improve government accountability—detecting irregularities that human auditors might miss, identifying patterns of non-compliance across large datasets—has profound implications for public trust and institutional integrity. The fourth and smallest category, enabling external actors to use government AI for their own purposes, was noted earlier at just 4% of use cases.
The OECD explicitly flags the counterintuitive finding: AI is used more for analytical tasks than for mundane, routine tasks in government. This challenges the widespread assumption that government AI is primarily about replacing clerical work and suggests instead that its highest-value applications lie in augmenting complex human judgment. For organisations navigating similar analytical challenges, our coverage of the McKinsey State of AI 2025 provides complementary perspectives on how enterprises are deploying AI for decision support.
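Because the categories are non-mutually exclusive, each published share (57%, 45%, 30%) is computed independently against all 200 use cases, so the shares can sum to well over 100%. A minimal Python sketch, using invented records rather than OECD data, shows the arithmetic:

```python
from collections import Counter

# Invented records for illustration: each use case is tagged with one
# or more non-mutually exclusive benefit categories.
use_cases = [
    {"id": 1, "benefits": {"automation", "decision_support"}},
    {"id": 2, "benefits": {"automation"}},
    {"id": 3, "benefits": {"decision_support", "accountability"}},
    {"id": 4, "benefits": {"automation", "external_use"}},
]

tally = Counter(tag for case in use_cases for tag in case["benefits"])
total = len(use_cases)

# Each share is count-of-tag divided by total use cases, so the shares
# of a multi-tagged dataset can add up to more than 100%.
for category, count in tally.most_common():
    print(f"{category}: {count}/{total} = {count / total:.0%}")
```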
The Pilot Trap: Why Government AI Struggles to Scale
Perhaps the most important structural insight in the OECD report is the “pilot trap” identified above—a vicious cycle that keeps government AI initiatives perpetually experimental. The mechanism is straightforward but pernicious: governments launch AI pilots, but lack the impact measurement frameworks needed to demonstrate return on investment. Without demonstrated ROI, they cannot justify the investment needed to scale successful pilots to full implementation. Without full implementation, they cannot generate the impact data needed to demonstrate ROI. The cycle perpetuates itself.
The statistics are stark: only 15% of governments had an AI investments framework in place as of 2023. This means 85% of governments are making AI investment decisions without structured frameworks for evaluating costs, benefits, and priorities. In any other area of public expenditure—infrastructure, defence, healthcare—this level of financial governance would be considered unacceptable. Yet for AI, which governments universally describe as strategically important, most operate without basic investment discipline.
The human capital dimension compounds the problem. Widespread skills gaps among civil servants prevent effective AI implementation even when funding is available. The challenge is not merely technical literacy—though that matters—but the deeper capacity to identify appropriate AI applications, manage AI vendors, interpret AI outputs critically, and govern AI systems responsibly. Without this distributed competency across government, AI initiatives depend on isolated pockets of expertise that cannot sustain organisational change at scale.
Legacy IT infrastructure creates a hard technical barrier to AI adoption. Government systems built over decades of incremental development often cannot interface with modern AI platforms without extensive modification. Outdated laws and regulations—not designed with AI in mind—create legal uncertainty that risk-averse government officials interpret as prohibition. Tight budgets constrain not just AI investment but the enabling infrastructure—data platforms, cloud services, cybersecurity upgrades—without which AI cannot function. And governments face unique requirements around privacy, transparency, and democratic representation that add layers of complexity absent from private sector deployments.
Risks and Challenges of AI in Public Services
The OECD report provides a comprehensive risk taxonomy that is notable for its breadth and its insistence that all 200 use cases analysed could carry risks if not adequately managed. This is not a qualified warning about edge cases—it is a blanket assessment that every government AI deployment requires active risk governance.
Ethical risks lead the taxonomy. Skewed or biased training data can cause AI systems to make decisions that systematically disadvantage particular populations—a concern with constitutional dimensions when the decision-maker is a government agency rather than a private company. Rights infringements from poorly designed or managed AI systems can erode the fundamental relationship between citizen and state. When a government AI system denies a benefit, flags a citizen for investigation, or prioritises one community over another based on biased data, the consequences extend beyond individual harm to institutional legitimacy.
Operational risks include the familiar territory of cyber threats and security vulnerabilities, but the OECD adds a less commonly discussed dimension: the propagation of errors through overreliance on AI systems. When human operators trust AI outputs uncritically, errors can cascade through government processes—a contaminated dataset feeding flawed decisions across multiple agencies and programmes simultaneously. This systemic error propagation risk is particularly acute in government, where decisions can affect millions of citizens and where error correction mechanisms are often slow and bureaucratic.
Governance and trust risks may be the most consequential in the long term. Lack of transparency in AI decision-making erodes accountability—citizens cannot challenge decisions they cannot understand, and oversight bodies cannot audit processes they cannot see. Overreliance on AI can widen digital divides, creating two-tier government services where digitally fluent citizens receive faster, more personalised service while others are left behind. Public resistance to government AI, already visible in multiple countries, can derail even well-designed initiatives if trust is not actively built and maintained.
Seven Enablers for Trustworthy Government AI
The OECD’s first major recommendation—strengthening AI enablers—identifies seven foundational elements that must be in place before governments can deploy AI effectively. These enablers are not sequential prerequisites but interconnected capabilities that reinforce each other. Weakness in any single enabler can undermine progress across all others.
Governance requires institutional structures and leadership that can coordinate AI strategy across government silos, set priorities, manage risks, and maintain accountability. Data access—the raw material of AI—demands not just technical infrastructure but governance frameworks that enable quality data sharing across agencies while respecting privacy and security constraints. Digital infrastructure means modern IT systems capable of supporting AI workloads, including cloud computing capacity, API integration capabilities, and cybersecurity posture appropriate for AI-enhanced threat environments.
Skills development must encompass not just technical AI expertise but the broader competencies needed for responsible AI governance: critical evaluation of AI outputs, vendor management, ethical assessment, and change management. Investment frameworks—absent in 85% of governments—must provide structured approaches to evaluating, prioritising, and tracking AI expenditure. Procurement processes designed for traditional goods and services must evolve to handle AI’s unique characteristics: its iterative development cycle, its dependency on data quality, its need for ongoing monitoring and adjustment, and its vendor lock-in risks.
The seventh enabler—partnerships with non-government actors—recognises that governments cannot develop AI capability in isolation. Collaboration with the private sector, academia, and civil society provides access to expertise, technology, and perspectives that government cannot generate internally. These partnerships must be structured to ensure that government retains the strategic autonomy and accountability that public service demands, while benefiting from the innovation and agility that external partners bring. For a practical look at how similar enablers play out in the security domain, see our analysis of RAND’s report on AI, cybersecurity, and national security.
Guardrails That Enable Rather Than Block Innovation
The OECD’s second recommendation introduces a nuanced perspective on AI governance that deserves careful attention: not every guardrail needs to apply to every use case. This principle of proportionality is perhaps the report’s most operationally important insight, directly addressing the risk aversion that paralyses many government AI initiatives.
The argument runs as follows: in the absence of clear, proportionate guidance, government officials default to the most restrictive interpretation of existing rules. This risk aversion, while individually rational, collectively prevents governments from realising AI’s benefits. The solution is not to remove guardrails but to calibrate them. A chatbot providing general information about government services requires different governance than an AI system making decisions about benefit eligibility or criminal justice risk scoring.
Guardrails operate through three mechanisms: policies that establish rules and expectations, transparency measures that enable oversight and accountability, and active oversight processes that monitor compliance and detect problems. The OECD emphasises that these mechanisms must be applied in a risk-based manner, with the intensity of governance proportional to the potential impact of the AI system on citizens’ rights, access to services, and institutional accountability.
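To make the proportionality principle concrete, the sketch below pairs the three mechanisms with risk tiers. The tier names and requirement lists are hypothetical, invented for illustration rather than drawn from the OECD report; the point is the shape: every tier gets policies, transparency, and oversight, but their intensity scales with potential impact.

```python
from dataclasses import dataclass

@dataclass
class GuardrailProfile:
    """Governance requirements for an AI system at a given risk tier."""
    policies: list[str]
    transparency: list[str]
    oversight: list[str]

# Hypothetical tiers: an informational chatbot sits at "low";
# benefit-eligibility or risk-scoring systems sit at "high".
PROFILES = {
    "low": GuardrailProfile(
        policies=["acceptable-use policy"],
        transparency=["disclose AI involvement to users"],
        oversight=["periodic spot checks of outputs"],
    ),
    "high": GuardrailProfile(
        policies=["mandatory impact assessment", "pre-deployment bias testing"],
        transparency=["entry in a public algorithm register",
                      "recorded explanations for each decision"],
        oversight=["human review of every adverse decision",
                   "independent periodic audit"],
    ),
}

def requirements_for(tier: str) -> GuardrailProfile:
    # Governance intensity is looked up per tier, proportional to risk,
    # rather than one blanket rule applying to every use case.
    return PROFILES[tier]

print(requirements_for("low").oversight)   # light-touch monitoring
print(requirements_for("high").oversight)  # intensive human oversight
```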
The critical nuance is that overly rigid guardrails can be as harmful as insufficient ones. When compliance requirements are disproportionate to actual risk, they consume resources, delay deployment, and discourage innovation without meaningfully reducing harm. The result is a worst-of-both-worlds scenario: government retains the risks associated with antiquated manual processes (errors, delays, inconsistency, bias from human operators) while forgoing the benefits that well-governed AI could provide. The OECD’s call for context-appropriate governance is ultimately a call for sophisticated risk management rather than blanket restriction. This approach mirrors the EU AI Act’s risk-based classification framework, which applies different requirements based on AI systems’ potential impact levels.
Stakeholder Engagement and International Cooperation
The third recommendation—engaging stakeholders in a joined-up approach—addresses a challenge that technology alone cannot solve. AI systems that serve the public must be designed with the public’s needs in mind, and this requires more than technical consultation with AI vendors. It demands robust engagement with citizens, civil society organisations, businesses, and international partners.
The emphasis on user-centred, adaptive approaches reflects hard-won lessons from previous waves of government technology adoption. Digital government initiatives that failed to consider end-user needs produced systems that were technically functional but practically unusable, generating citizen frustration and eroding trust in government’s capacity to modernise. AI amplifies this risk because its outputs are less predictable and harder to evaluate than traditional software—making user feedback and stakeholder oversight even more critical.
Cross-border cooperation adds another dimension to stakeholder engagement. AI technologies, data flows, and vendor markets are inherently global, meaning that national government AI strategies operate within international contexts that they cannot control unilaterally. The OECD’s emphasis on international cooperation reflects the practical reality that AI governance challenges—from data privacy to algorithmic bias to vendor concentration—require coordinated responses across jurisdictions. Isolated national approaches risk creating regulatory fragmentation that benefits neither citizens nor innovation.
The engagement model the OECD envisions goes beyond traditional public consultation. It must be open, transparent, and substantive—not superficial checkbox exercises but genuine dialogue that influences AI system design, deployment decisions, and governance frameworks. For governments accustomed to developing technology policy behind closed doors, this represents a significant cultural shift, but one that the OECD argues is essential for maintaining public trust in an era of AI-powered government services.
Future-Proofing Government AI Strategy
The OECD’s fourth recommendation—anticipating and responding to future changes—addresses perhaps the most fundamental challenge of AI governance: the future application of AI technology remains unknown. The pace and direction of AI development is unpredictable, and strategies designed for today’s capabilities may be irrelevant or counterproductive within a few years. This is not a reason for paralysis but a mandate for agile, adaptive governance frameworks that can evolve with the technology.
The concept of detecting “weak signals”—emerging trends before they become entrenched—is particularly powerful. Government bureaucracies tend toward reactive policy-making, responding to problems after they have become visible and acute. The OECD argues that effective AI governance requires a fundamentally different posture: proactive monitoring of technological developments, adoption patterns, and risk indicators, with the institutional capacity to make timely interventions before trends become locked in and difficult to shift.
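What weak-signal detection looks like in practice is context-dependent, and the report prescribes no algorithm. As a toy sketch only, a monitoring function might flag an indicator whose recent trend is accelerating even though its absolute level has not yet breached any hard threshold; the indicator and parameters below are invented:

```python
def weak_signal(history: list[float], window: int = 3, growth: float = 0.15) -> bool:
    """Flag an indicator whose recent average is growing faster than
    `growth`, even if its absolute level still looks unremarkable."""
    if len(history) < 2 * window:
        return False  # not enough observations to compare windows
    recent = sum(history[-window:]) / window
    prior = sum(history[-2 * window:-window]) / window
    return prior > 0 and (recent - prior) / prior > growth

# Hypothetical indicator: monthly count of agencies reporting a new
# dependency on a single AI vendor. The level is still low, but the
# trend (average 4.3 rising to 7.0, roughly 62% growth) is the signal.
observations = [4.0, 5.0, 4.0, 5.0, 7.0, 9.0]
print(weak_signal(observations))  # True
```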
This future-proofing imperative connects directly to the seven enablers discussed earlier. Without strong data infrastructure, governments cannot monitor AI developments effectively. Without skilled civil servants, they cannot interpret weak signals correctly. Without agile procurement frameworks, they cannot adjust their technology portfolio in response to changing conditions. Without adequate investment frameworks, they cannot redirect resources to emerging priorities. The enablers are not just foundations for current AI deployment but prerequisites for adaptive governance that can navigate an uncertain AI future.
The OECD Framework for Trustworthy AI in Government provides the overarching structure for implementing all four recommendations. It represents the most comprehensive international blueprint for government AI adoption currently available, combining empirical evidence from 200 use cases with actionable guidance that bridges the gap between strategy and implementation. For governments trapped in the pilot phase, struggling with skills gaps, or uncertain about appropriate governance, this framework offers a structured path forward that balances ambition with responsibility.
The broader message is clear: AI in government is not a technology problem alone. It is a governance problem, a skills problem, an investment problem, and a trust problem—all simultaneously. Governments that address AI as merely a technology procurement challenge will continue to produce expensive pilots that never scale. Those that treat it as a systemic transformation—requiring coordinated action across governance, data, infrastructure, skills, investment, procurement, and partnerships—will be the ones that deliver on AI’s promise to improve public services, strengthen accountability, and rebuild citizen trust in government institutions.
Frequently Asked Questions
How are governments using artificial intelligence in 2025?
According to the OECD analysis of 200 AI use cases, 57% of government AI applications automate, streamline, or tailor public services; 45% enhance decision-making, sense-making, or forecasting; and 30% improve accountability and anomaly detection. Surprisingly, AI is used more for analytical tasks than routine automation, challenging common narratives about AI replacing clerical work.
What percentage of governments have AI investment frameworks?
Only 15% of governments had an AI investments framework in place as of 2023, according to OECD data. This stark figure reveals significant institutional unpreparedness for scaling AI adoption and explains why many government AI initiatives remain stuck in the pilot phase, unable to demonstrate ROI to justify further investment.
What are the OECD’s key recommendations for government AI adoption?
The OECD makes four key recommendations: strengthen seven AI enablers (governance, data, infrastructure, skills, investment, procurement, and partnerships), establish proportionate guardrails through policies and oversight, engage all stakeholders in a joined-up approach, and anticipate future changes with agile adaptive strategies that detect weak signals early.
What are the biggest barriers to AI adoption in government?
The main barriers include lack of impact measurement frameworks creating a vicious cycle of underinvestment, widespread skills gaps among civil servants, legacy IT systems incompatible with modern AI, tight budgets, outdated laws and regulations, data access and sharing challenges, and stricter requirements for privacy, transparency, and democratic representation compared to the private sector.
What risks does AI pose in government services?
Key risks include biased data causing harmful decisions, rights infringements from poorly designed AI, cyber security vulnerabilities, error propagation through overreliance on AI systems, lack of transparency eroding accountability, widening digital divides with unequal citizen access, and reduced public trust when risks are not properly managed. The OECD emphasises that all 200 use cases analysed could carry risks if not adequately managed.