GAO AI Federal Requirements: 94 Rules Guiding Government Artificial Intelligence
Table of Contents
- Understanding the GAO AI Federal Requirements Report
- The 94 AI Requirements Across Federal Agencies
- Five Federal Laws Shaping AI Governance
- Executive Orders and the Shifting AI Policy Landscape
- OMB Guidance and Federal AI Acquisition Standards
- Ten AI Oversight and Advisory Groups Explained
- Federal AI Implementation Gaps and GAO Recommendations
- Chief AI Officers and Agency-Level AI Strategy
- AI Workforce Training and Talent Development
- Implications for AI Governance and the Road Ahead
📌 Key Takeaways
- 94 AI Requirements Identified: GAO catalogued 94 government-wide AI requirements from five laws, six executive orders, and three guidance documents applicable to federal agencies.
- Ten Oversight Bodies: Ten executive branch groups—including OSTP, NAIAC, and OMB—oversee AI implementation across the federal government, each with distinct mandates.
- Policy Shifts Under Two Administrations: From Biden’s EO 14110 with 100+ requirements to Trump’s EO 14148 rescinding it and EO 14179 promoting American AI leadership, the regulatory landscape changed dramatically between 2023 and 2025.
- Implementation Gap: Only 4 of 35 GAO recommendations have been implemented, by just 3 agencies, while 16 agencies still have open recommendations as of July 2025.
- Mandatory Chief AI Officers: Every federal agency must now designate a chief AI officer responsible for coordinating AI strategy, use case inventories, and compliance with evolving federal AI requirements.
Understanding the GAO AI Federal Requirements Report
Artificial intelligence is transforming how the United States federal government operates, from streamlining permitting processes to enhancing national security capabilities. The Government Accountability Office’s landmark report, GAO-25-107933, published in September 2025, provides the most comprehensive mapping to date of the AI federal requirements landscape governing how agencies develop, acquire, and deploy artificial intelligence technologies. This analysis arrives at a critical juncture, as agencies navigate a rapidly evolving policy environment shaped by competing legislative mandates and executive directives.
The GAO report describes two primary dimensions of federal AI governance: the specific requirements embedded in laws, executive orders, and guidance documents, and the organizational infrastructure of oversight and advisory bodies responsible for steering implementation. Together, these elements form the scaffolding of what may become the world’s most extensive government AI governance framework. For organizations and researchers tracking how major governments approach AI regulation, this report establishes the baseline against which future progress will be measured. Understanding these AI federal requirements is essential for anyone working at the intersection of technology and public policy, as the framework will influence how billions of dollars in federal AI investment are managed and monitored.
The stakes are significant. According to the National Institute of Standards and Technology (NIST), AI technologies can drive economic growth and support scientific advancements—but they also pose risks that can negatively impact individuals, communities, and entire sectors. The GAO’s work thus serves as both an inventory of obligations and an accountability tool, ensuring that the promise of AI in government translates into responsible, effective deployment. For a broader understanding of how AI safety intersects with government responsibility, see our analysis of AI safety as a global public good.
The 94 AI Requirements Across Federal Agencies
At the heart of GAO-25-107933 lies a meticulous inventory: 94 discrete AI-related requirements that are either government-wide in scope or carry government-wide implications. These requirements were drawn from five federal laws, six executive orders, and three Office of Management and Budget guidance memoranda—all current as of July 2025. The breadth of this regulatory architecture underscores the extent to which AI governance has moved from theoretical discussion to concrete operational mandate.
The requirements span a wide spectrum of activities. Some are structural, such as the mandate for every federal agency to designate a chief AI officer. Others are operational, including the annual preparation and submission of AI use case inventories. Still others are strategic, such as the requirement for agencies to develop and publicly release an AI strategy by September 30, 2025. Procurement requirements are also extensive, with agencies required to adhere to new guidance for AI contracts and ensure that procured AI models comply with unbiased AI principles.
The distribution of these 94 AI federal requirements across agencies is not uniform. The Executive Office of the President carries the overarching mandate to execute the National AI Initiative. The Office of Management and Budget bears the heaviest individual load, responsible for providing guidance on AI use case inventories, management, oversight, regulation, and best practices. It must also obtain and review agency-submitted inventories, develop implementation templates, provide workforce training programs, and issue guidance on procuring models with unbiased AI principles. The Office of Science and Technology Policy is charged with establishing and leading the National AI Initiative Office and providing technical support to interagency committees.
The Department of Commerce, through NIST and the National AI Advisory Committee, must support the development of technical standards, advise the President on AI and workforce issues, and maintain a risk management framework for trustworthy AI systems. The General Services Administration (GSA) is required to establish the AI Center of Excellence, develop acquisition guidance, and facilitate knowledge sharing across agencies. Meanwhile, the National Science Foundation manages AI scholarship programs and research institute networks. Individual federal agencies must comply with requirements around inventories, standards adherence, R&D prioritization, barrier identification, policy development, and chief AI officer designation.
Five Federal Laws Shaping AI Governance
The legislative foundation of federal AI governance rests on five key laws enacted between 2020 and 2022. Each introduced distinct requirements that continue to shape how agencies approach artificial intelligence adoption. Understanding these laws is essential for comprehending the full scope of AI federal requirements.
The AI in Government Act of 2020, enacted as part of the Consolidated Appropriations Act of 2021, established the foundational expectation that federal AI use must be effective, ethical, and accountable. It mandated the creation of resources and guidance for federal agencies and required the establishment of the AI Center of Excellence within GSA. This law fundamentally shifted federal AI use from a discretionary agency initiative to a government-wide obligation.
The National Artificial Intelligence Initiative Act of 2020 created the National AI Initiative itself—a coordinated program across the entire federal government to accelerate AI research and development. It established the National AI Advisory Committee, the National AI Initiative Office within OSTP, and various interagency subcommittees focused on areas such as AI standards, international engagement, and emerging technology. The act also mandated investments in AI research institutes and workforce development programs through the National Science Foundation.
The Advancing American AI Act, enacted in December 2022 as part of the National Defense Authorization Act for Fiscal Year 2023, pushed agencies to actively adopt modernized business practices and harness applied AI for mission effectiveness. The AI Training for the Acquisition Workforce Act of 2022 addressed the human capital dimension, ensuring that federal procurement professionals understand AI capabilities and risks. Finally, the CHIPS Act of 2022 directed the establishment of semiconductor fabrication assistance programs through the Department of Commerce, recognizing that AI capabilities depend fundamentally on advanced chip manufacturing infrastructure.
Together, these five laws created a multi-layered legislative framework that addresses research, development, deployment, procurement, workforce readiness, and the supply chain underpinnings of artificial intelligence in government.
Executive Orders and the Shifting AI Policy Landscape
Perhaps no aspect of federal AI governance has been more dynamic than the executive order landscape. Between 2019 and 2025, six executive orders reshaped the federal approach to artificial intelligence—sometimes in diametrically opposed directions. This policy volatility has created both opportunities and challenges for agencies attempting to implement AI responsibly.
The sequence began with EO 13859 in February 2019, which established the American AI Initiative and promoted government-wide AI research and development investment. In December 2020, EO 13960 built on this foundation by establishing common principles for trustworthy AI—including transparency, accountability, and fairness—in the design, development, acquisition, and use of AI across the federal government.
A significant inflection point came in October 2023 with President Biden’s EO 14110, which imposed over 100 requirements for federal agencies and represented the most ambitious attempt to date at comprehensive AI governance. The order addressed everything from AI safety testing to workforce impacts, from privacy protections to the responsible use of AI in national security contexts. However, in January 2025, President Trump’s EO 14148 rescinded EO 14110 entirely, eliminating dozens of requirements in a single stroke.
The Trump administration quickly replaced this framework with its own vision. EO 14179, also issued in January 2025, called for the development of an AI action plan focused on removing barriers to American leadership in AI and revising OMB memoranda. By July 2025, the White House had published America’s AI Action Plan with recommended policy actions for federal agencies. Additional executive orders followed: EO 14318 accelerated federal permitting for data center infrastructure critical to AI workloads, EO 14319 mandated that federally procured AI models prioritize truthfulness and ideological neutrality, and EO 14320 promoted the export of American AI technology. The cybersecurity implications of these shifting policies are explored in depth in our analysis of Microsoft’s Digital Defense Report 2025.
This rapid policy oscillation means that the 94 AI federal requirements documented by GAO represent a snapshot of a moving target. Agencies must maintain sufficient organizational agility to respond to executive directives while simultaneously complying with more stable legislative requirements.
OMB Guidance and Federal AI Acquisition Standards
The Office of Management and Budget occupies a unique position in the federal AI governance architecture: it serves as the primary translator between high-level policy directives and practical agency implementation. Three OMB memoranda form the current guidance framework governing how agencies use and acquire artificial intelligence.
Memorandum M-21-06, Guidance for Regulation of Artificial Intelligence Applications, established the initial framework for how agencies should approach AI regulation. It emphasized the importance of non-regulatory approaches where possible and set principles for federal agencies developing regulatory and non-regulatory actions related to AI.
In April 2025, OMB issued Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, which replaced the earlier M-24-10. This memorandum represents the current standard for federal AI governance, requiring agencies to establish generative AI policies, maintain AI use case inventories, designate chief AI officers, and develop agency-wide AI strategies. It balances the imperative to accelerate AI adoption with the need for responsible governance frameworks.
Concurrently, Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government, replaced M-24-18 and provides the operational guidance agencies need to procure AI responsibly. This memorandum addresses contract structures, evaluation criteria, vendor requirements, and the specific obligation to ensure that procured AI models comply with unbiased principles. Together, these three memoranda create a comprehensive governance layer that sits between legislative mandates and agency-level implementation.
For agencies, the practical implications are substantial. Every AI procurement must now consider not only technical performance but also compliance with trustworthiness standards, bias mitigation requirements, and ongoing inventory obligations. The acquisition workforce itself must be trained in AI-specific capabilities and risks—a requirement that traces back to the AI Training for the Acquisition Workforce Act.
Ten AI Oversight and Advisory Groups Explained
The second major finding of GAO-25-107933 identifies the ten executive branch oversight and advisory groups responsible for AI implementation and governance across the federal government. These bodies were established through a combination of federal law, executive orders, and White House action, creating a multi-layered institutional structure.
The Office of Science and Technology Policy (OSTP), established by Congress in 1976 and located within the Executive Office of the President, serves as the primary advisor to the President on science and technology matters, including artificial intelligence. OSTP’s AI responsibilities include establishing and leading the National AI Initiative Office and providing technical support to interagency committees.
The National AI Advisory Committee (NAIAC), established under the Department of Commerce, brings together members from industry, academia, civil society, and government to advise the President and the National AI Initiative Office on AI-related issues. The committee’s purview spans workforce development, technical standards, international competitiveness, and the societal impacts of AI deployment. For context on how different nations approach AI governance differently, our analysis of Chatham House’s competing visions for international order examines the geopolitical dimensions.
The National Science and Technology Council (NSTC) coordinates science and technology policy across the executive branch, with specific AI subcommittees addressing research priorities, standards development, and cross-agency coordination. The GSA AI Center of Excellence provides practical implementation support, helping agencies navigate AI adoption challenges through shared resources, best practices, and acquisition expertise.
Additional oversight bodies include the National AI Initiative Office within OSTP, which serves as the operational hub for the National AI Initiative; OMB’s Office of the Federal Chief Information Officer, which manages AI governance through the lens of information technology policy; and various interagency subcommittees focused on specific aspects of AI policy such as research funding through NSF, workforce development, and international standards engagement. Together, these ten bodies create a governance network that—at least in theory—ensures that no aspect of federal AI implementation operates without institutional oversight.
Federal AI Implementation Gaps and GAO Recommendations
Despite the extensive regulatory and institutional framework documented in GAO-25-107933, the report reveals a significant implementation gap. In its earlier December 2023 review, GAO found that while most agencies had developed AI use case inventories—comprising approximately 1,200 current and planned use cases—there were instances of incomplete and inaccurate data. More critically, many agencies had not fully implemented the requirements outlined in executive orders and federal law.
GAO made 35 specific recommendations to 19 federal agencies, including OMB itself. The response was mixed: ten agencies agreed with the recommendations, three partially agreed with one or more, four neither agreed nor disagreed, and two agencies disagreed with at least one recommendation. As of July 2025, only four of the 35 recommendations had been implemented—by just three agencies: OMB, the Office of Personnel Management, and the Department of Transportation.
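The figures above can be sanity-checked with a quick sketch. This is illustrative only—the category labels are shorthand for GAO's response classifications, not official terminology—but it confirms that the response counts sum to the 19 agencies and shows the implementation rate implied by 4 of 35:

```python
# Tally of agency responses to GAO's 35 recommendations, as reported
# in GAO-25-107933 (labels are informal shorthand, not GAO's wording).
responses = {
    "agreed": 10,
    "partially_agreed": 3,
    "neither_agreed_nor_disagreed": 4,
    "disagreed_with_at_least_one": 2,
}

total_agencies = sum(responses.values())
assert total_agencies == 19  # recommendations went to 19 agencies

implemented, total_recs = 4, 35
rate = implemented / total_recs
print(f"{total_agencies} agencies responded; "
      f"{implemented}/{total_recs} recommendations implemented ({rate:.0%})")
```

Running the sketch shows an implementation rate of roughly 11 percent as of July 2025—a concrete measure of the gap discussed throughout this section.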
The implemented recommendations illustrate what progress looks like in practice. The Office of Personnel Management created an inventory of federal rotational programs and assessed how they could expand AI workforce expertise. The Department of Transportation developed and submitted AI consistency plans to OMB. But these successes are the exception. Sixteen agencies still have outstanding recommendations, and the gap between requirement and implementation remains wide.
This implementation deficit is particularly concerning given the pace of AI advancement. Each month of delay in establishing proper governance frameworks increases the risk that agencies will deploy AI systems without adequate oversight, testing, or bias mitigation. The GAO’s AI Accountability Framework, published in June 2021, established four principles—governance, data, performance, and monitoring—that should guide implementation. Yet the gap between framework and practice persists. For a perspective on how other sectors address technology governance challenges, see our coverage of UNCTAD’s Technology and Innovation Report 2025.
Chief AI Officers and Agency-Level AI Strategy
One of the most consequential AI federal requirements is the mandate for every federal agency to designate a chief AI officer (CAIO). This requirement, reinforced through both executive orders and OMB guidance, represents a structural shift in how agencies approach artificial intelligence—elevating AI governance from an IT function to a C-suite responsibility.
The CAIO role encompasses several critical functions. These officers are responsible for coordinating agency-wide AI strategy, ensuring compliance with federal requirements, managing AI use case inventories, and serving as the point of contact for interagency AI coordination. They must also balance innovation with risk management, ensuring that agency AI deployments meet standards for trustworthiness, transparency, and bias mitigation.
The strategic planning requirement extends beyond the CAIO appointment. By September 30, 2025, all covered federal agencies were required to develop and publicly release an AI strategy. These strategies must address how agencies will identify AI opportunities, manage associated risks, build workforce capacity, ensure responsible procurement, and integrate AI into mission-critical operations. The deadline created a significant planning burden for agencies that had not previously formalized their AI approaches.
The CAIO mandate also has workforce implications. Effective AI governance requires not only technical expertise but also policy acumen, risk assessment capabilities, and the ability to communicate complex technology decisions to non-technical stakeholders. Many agencies have struggled to recruit and retain individuals with this combination of skills, particularly given competition from the private sector where AI talent commands premium compensation.
Agency-level AI strategies must also contend with the reality of rapid policy change. A strategy developed under one set of executive orders may need significant revision when those orders are rescinded or replaced. This dynamic environment demands organizational agility and governance frameworks that can adapt to shifting priorities while maintaining consistency with more stable legislative requirements.
AI Workforce Training and Talent Development
The AI Training for the Acquisition Workforce Act of 2022 recognized a fundamental truth: the federal government cannot effectively govern what its workforce does not understand. The law mandated that federal procurement professionals receive training on AI capabilities and risks, ensuring that acquisition decisions are informed by technical literacy rather than vendor promises alone.
This workforce development imperative extends beyond acquisition. OMB’s guidance requires agencies to build AI competency across multiple functions, from policy analysis to program management to technical implementation. The National Science Foundation’s AI Scholarship for Service program represents one pathway, providing financial assistance to students pursuing AI-related education in exchange for federal service commitments. NSF is also charged with establishing a network of AI research institutes that serve both as talent pipelines and as sources of cutting-edge research applicable to government missions.
The challenge is scale. The federal civilian workforce numbers approximately 2.3 million employees, and integrating AI literacy across this population requires sustained investment in training infrastructure, curriculum development, and ongoing professional education. Some agencies have made progress—the Office of Personnel Management’s inventory of rotational programs, implemented in response to GAO recommendations, represents an effort to leverage existing structures for AI workforce development.
International benchmarking provides useful context. Other governments face similar challenges in building AI-capable workforces, and the approaches vary significantly. The United Kingdom’s AI Safety Institute, the European Union’s AI Office, and Singapore’s National AI Strategy all include workforce components. The interplay between global AI talent competition and domestic workforce development is explored in our analysis of ENISA’s cybersecurity threat landscape, where workforce shortages intersect with technology governance. Federal agencies must compete not only with each other but with the private sector and international employers for a limited pool of AI professionals.
Implications for AI Governance and the Road Ahead
GAO-25-107933 paints a picture of a federal government that has built an ambitious regulatory and institutional framework for AI governance—but has yet to translate that framework into consistent operational reality. The 94 requirements represent genuine progress in establishing expectations. The ten oversight bodies provide institutional infrastructure. Yet the implementation gap—with only four of 35 recommendations fulfilled—suggests that the hardest work lies ahead.
Several factors will shape whether the gap narrows or widens. First, the stability of executive policy matters enormously. The rapid oscillation between Biden’s comprehensive EO 14110 and Trump’s rescission-and-replacement approach created uncertainty that made long-term planning difficult for agencies. Future administrations will face the same tension between policy continuity and political priorities.
Second, resource allocation will prove decisive. AI governance requirements cannot be met without adequate staffing, technical infrastructure, and training investment. Agencies that lack dedicated AI governance budgets will struggle to comply with inventory, strategy, and procurement requirements—regardless of how clear the mandates may be.
Third, accountability mechanisms must evolve. GAO’s role as an independent auditor is essential, but its recommendations carry moral authority rather than enforcement power. The 16 agencies with unimplemented recommendations face no formal penalties for non-compliance. Strengthening the connection between GAO findings and agency consequences—whether through congressional oversight, budget implications, or reporting requirements—would accelerate implementation.
Fourth, the international dimension will increasingly influence domestic AI governance. As other major economies develop their own AI regulatory frameworks—the EU AI Act being the most prominent example—U.S. federal agencies will need to consider interoperability, mutual recognition, and competitive positioning. The relationship between domestic governance and international AI competition is explored in CrowdStrike’s 2025 Global Threat Report analysis, where nation-state AI capabilities and governance intersect with cybersecurity threats.
For the broader technology policy community, GAO-25-107933 provides an invaluable reference document. It maps the complete terrain of federal AI obligations, identifies the institutional players, and—through the lens of unfulfilled recommendations—highlights where attention and resources are most urgently needed. As AI capabilities continue to advance at an exponential pace, the question is not whether government needs robust governance frameworks, but whether those frameworks can keep pace with the technology they are meant to govern.
Frequently Asked Questions
What are the 94 AI federal requirements identified by GAO?
GAO-25-107933 identified 94 AI-related government-wide requirements drawn from five federal laws, six executive orders, and three OMB guidance documents. These requirements cover AI use case inventories, chief AI officer appointments, AI strategy development, procurement guidance, workforce training, and standards for trustworthy AI systems across all federal agencies.
Which federal agencies oversee AI implementation in the U.S. government?
Ten executive branch oversight and advisory groups oversee federal AI implementation. Key bodies include the Office of Science and Technology Policy (OSTP), the National AI Advisory Committee (NAIAC) under the Department of Commerce, the Office of Management and Budget (OMB), the General Services Administration’s AI Center of Excellence, and the National Science and Technology Council’s AI subcommittees.
How did executive orders change federal AI policy from 2019 to 2025?
Federal AI policy evolved through multiple executive orders: EO 13859 (2019) established the American AI Initiative; EO 13960 (2020) set trustworthy AI principles; EO 14110 (2023) imposed over 100 requirements for safe AI use; EO 14148 (2025) rescinded EO 14110; and EO 14179 (2025) called for a new AI action plan removing barriers to American AI leadership.
What is the role of the National AI Advisory Committee?
The National AI Advisory Committee (NAIAC), established under the Department of Commerce, advises the President and the National AI Initiative Office on AI-related issues including workforce development, technical standards, international competitiveness, and responsible AI deployment. The committee includes members from industry, academia, and civil society.
How many GAO recommendations on federal AI have been implemented?
As of July 2025, only four out of 35 GAO recommendations to 19 federal agencies have been implemented. Three agencies—OMB, the Office of Personnel Management, and the Department of Transportation—account for the implemented recommendations. Sixteen agencies still have a combined 31 recommendations outstanding.