US AI Regulation: Federal Policy Approaches Explained

📌 Key Takeaways

  • No Comprehensive Federal AI Law: Despite hundreds of bills introduced since the 115th Congress, fewer than 30 have been enacted, with nearly half embedded in appropriations or defense legislation rather than standalone AI regulation.
  • Three Regulatory Pathways: The CRS identifies three distinct approaches — regulating AI technology directly, regulating AI use across sectors, and regulating AI within specific sectors — each with unique tradeoffs for innovation and safety.
  • Federal Agencies Report 1,990+ AI Use Cases: Government AI adoption is accelerating, with 337 use cases flagged as rights-impacting or safety-impacting, raising urgent questions about oversight and accountability.
  • EU AI Act Sets Global Standard: The EU’s risk-based classification system with penalties up to 7% of global revenue represents the most comprehensive AI regulatory framework, contrasting sharply with the US voluntary approach.
  • States Leading in Absence of Federal Action: At least 48 states introduced over 1,000 AI-related bills in 2025, creating a growing patchwork of regulations that may ultimately force federal preemption or harmonization.

Understanding the US AI Regulation Landscape

The United States stands at a critical inflection point in artificial intelligence governance. As AI systems become deeply embedded in hiring decisions, healthcare diagnostics, financial lending, criminal justice, and national security operations, the question of how to regulate these technologies has moved from academic debate to urgent policy priority. The Congressional Research Service (CRS) report on regulating artificial intelligence provides the most authoritative analysis of where US AI regulation currently stands and where it may be headed.

What makes the current moment particularly significant is the fundamental tension between two competing imperatives. On one side, proponents of regulation argue that clear rules reduce legal uncertainty for developers, improve public trust, and protect against discriminatory outcomes. On the other, opponents contend that additional regulation stifles innovation, particularly burdens smaller companies and startups, and puts the United States at a competitive disadvantage against China and other nations racing to dominate AI development.

Stanford University’s Cyber Policy Center captured this tension in September 2024 when it noted that AI regulation “is both urgently needed and unpredictable. It also may be counterproductive, if not done well.” This observation underscores the challenge facing policymakers: the stakes of getting AI regulation wrong — whether through overreach or inaction — are enormous. For organizations tracking how governments worldwide approach AI governance, understanding the US regulatory landscape is essential context for strategic planning.

The CRS report reveals that the US approach to AI regulation has been characterized by fragmentation, incrementalism, and an emphasis on voluntary measures rather than binding obligations. Unlike the European Union, which enacted the comprehensive EU AI Act in 2024, the United States has relied primarily on executive orders, agency guidance, and industry self-regulation. This approach has both advantages and significant limitations that merit careful examination.

Defining Artificial Intelligence for Policy Purposes

Before any regulatory framework can function effectively, policymakers must answer a deceptively complex question: what exactly is artificial intelligence? The CRS report highlights that no single, widely agreed-upon definition of AI exists, and this definitional challenge has profound implications for US AI regulation efforts.

Congress previously defined AI in the National AI Initiative Act of 2020 as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” However, the rapid emergence of generative AI systems like large language models prompted the OECD to update its own definition in March 2024 to explicitly reference content generation alongside predictions, recommendations, and decisions.

The updated OECD definition, adopted by 47 countries including the United States, now describes an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This expanded language reflects the reality that modern AI systems do far more than make predictions — they create text, images, code, and synthetic media that can be virtually indistinguishable from human-produced content.

The definitional challenge extends beyond semantics. As the US-EU Trade and Technology Council emphasized, AI terminology is “pivotal to cooperation” and “different terminologies express distinct technological cultures.” When legislation uses the term “artificial intelligence” versus “automated decision system” versus “algorithmic system,” the scope of regulation changes dramatically. Some analysts argue that focusing on “automated systems” rather than “AI” better captures the full range of concerns, since many problematic automated decisions use relatively simple algorithms that might not qualify as “AI” under narrow definitions. This seemingly technical debate has real consequences for which systems face regulatory scrutiny and which operate without oversight.

Three Federal Approaches to AI Regulation

The CRS report identifies three distinct approaches that Congress and federal agencies have considered for regulating artificial intelligence. Each represents a fundamentally different philosophy about how government should interact with rapidly evolving technology, and understanding these approaches is crucial for anyone following US AI regulation developments.

The first approach involves regulating AI technologies directly, using technical thresholds to trigger specific requirements. The most prominent example was President Biden’s Executive Order 14110, which required companies to report when they trained AI models using computing power exceeding 10^26 floating-point operations (FLOPs). At the time of the executive order, this threshold exceeded the computational power of all existing models, though GPT-4 came close. By May 2025, numerous models surpassed the lower EU threshold of 10^25 FLOPs, and xAI’s Grok-3 exceeded even the 10^26 benchmark. This approach offers targeted accountability but faces a critical limitation: rigid technical thresholds may quickly become outdated as AI capabilities evolve.
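
To make the threshold mechanism concrete, here is a minimal illustrative sketch (not drawn from the CRS report or any statute) that estimates training compute using the common 6 × parameters × training tokens rule of thumb and checks the result against the US and EU figures; the model size and token count below are hypothetical.

```python
# Illustrative sketch: compute-based triggers for AI reporting rules.
# The 6 * params * tokens estimate for training FLOPs is a common rule
# of thumb, not a legal test; the model figures below are hypothetical.

US_EO_THRESHOLD = 1e26    # E.O. 14110 reporting threshold (FLOPs)
EU_GPAI_THRESHOLD = 1e25  # EU AI Act systemic-risk presumption (FLOPs)

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute as 6 * parameters * training tokens."""
    return 6 * params * tokens

def thresholds_triggered(flops: float) -> list[str]:
    """Return the regulatory thresholds a training run would cross."""
    triggered = []
    if flops >= EU_GPAI_THRESHOLD:
        triggered.append("EU systemic-risk presumption (1e25 FLOPs)")
    if flops >= US_EO_THRESHOLD:
        triggered.append("US E.O. 14110 reporting (1e26 FLOPs)")
    return triggered

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
flops = estimate_training_flops(params=1e12, tokens=20e12)
print(f"Estimated compute: {flops:.2e} FLOPs")   # 1.20e+26
print(thresholds_triggered(flops))               # crosses both thresholds
```

The sketch also makes the brittleness concern visible: the thresholds are hard-coded constants, and as training efficiency improves, a fixed FLOP count says less and less about what a model can actually do.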

The second approach focuses on regulating the use of AI across all sectors through technology-neutral rules aimed at outcomes rather than specific technologies. The proposed Algorithmic Accountability Act exemplified this approach by directing the FTC to require impact assessments of automated decision systems from large companies. This framework has the advantage of adaptability — it does not need to be rewritten every time a new AI technology emerges — but critics note that applying uniform rules across vastly different sectors may result in regulations that are simultaneously too broad for some contexts and insufficient for others.

The third approach targets AI regulation within specific sectors such as financial services, healthcare, and elections. Bills like the Preventing Deep Fake Scams Act for financial services and the Fraudulent AI Regulations Elections Act represent tailored responses to sector-specific risks. This approach allows regulations to be calibrated to the unique characteristics and risk profiles of individual industries, but it can create gaps where AI applications span multiple sectors or emerge in areas not yet covered by existing sectoral frameworks.

Understanding how these approaches interact with broader AI diffusion trends helps contextualize why the regulatory challenge is so complex. AI systems rarely confine themselves to single sectors, and a model trained for one purpose can be repurposed for entirely different applications with minimal modification.

Congressional AI Activity and Legislative Progress

Despite the urgency of the AI governance challenge, Congressional action has been notably incremental. The CRS report documents that hundreds of bills mentioning “artificial intelligence” have been introduced since the 115th Congress, yet fewer than 30 had been enacted as of May 2025. Perhaps more revealing, nearly half of these enacted laws consisted of AI provisions embedded within larger appropriations or National Defense Authorization Act (NDAA) legislation rather than standalone AI bills.

The 118th Congress undertook several structural efforts to build institutional knowledge about AI. The Bipartisan House Task Force on Artificial Intelligence produced a comprehensive report with 66 key findings and 89 recommendations spanning 15 chapters. The Senate hosted nine AI Insight Forums designed to educate members on AI technologies and their implications. These educational initiatives represented important groundwork but did not translate into major legislative action.

The House Task Force adopted an important analytical principle: “AI issue novelty.” This framework asks whether a given policy issue is “truly new for AI due to capabilities that did not previously exist” — a threshold designed to prevent duplicative mandates where existing laws may already provide adequate coverage. This principle reflects the broader tension in US AI regulation between those who believe existing legal frameworks can be applied to AI and those who argue that AI creates genuinely novel challenges requiring entirely new regulatory architectures.

Much of the proposed legislation has emphasized voluntary guidelines, best practices, and industry self-reporting rather than prohibitions or independent evaluations. The NIST AI Risk Management Framework, for example, provides a voluntary structure for organizations to identify and manage AI risks, but compliance is not mandatory. This voluntary approach reflects a deliberate policy choice to prioritize innovation and industry flexibility, but it also means that the most consequential AI deployments may operate without meaningful external oversight.

The Trump Administration’s AI Action Plan request for information received over 8,700 comments by its March 2025 deadline, signaling intense stakeholder interest in the direction of US AI regulation. The administration has signaled support for “pro-economic-growth AI policies, potentially with a comparatively smaller federal role” and avoidance of “an overly precautionary regulatory regime.” Industry leaders, including CEOs of major AI companies, have themselves called for some regulation, though analysts note that industry recommendations may be designed to serve company interests rather than broader public welfare.

Executive Branch Actions Across Administrations

The executive branch has been the most active arena for US AI regulation, though the policy direction has shifted dramatically between administrations. President Biden’s Executive Order 14110, issued in October 2023, directed more than 50 federal agencies to engage in over 100 specific actions across eight policy areas. This sweeping order represented the most comprehensive executive action on AI to date, establishing reporting requirements, safety standards, and guidance for federal AI procurement and deployment.

However, E.O. 14110 was revoked on January 20, 2025 — the first day of the Trump Administration — signaling a fundamental philosophical shift in the federal approach to AI governance. Where the Biden Administration emphasized safety, security, and trustworthiness alongside innovation, the new administration has prioritized economic growth, competitiveness, and reduced regulatory burden. This abrupt policy reversal illustrates the vulnerability of executive-order-based governance: policies that take months to develop and implement can be eliminated overnight with a change in administration.

Federal agencies have also been increasingly adopting AI themselves. As of January 2025, agencies reported over 1,990 current and planned AI use cases, with 337 identified as rights-impacting or safety-impacting. The top categories include mission-enabling internal agency support, health and medical applications, and government services related to benefits and service delivery. The Government Accountability Office (GAO) made 35 recommendations to 19 federal agencies regarding AI oversight in December 2023, and as of May 2025, 31 of those recommendations remained open — suggesting that even internal government AI governance faces significant implementation challenges.

The shift in focus from AI safety to AI security has been particularly notable. Both the US and UK AI Safety Institutes, established following the November 2023 UK AI Safety Summit, have reoriented their missions. The UK renamed its institute the AI Security Institute in February 2025, while the US NIST reportedly removed references to AI safety, responsible AI, and AI fairness from the expected skills listed in partner agreements. This evolution reflects a broader geopolitical framing of AI as a national security asset rather than primarily a consumer protection concern.

EU AI Act: The Global Regulatory Benchmark

No discussion of US AI regulation is complete without examining the European Union’s AI Act, which represents the most comprehensive AI regulatory framework in the world. Formally signed in June 2024 and entering into force on August 1, 2024, the EU AI Act employs a risk-based classification system that categorizes AI applications into four tiers with correspondingly different regulatory requirements.

At the highest level, certain AI practices are deemed to pose unacceptable risk and are outright banned. These include AI systems that engage in harmful manipulation or deception, social scoring systems, individual criminal risk prediction, untargeted facial recognition database creation, emotion recognition in workplaces and educational institutions, and real-time biometric identification for law enforcement in public spaces. These prohibitions, which took effect on February 2, 2025, represent the strongest regulatory stance any major jurisdiction has taken on AI.

High-risk AI systems — those used in critical infrastructure, education, employment, financial services, law enforcement, and migration — are authorized but subject to stringent requirements including pre-market conformity assessments and ongoing post-market monitoring. These obligations will apply from August 2026. AI systems posing transparency risks must inform users when they are interacting with AI, with these rules applying from August 2025. The vast majority of AI applications fall into the minimal or no risk category and face no additional requirements.
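
One simple way to picture the Act’s structure is as a lookup from risk tier to obligations. The mapping below is a loose illustration of the four tiers described above, not the Act’s legal text, and the example use cases are simplified.

```python
# Simplified illustration of the EU AI Act's four-tier risk ladder.
# Real classification turns on the Act's annexes and case-by-case
# analysis; the tiers, obligations, and examples here are condensed.

RISK_TIERS = {
    "unacceptable": ("banned outright", "social scoring systems"),
    "high":         ("conformity assessment + post-market monitoring",
                     "hiring and credit-scoring tools"),
    "transparency": ("must disclose that users are interacting with AI",
                     "customer-service chatbots"),
    "minimal":      ("no additional obligations", "spam filters"),
}

for tier, (obligation, example) in RISK_TIERS.items():
    print(f"{tier:>12}: {obligation} (e.g., {example})")
```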

The EU AI Act also addresses general-purpose AI (GPAI) models, requiring all providers to maintain technical documentation and comply with EU copyright law. Models exceeding 10^25 FLOPs are presumed to carry systemic risk and face additional obligations. The financial penalties for non-compliance range from €7.5 million (or 1.5% of global annual turnover) to €35 million (or 7% of global annual turnover), making the stakes of non-compliance substantial for major technology companies.
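
As a rough illustration of how those penalty bands work in practice, the sketch below applies the Act’s “whichever is higher” fine structure (a fixed sum or a percentage of global annual turnover) to a hypothetical company; the turnover figure is invented.

```python
# Illustrative sketch of the EU AI Act's "whichever is higher" fine
# structure, using the bands cited above. Which band applies depends
# on the infringement; the turnover figure below is hypothetical.

def ai_act_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Fines are the higher of a fixed sum or a share of global turnover."""
    return max(fixed_eur, pct * turnover_eur)

turnover = 100e9  # hypothetical global annual turnover: EUR 100 billion

# Most severe band (e.g., prohibited practices): EUR 35M or 7%.
print(f"Top band:    EUR {ai_act_fine(turnover, 35e6, 0.07):,.0f}")   # 7,000,000,000
# Lightest band: EUR 7.5M or 1.5%.
print(f"Bottom band: EUR {ai_act_fine(turnover, 7.5e6, 0.015):,.0f}") # 1,500,000,000
```

For a large multinational, the percentage component dominates, which is why the 7% figure, rather than the €35 million floor, is the number most coverage leads with.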

The contrast with the US approach is stark. Where the EU has enacted binding, comprehensive legislation with significant penalties, the US continues to rely primarily on voluntary frameworks, sector-specific guidance, and industry self-regulation. Supporters of the EU model argue that it provides regulatory certainty that actually benefits developers, while critics contend that the compliance burden may deter innovation — noting that companies like Meta and Apple have reportedly declined to launch certain AI products in the EU market.

UK and China: Alternative Governance Models

Beyond the US and EU, two other major approaches to AI governance offer important comparative perspectives. The United Kingdom has pursued a principles-based, non-statutory framework that relies on existing sector-specific regulators rather than creating new AI-specific legislation. In 2023, the UK published “AI Regulation: A Pro-Innovation Approach,” establishing five principles — safety, transparency, fairness, accountability, and contestability — that existing regulators are expected to incorporate into their oversight activities.

The UK model has the advantage of flexibility and speed, since existing regulators can adapt their approaches without waiting for new legislation. However, the Labour government’s January 2025 “AI Opportunities Action Plan” signaled a potential shift toward more structured regulation, and a bill introduced in the House of Lords would create a dedicated AI Authority. This evolution suggests that even jurisdictions initially committed to light-touch approaches may find voluntary principles insufficient as AI systems become more powerful and pervasive.

China has taken a markedly different path, implementing targeted, technology-specific regulations backed by heavy government involvement in AI development. The 2023 Deep Synthesis Rule regulates deepfake technologies, while the 2023 Generative AI Measures address risks from public-facing generative AI services. China’s March 2025 Measures for Labelling AI-Generated Content require both implicit metadata labeling and explicit user-perceivable labeling of AI-generated content, with enforcement beginning September 2025.
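
To illustrate the dual-labeling requirement, here is a minimal sketch that attaches both an implicit, machine-readable marker and an explicit, user-visible notice to a piece of generated text. The field names and notice wording are invented for illustration; the Measures prescribe their own formats.

```python
# Illustrative sketch of dual labeling for AI-generated content: an
# implicit machine-readable marker plus an explicit user-visible notice.
# Field names and notice text are invented, not the formats the Chinese
# labeling measures actually prescribe.
import json

def label_output(text: str, generator: str) -> dict:
    """Wrap generated text with explicit and implicit labels."""
    return {
        # Explicit label: perceivable by the end user.
        "display_text": f"[AI-generated content] {text}",
        # Implicit label: metadata readable by platforms and tools.
        "metadata": {"ai_generated": True, "generator": generator},
    }

print(json.dumps(label_output("Sample output.", generator="demo-model-v1"), indent=2))
```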

What distinguishes China’s approach is the degree of government involvement in the AI industry itself. Through controlling stakes in firms, direct subsidies, and government guidance funds for early-stage AI companies, the Chinese government plays a role in AI development that has no parallel in Western democracies. Analysts debate whether this model is more efficient or leads to misallocated capital and market distortions, but the Stanford AI Index 2025 notes that while “the U.S. still leads in producing top AI models,” China is “rapidly closing the performance gap.” For anyone studying how geopolitical dynamics shape technology policy, the US-China AI competition provides a critical case study.

State AI Laws Filling the Federal Vacuum

In the absence of comprehensive federal AI regulation, US states have emerged as the primary arena for AI governance experimentation. The scale of state-level activity is remarkable: at least 48 states and Puerto Rico introduced over 1,000 bills mentioning “AI” in the 2025 legislative session alone. This explosion of state legislation reflects both the urgency of AI governance challenges and the traditional role of states as “laboratories of democracy” in the American federal system.

Several states have enacted notable AI legislation. Colorado’s SB 24-205 established consumer protections for AI systems, creating an AI Impact Task Force to study implementation. California enacted multiple AI laws, including S.B. 942 requiring AI output marking and A.B. 2013 mandating training data transparency, though Governor Newsom vetoed S.B. 1047, which would have required safety testing for large-scale AI models. Washington’s ESSB 5838 created an AI task force to study governance questions and make recommendations to the legislature.

However, the proliferation of state AI laws has created significant challenges. Critics argue that a patchwork of different state regulations creates compliance nightmares for companies operating nationally, increases costs, and disproportionately impacts small and medium enterprises and startups that lack the resources to navigate 50 different regulatory frameworks. These concerns have fueled calls for federal preemption or at minimum federal baseline standards that would provide regulatory uniformity while allowing states to address specific local concerns.

The state-level landscape also reveals interesting partisan dynamics. While Democratic legislators introduced the majority of AI-related bills, Republican-led bills focused on liability protections and government use have achieved higher passage rates in several states. This bipartisan engagement suggests that AI governance transcends traditional political divisions, even as the specific policy priorities differ between parties. Understanding this dynamic is essential for predicting how federal AI regulation may eventually evolve, since Congressional action often follows successful state-level experimentation. For more insights on how AI impacts organizational productivity across sectors, explore this analysis of AI’s effect on firm productivity and employment.

Future of AI Governance and Policy Recommendations

The CRS report outlines several policy options for Congress as it navigates the complex terrain of AI regulation. These range from leveraging existing regulatory frameworks to creating entirely new authorities, supporting US AI development and deployment, and engaging more deeply with international regulatory efforts. Each option carries distinct implications for innovation, safety, and American competitiveness in the global AI landscape.

One critical consideration is whether existing laws — covering consumer protection, civil rights, product liability, and sector-specific regulation — can adequately address AI-related harms, or whether AI’s unique characteristics necessitate fundamentally new regulatory approaches. The House Task Force’s “AI issue novelty” principle provides a useful framework for this analysis, but reasonable people disagree about which AI challenges are genuinely novel and which are simply new manifestations of longstanding policy concerns.

The international dimension adds another layer of complexity. With 47 countries having adopted the OECD AI Principles, the G7 Hiroshima AI Process establishing voluntary guidelines, and the EU AI Act creating binding precedents, the United States faces pressure to engage constructively in global AI governance or risk having international standards set without American input. The Global Partnership on AI, now spanning 44 countries across six continents, represents one multilateral forum for this engagement, but the effectiveness of these international frameworks ultimately depends on domestic implementation.

The question of who should evaluate and audit AI systems remains unresolved. Proposals range from government oversight to industry self-assessment to independent third-party auditing. OpenAI’s CEO has proposed a licensing agency for advanced AI, though critics warn this could disadvantage startups and open-source developers. Others have called for a government-mandated professional AI auditing industry that could “deliver accountability without disincentivizing innovation.” The resolution of this debate will likely determine whether US AI regulation achieves meaningful oversight or remains largely aspirational.

What is clear is that the status quo — fragmented, voluntary, and largely reactive US AI regulation — faces growing pressure from multiple directions. As AI systems become more capable, more widely deployed, and more consequential in their impacts on individuals and society, the case for more structured governance becomes harder to dismiss. Whether that governance takes the form of comprehensive federal legislation, enhanced agency authorities, mandatory industry standards, or some combination of all three remains the central question of AI policy in the United States. For a deeper understanding of how research methodologies shape AI governance decisions, exploring the underlying analytical frameworks provides valuable context.

Frequently Asked Questions

What is the current status of US AI regulation at the federal level?

As of May 2025, the United States has no comprehensive federal AI regulation. Fewer than 30 AI-related bills have been enacted since the 115th Congress, with nearly half consisting of provisions embedded in appropriations or defense legislation. Federal efforts center on agency assessments, voluntary industry commitments, and exploration of whether existing regulatory authorities suffice for AI oversight.

How does US AI regulation compare to the EU AI Act?

The EU AI Act is the most comprehensive AI regulatory framework globally, using a risk-based classification system with penalties up to 7% of global revenue. The US takes a more sector-specific, voluntary approach emphasizing innovation and industry self-regulation, with no equivalent broad federal legislation. The UK follows a principles-based approach through existing regulators.

What are the three main approaches to regulating AI in the United States?

The CRS identifies three approaches: regulating AI technologies directly using technical thresholds like computing power (FLOPs), regulating AI use across all sectors through technology-neutral outcome-based rules, and regulating AI use within specific sectors such as healthcare, finance, or elections with tailored requirements.

How many AI use cases has the federal government reported?

As of January 2025, federal agencies reported over 1,990 current and planned AI use cases, with 337 identified as rights-impacting or safety-impacting. The top categories include mission-enabling internal agency support, health and medical applications, and government services for benefits and service delivery.

What role do states play in AI regulation when federal law is absent?

States have become primary AI regulators in the absence of federal legislation. At least 48 states introduced over 1,000 AI-related bills in 2025 alone. Notable examples include Colorado’s consumer protection AI Act, California’s multiple AI laws on transparency and training data, and various state task forces studying AI governance approaches.
