State AI Policy: Why It Thrives in Some States and Fades in Others

📌 Key Takeaways

  • 385 AI bills tracked: Every US state introduced at least one AI-related bill between January 2023 and October 2025, but activity levels vary dramatically based on political and economic conditions.
  • Two-barrier model: States fail to legislate AI either because they lack fiscal and institutional capacity (material barrier) or because ideological resistance to regulation suppresses action (ideological barrier).
  • Political alignment dominates: Democrat-leaning states with younger, wealthier populations consistently produce the most AI legislation, while Republican-led states lag across all three bill categories.
  • Three bill categories: State AI legislation clusters around individual protection, information ecosystem transparency, and systemic governance — each with distinct adoption patterns.
  • Federal tension rising: Trump’s December 2025 executive order attempts to preempt state AI laws, but bipartisan Senate resistance (99-1 vote) signals states will continue fighting for regulatory autonomy.

Why State AI Policy Varies Across America

State AI policy has emerged as one of the most consequential and unevenly distributed areas of technology governance in the United States. While federal lawmakers debate broad principles and industry self-regulation, it is state legislatures that are building the actual regulatory frameworks governing how artificial intelligence affects citizens’ daily lives. A landmark Brookings Institution study published in January 2026 by researchers James S. Denford, Gregory S. Dawson, Kevin C. Desouza, and Marc E. B. Picavet reveals why some states are racing ahead with detailed AI governance frameworks while others have barely engaged with the issue.

The research analyzed all AI-related bills introduced across all 50 states from January 2023 through October 2025 — a total of 385 bills. Every state introduced at least one AI bill during this period, confirming that artificial intelligence regulation has reached universal awareness at the state level. However, the volume, scope, and ambition of these legislative efforts diverge dramatically based on a state’s political orientation, economic capacity, and demographic profile. Understanding these divergence patterns is essential for anyone tracking the evolution of AI governance requirements across American jurisdictions.

The findings carry immediate practical implications. For technology companies deploying AI systems across multiple states, this patchwork of regulation creates compliance complexity that will only intensify. For policymakers, the study provides an empirical framework for understanding why their neighbors may be legislating aggressively while their own state remains inactive. And for citizens, the research illuminates a growing governance gap in which protections against AI-driven harms depend heavily on zip code rather than on any coherent national standard.

Three Categories of State AI Legislation

The Brookings researchers grouped approximately 20 substantive categories of AI bills into three overarching themes that capture the full spectrum of state AI policy approaches. This classification system reveals distinct patterns in how states prioritize different aspects of artificial intelligence governance.

The first category — protection of the individual — encompasses legislation addressing algorithmic discrimination, biometric privacy, automated decision-making in employment and housing, and consumer protections against AI-driven manipulation. These bills focus on preventing direct harm to citizens from AI systems, particularly in high-stakes domains like criminal justice, healthcare, and financial services. States pursuing individual protection legislation tend to be responding to visible harms or constituent pressure around specific AI applications.

The second category — transparency and trust in information ecosystems — targets the integrity of public discourse in an AI-saturated media environment. Bills in this domain address deepfake regulation, chatbot disclosure requirements, AI-generated content labeling, and content provenance standards. This is the most technically complex and context-dependent category of state AI legislation, requiring both political will and administrative sophistication to implement effectively. As the National Conference of State Legislatures has documented, transparency bills are proliferating rapidly but with highly uneven implementation standards.

The third category — responsible systemic governance — involves the creation of institutional infrastructure for ongoing AI oversight. This includes establishing AI advisory councils, defining procurement standards for government AI systems, mandating algorithmic impact assessments, and creating reporting frameworks for AI deployment. Governance legislation represents the most forward-looking approach, building institutional capacity to manage AI risks over time rather than reacting to specific harms. States with strong governance legislation are essentially investing in their ability to regulate future AI developments, not just current ones.

How Demographics Shape AI Policy Adoption

One of the most striking findings in the Brookings research is the powerful role that state demographics play in determining AI legislative activity. The study used 2024 US Census Bureau data on population age structure, combined with income and poverty metrics from the Bureau of Economic Analysis, to identify the demographic conditions most closely associated with high and low AI bill production.

States with younger populations consistently appear in high-activity configurations for AI legislation. The researchers found that younger electorates are more technologically literate, more likely to encounter AI systems in their daily lives, and more receptive to government intervention in emerging technology domains. This demographic factor functions as an enabling condition — it does not guarantee legislative action on its own, but its absence reliably constrains it. States where the population skews older (those where more than 55% of residents fall above the study's age cutoff) show markedly lower AI legislative activity across all three bill categories.

Economic capacity proves equally decisive. States with higher per capita income possess the fiscal resources needed to design, implement, and enforce technology regulation. AI governance is not cost-free: it requires technical expertise in legislative drafting, regulatory infrastructure for enforcement, and administrative capacity for ongoing oversight. Wealthier states can afford to build these capabilities, while poorer states face a material barrier that prevents engagement even when political will exists. The combination of youth and wealth creates the strongest demographic foundation for AI policy innovation, a pattern visible in states like California, New York, and Washington that lead across all three legislative categories.

Poverty rates add an additional layer of complexity. The interaction between high income and high poverty (indicating significant income inequality) emerges as a specific driver of individual protection legislation. States with stark economic divides between wealthy technology workers and lower-income communities face more visible AI harms — algorithmic hiring discrimination, automated benefit denial, predictive policing — that generate constituent pressure for protective regulation. This explains why California and New York, despite their wealth, are among the most aggressive states on AI protection bills.


Why Democrat-Led States Dominate AI Regulation

Political alignment emerges as the single strongest predictor of state AI policy activity in the Brookings study. Using qualitative comparative analysis (QCA), the researchers identified two primary configurations associated with high overall AI bill production. The first consists of Democrat-leaning states with younger populations. The second includes high per capita income states led by Democratic governors. Several states occupy both configurations simultaneously — New York, California, Illinois, Maryland, New Jersey, and Washington — creating an overlap zone of maximum legislative productivity.

The Democratic advantage in AI legislation operates through multiple channels. First, progressive political ideology is more receptive to government regulation of emerging technologies, viewing it as a necessary counterbalance to corporate power and a means of protecting vulnerable populations. Second, Democratic governors and legislative majorities provide the institutional machinery to move bills through committee, floor debate, and signing. Third, Democratic-leaning electorates in urban, technology-heavy regions generate both constituent demand for AI regulation and a talent pool of policy experts capable of drafting technically sound legislation.

Nevada and Virginia align specifically with the first configuration (Democrat-leaning, younger populations) but not the second, while Massachusetts and Connecticut align only with the second configuration (high income, Democratic governor). This differentiation reveals that the pathways to AI legislative activity are multiple — there is no single formula, but rather a set of sufficient conditions where either demographic readiness or fiscal-political capacity can trigger regulatory action. The common thread is that Democratic political alignment appears as a necessary condition in virtually every high-activity configuration identified by the research.
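The two high-activity configurations can be read as Boolean sufficiency conditions, either of which alone predicts high bill production. The sketch below encodes that logic in Python; the per-state attribute values are illustrative stand-ins inferred from the examples in this section, not the study's actual coded dataset.

```python
# Illustrative sketch of the QCA configurations described above.
# Attribute values are assumptions drawn from the examples in the text,
# not the Brookings study's coding.

def high_activity(state: dict) -> bool:
    """A state predicts high AI bill production if it matches either configuration."""
    config_1 = state["lean"] == "D" and state["young_population"]   # Democrat-leaning + younger
    config_2 = state["high_income"] and state["governor"] == "D"    # high income + Democratic governor
    return config_1 or config_2

states = {
    "California":    {"lean": "D", "young_population": True,  "high_income": True,  "governor": "D"},  # both
    "Nevada":        {"lean": "D", "young_population": True,  "high_income": False, "governor": "R"},  # first only
    "Massachusetts": {"lean": "D", "young_population": False, "high_income": True,  "governor": "D"},  # second only
    "Wyoming":       {"lean": "R", "young_population": False, "high_income": True,  "governor": "R"},  # neither
}

for name, attrs in states.items():
    print(f"{name}: {'high' if high_activity(attrs) else 'low'} expected activity")
```

Because either configuration suffices on its own, the check is a logical OR — mirroring the study's finding that there is no single formula, only a set of sufficient conditions.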

This finding carries significant implications for the trajectory of state AI policy. As long as AI regulation remains coded as a progressive policy priority, the partisan geography of AI governance will mirror other regulatory divides — environmental protection, digital privacy, and healthcare technology — where blue states lead and red states defer. Breaking this pattern would require either a shift in conservative attitudes toward technology regulation or federal intervention that compels nationwide minimum standards, a topic explored further in the analysis of high-risk AI fundamental rights frameworks.

The Material Barrier to State AI Governance

The Brookings study introduces a powerful two-barrier model that explains why state AI policy either thrives or fades. The first barrier is material: limited fiscal and institutional capacity prevents some states from acting even when they recognize AI risks and have political motivation to address them.

Material barriers manifest in several concrete ways. States with lower per capita income simply have fewer resources to allocate to emerging policy domains. Legislative staff may lack the technical expertise needed to draft AI-specific legislation that is both legally sound and technologically informed. Regulatory agencies may not have personnel trained in algorithmic auditing, AI impact assessment, or the technical standards required for meaningful enforcement. Without these institutional capabilities, even well-intentioned legislative proposals stall in committee or produce laws that are too vague to implement effectively.

The data reveals that material barriers create a specific pattern of low legislative activity: states with older populations and lower per capita income — regardless of political orientation — appear in low-activity configurations. This is particularly notable for governance legislation, which requires the most institutional capacity to implement. Establishing AI advisory councils, defining procurement standards, and mandating algorithmic impact assessments all demand sustained administrative investment that cash-strapped states cannot afford.

The material barrier also explains an important asymmetry in the data. While wealth is not sufficient for high AI legislative activity (wealthy Republican states like Wyoming still show low activity), poverty is nearly sufficient for low activity. States on the lower end of the income spectrum consistently appear in low-activity configurations even when other conditions might favor regulation. This suggests that addressing the AI governance gap will require targeted investment in state-level regulatory capacity — technical assistance programs, model legislation clearinghouses, and shared enforcement infrastructure that can lower the cost of participation for resource-constrained states.

Interestingly, the material barrier interacts differently with each legislative category. Protection legislation, which often responds to specific constituent harms, can sometimes overcome material constraints through grassroots pressure. Transparency legislation, which requires sophisticated technical engagement, is the most capacity-constrained domain. Governance legislation falls in between: it demands institutional investment but can build capacity progressively through pilot programs and advisory bodies that grow over time.

The Ideological Barrier to AI Policy Progress

The second barrier identified in the Brookings two-barrier model is ideological: regulatory skepticism rooted in market-oriented political preferences constrains AI legislative action even in states that possess sufficient fiscal and institutional capacity. This barrier is arguably the more powerful of the two, because it prevents action as a matter of political choice rather than practical limitation.
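Taken together, the two barriers behave like sequential gates: a state produces substantial AI legislation only when it clears both. A minimal sketch of that logic follows, assuming hypothetical field names and an arbitrary income threshold — the study itself does not publish a formula.

```python
# Hedged sketch of the two-barrier model. Field names and the income
# threshold are illustrative assumptions, not values from the study.

from dataclasses import dataclass

@dataclass
class StateProfile:
    per_capita_income: float   # thousands of USD; proxy for fiscal/institutional capacity
    republican_lean: bool      # electorate orientation; proxy for regulatory skepticism

def clears_material_barrier(s: StateProfile, income_floor: float = 60.0) -> bool:
    # Material barrier: enough fiscal capacity to draft, implement, and enforce.
    return s.per_capita_income >= income_floor

def clears_ideological_barrier(s: StateProfile) -> bool:
    # Ideological barrier: political willingness to regulate at all.
    return not s.republican_lean

def expected_activity(s: StateProfile) -> str:
    # A state must clear BOTH barriers; failing either predicts low activity.
    both = clears_material_barrier(s) and clears_ideological_barrier(s)
    return "high" if both else "low"
```

A wealthy Republican-leaning state clears the material barrier but not the ideological one, so the model still predicts low activity — matching the wealthy-Republican-state pattern noted above.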

The data is unambiguous on this point. Republican-leaning electorates and Republican governors consistently appear in low-activity configurations across all three AI bill categories. In the “All Bills Low” outcome, the first and strongest configuration consists simply of states with a Republican-leaning electorate. This single condition — independent of income, age, or poverty — is sufficient to predict low AI legislative activity. Conservative states stretching from Alaska to Florida, across the heartland, and throughout the South form a broad geography of AI regulatory inaction.

The ideological barrier operates through a set of interrelated political preferences. Conservative lawmakers tend to favor market self-regulation over government intervention, viewing AI regulation as potentially stifling innovation and economic competitiveness. They are more likely to frame AI policy as a federal rather than state responsibility, preferring to wait for national standards rather than risking regulatory overreach at the state level. They may also face industry lobbying pressure to avoid creating compliance burdens that would disadvantage businesses operating within their borders.

However, the Brookings findings suggest that ideological opposition to state AI policy is not monolithic. The study’s transparency analysis reveals that Democrat-leaning states with Republican governors sometimes produce high transparency bill activity — suggesting that divided government can create productive tension where moderate Republican executives engage with AI regulation under pressure from progressive legislatures. Virginia’s appearance in high-transparency configurations demonstrates that Republican governors in blue-leaning states may be more willing to engage with AI policy than their counterparts in solidly conservative jurisdictions.

The ideological barrier also has a compounding effect over time. States that do not engage with AI regulation early miss the opportunity to build institutional knowledge, develop regulatory expertise, and establish stakeholder relationships that would facilitate future action. As technology leaders accelerate AI deployment, the gap between regulated and unregulated states will widen, creating an increasingly difficult catch-up challenge for late movers.


State AI Legislation on Individual Protection

Individual protection represents the most politically charged category of state AI policy. The Brookings analysis found only one high-activity configuration for protection legislation: Democrat-leaning states with high income and high poverty rates — a combination indicating significant income inequality. California and New York epitomize this pattern, combining progressive political will with the economic conditions that make AI harms most visible and politically salient.

Protection bills address the most tangible and immediate risks that AI systems pose to individuals. These include algorithmic discrimination in hiring, lending, and insurance decisions; biometric surveillance and facial recognition deployment by law enforcement; automated eligibility determinations for government benefits; and AI-driven content manipulation targeting vulnerable consumers. Each of these domains involves AI applications that directly affect citizens’ life outcomes, making them fertile ground for legislative action where political conditions support it.

The low-protection configurations tell an equally important story. Three distinct paths produce low individual protection activity: a Republican party base, Republican gubernatorial leadership, and lower income combined with higher poverty rates. These configurations show substantial geographic overlap, encompassing nearly the entire South (from Arizona to West Virginia and Florida, with Virginia as the lone exception), most of the West excluding Pacific Coast states, and the western Midwest stopping at Minnesota and Illinois.

The predominance of political factors in protection legislation suggests that the AI governance gap is fundamentally a reflection of political priorities rather than technical or fiscal limitations alone. States where protection bills are absent are not necessarily unaware of AI risks — they are choosing, through their elected representatives, not to address them through state regulation. This political choice has direct consequences for residents: a worker in Alabama facing algorithmic hiring discrimination has fundamentally different legal recourse than a worker in California facing the identical situation.

For organizations operating across state lines, this fragmented protection landscape creates both compliance challenges and ethical questions. Companies deploying AI hiring tools, credit scoring algorithms, or automated customer service systems must navigate a patchwork of requirements that varies not by the riskiness of their technology but by the political orientation of the jurisdictions where they operate. The White House Blueprint for an AI Bill of Rights attempted to establish voluntary national principles, but without legislative teeth, state-level protection bills remain the primary mechanism for enforceable AI accountability in the United States.

Transparency and Trust in AI Ecosystems

Transparency legislation presents the most complex and context-dependent pattern in the Brookings analysis. The study found only two pathways to high transparency bill activity, both with low coverage — meaning they represent narrow, highly specific conditions rather than broad trends. Both configurations involve Democrat-leaning states with Republican governors, though one features lower per capita income (Vermont and Nevada) and the other features higher poverty or greater income inequality (Virginia).

This narrow pathway structure suggests that transparency legislation — which includes bills addressing deepfakes, AI-generated content disclosure, chatbot identification requirements, and content provenance standards — depends more on policy entrepreneurship than structural conditions. Unlike protection bills (driven by visible harms) or governance bills (enabled by institutional capacity), transparency bills require a specific combination of political motivation and technical engagement that emerges only in particular institutional contexts.

The divided-government finding is particularly noteworthy. States where a Democrat-leaning electorate coexists with a Republican governor may produce transparency legislation precisely because the political tension creates space for compromise on information-integrity issues. Deepfakes and AI-generated misinformation are bipartisan concerns: Democrats worry about election interference and social manipulation, while Republicans are concerned about AI-powered censorship and content moderation overreach. This overlapping concern may create a narrow but real legislative opening in divided-government states.

Low transparency bill configurations mirror the broader pattern: Republican-dominated party bases, poorer Democratic-governor states with limited administrative capacity, and high-income states with Republican governors all produce low transparency activity. The capacity constraint is particularly acute for transparency legislation because these bills require sophisticated technical understanding of how AI-generated content propagates through digital ecosystems. Drafting effective deepfake legislation, for instance, requires familiarity with generative AI capabilities, digital forensics standards, and platform distribution mechanisms — expertise that is scarce in many state legislatures. The National Institute of Standards and Technology has published technical frameworks that could support state transparency efforts, but translating federal technical standards into enforceable state law requires dedicated policy staff that many states lack.

Responsible AI Governance at the State Level

Governance legislation — the creation of institutional infrastructure for ongoing AI oversight — shows two moderate-coverage pathways to high activity. Both require high-income states, confirming that systemic governance demands fiscal capacity as a prerequisite. The first pathway adds high poverty (income inequality) as a condition, producing activity in states like Colorado, Illinois, New York, and Washington. The second pathway substitutes a Democratic governor, producing activity in California and Virginia.

These governance-focused states are building the institutional architecture that will determine how effectively American government manages AI over the coming decades. Their legislative efforts include establishing AI advisory councils with technical expertise, defining procurement standards that require algorithmic impact assessments before government agencies can purchase AI systems, creating reporting frameworks for public-sector AI deployment, and funding research on AI risks and opportunities specific to their state context.

The governance domain most clearly illustrates the enabling role of fiscal capacity. Unlike protection legislation (which can emerge from grassroots pressure) or transparency legislation (which can rely on policy entrepreneurship), governance legislation requires sustained institutional investment that only wealthier states can afford. Creating an AI advisory council requires funding for expert appointments, staff support, and ongoing research. Defining procurement standards requires technical review capacity within state agencies. Mandating impact assessments requires trained auditors and enforcement mechanisms. Each of these institutional building blocks has ongoing operational costs that make governance the most resource-intensive category of AI legislation.

The low-governance configurations are particularly revealing. Four distinct pathways produce low governance activity: deep-red Republican states (ideological opposition), older and lower-income states, older states with higher poverty, and older states with Republican governors. The common thread across three of four configurations is an aging population, suggesting that demographic structure acts as a particularly strong brake on governance legislation. Older populations may be less engaged with AI policy issues, less likely to generate constituent pressure for technology governance, and more resistant to government expansion into new regulatory domains.

For policymakers in low-governance states seeking to build AI oversight capacity, the Brookings findings suggest a pragmatic approach: start with low-cost, high-visibility initiatives (like joining multi-state AI governance compacts or adopting model legislation from leading states) before attempting to build full institutional infrastructure. The Bureau of Economic Analysis data on state income capacity can help identify which investment strategies are realistic given each state’s fiscal position.

Federal Preemption and State AI Policy Conflicts

The Brookings analysis takes on added urgency in light of President Trump’s executive order signed December 11, 2025, which seeks to preempt state-level AI laws, promote US dominance in artificial intelligence, and consolidate regulatory authority at the federal level. The order directs the US attorney general to challenge state laws deemed to impede AI leadership and authorizes withholding federal infrastructure funds from states that fail to comply. In effect, the executive order attempts to resolve the patchwork problem by suppressing state-level regulation rather than harmonizing it.

This federal preemption effort reflects the same ideological posture that the Brookings researchers observed in many Republican-led states: a preference for minimal government intervention in AI development, prioritizing innovation speed over regulatory caution. The executive order frames state AI regulation as a potential drag on American competitiveness, particularly in the global race against China and the European Union, both of which are pursuing their own comprehensive AI governance frameworks.

However, the political landscape for federal preemption is far more contested than the executive order suggests. In July 2025, the US Senate voted 99-1 to strip a proposed 10-year moratorium on state AI regulation from pending federal legislation. This near-unanimous rejection reveals that senators across both parties — even those ideologically skeptical of state-level AI regulation — are unwilling to surrender state legislative autonomy over technology governance. The Tenth Amendment tradition of state police power over consumer protection, public safety, and commerce regulation creates constitutional and political resistance to sweeping federal preemption.

The tension between federal preemption and state AI policy activism will likely intensify as AI applications proliferate into every domain of American life. States like California and New York that have invested heavily in AI governance infrastructure are unlikely to dismantle their regulatory frameworks voluntarily. Meanwhile, the executive order’s enforcement mechanisms — attorney general challenges and funding withholding — face legal uncertainty and political backlash. The most probable outcome is a contested middle ground where federal principles coexist with state-level implementation, similar to the model that has emerged in environmental regulation, data privacy (the California Consumer Privacy Act serving as a de facto national standard), and financial services oversight.

For organizations tracking AI policy developments, this analysis underscores the importance of monitoring state-level legislative activity alongside federal pronouncements. The Brookings two-barrier model provides a practical tool for predicting which states are most likely to introduce new AI legislation and which types of bills they will prioritize. States that combine fiscal capacity with progressive political alignment — the leading indicators identified in this research — will continue to drive the frontier of American AI governance regardless of federal preemption attempts.


Frequently Asked Questions

Why do some states have more AI legislation than others?

Brookings research identifies two key barriers. States with younger populations, higher per capita income, and Democratic political leadership tend to produce significantly more AI legislation. Conversely, states with older demographics, lower income, and Republican governance lag behind due to material constraints and ideological resistance to technology regulation.

What are the three types of state AI bills?

State AI legislation falls into three categories: protection of the individual (addressing algorithmic discrimination, biometric privacy, and automated decision-making), transparency and trust in information ecosystems (tackling deepfakes, chatbot disclosure, and content provenance), and responsible systemic governance (establishing AI oversight bodies, procurement standards, and impact assessments).

How does political party affiliation affect state AI policy?

Political alignment is the strongest predictor of AI legislative activity. Democrat-led states with Democratic-leaning electorates consistently produce more AI bills across all three categories. Republican-led states show lower activity primarily due to ideological opposition to government regulation of emerging technology, not lack of awareness.

What is the two-barrier model for state AI governance?

The two-barrier model identifies material barriers (limited fiscal and institutional capacity preventing action even when risks are recognized) and ideological barriers (regulatory skepticism rooted in market-oriented political preferences constraining action even where capacity exists). States must overcome both barriers to achieve comprehensive AI governance.

How does federal AI policy affect state-level AI regulation?

President Trump’s December 2025 executive order seeks to preempt state-level AI laws and consolidate AI authority at the federal level. The order directs the attorney general to challenge state laws deemed to impede AI leadership and may withhold infrastructure funds from non-compliant states. However, a Senate vote of 99-1 against a 10-year moratorium on state AI regulation shows bipartisan resistance to federal preemption.

Which states lead in AI policy adoption?

New York, California, Illinois, Maryland, New Jersey, and Washington lead across all AI policy categories. These states combine younger populations, strong fiscal capacity, and Democratic political control. Nevada and Virginia also rank highly for overall bill activity, while Massachusetts and Connecticut lead among high-income states with Democratic governors.
