State AI Legislation 2025: Trends and Policy Analysis

📌 Key Takeaways

  • 210 Bills Across 42 States: The 2025 legislative session saw unprecedented AI legislative activity, though only about 9% of tracked bills were ultimately enrolled or enacted into law.
  • Transparency Over Mandates: States shifted away from sweeping high-risk AI frameworks toward narrower, transparency-driven approaches, with eight laws requiring AI disclosure when individuals interact with or are subject to AI decisions.
  • Healthcare AI Takes Center Stage: Nearly 9% of all AI bills focused on healthcare, with four states enacting laws prohibiting AI from independently diagnosing patients or replacing licensed professionals.
  • Chatbot Regulation Emerges: Five chatbot-specific bills were enrolled or enacted in 2025 compared to zero in 2024, driven by lawsuits involving minors and companion AI platforms like Character.AI.
  • Innovation-Friendly Mechanisms: States pioneered regulatory sandboxes (Texas, Delaware, Utah) and affirmative defenses for AI companies maintaining governance frameworks, balancing consumer protection with growth support.

State AI Legislation Landscape in 2025

The 2025 state legislative session marked a watershed moment in American AI governance. With the federal government continuing to rely on voluntary frameworks and executive guidance rather than comprehensive legislation, state legislatures stepped into the regulatory vacuum with unprecedented energy. The Future of Privacy Forum (FPF) report documents 210 AI-related bills introduced across 42 states that could directly or indirectly affect private-sector AI development and deployment — and only eight states did not introduce any bills meeting FPF’s analytical threshold.

This surge in state AI legislation reflects a fundamental reality of American governance: when federal action stalls, states become the primary laboratories for policy experimentation. The volume is staggering — other legislative trackers estimate over 1,000 total AI-related bills were introduced when using broader definitions — but the passage rate tells an equally important story. Only about 9% of the bills FPF tracked were enrolled or enacted, resulting in 11 bills signed into law and nine more awaiting executive action, all containing provisions that directly affect the private sector.

What makes the 2025 legislative landscape particularly significant is not just its scale but its sophistication. Unlike the initial wave of AI bills that often proposed broad, comprehensive regulatory frameworks modeled loosely on the EU AI Act, the 2025 session saw legislatures pivot toward more targeted, pragmatic approaches. This evolution reflects hard-won lessons from 2024, when ambitious comprehensive bills like Colorado’s AI Act and California’s S.B. 1047 faced either implementation challenges or outright vetoes, and suggests a maturing understanding of what AI regulation can realistically accomplish at the state level.

The FPF report classifies bills into four thematic categories that illuminate the strategic priorities of state legislatures: use and context-specific bills targeting AI in high-risk decisions, technology-specific bills addressing generative AI and frontier models, liability and accountability bills defining legal responsibility for AI systems, and government use and strategy bills with downstream private-sector effects. This taxonomy reveals a legislative ecosystem that is both diverse and increasingly strategic in its approach to AI governance.

Shift From Broad Frameworks to Targeted AI Laws

Perhaps the most significant trend in 2025 state AI legislation was the decisive move away from sweeping, comprehensive AI frameworks toward narrower, more targeted regulatory approaches. No new standalone “high-risk” or automated decision-making technology (ADMT) frameworks were enacted in 2025 — a stark contrast to the ambitions of the previous legislative cycle when Colorado’s 2024 AI Act attempted to establish broad obligations for developers and deployers of high-risk AI systems.

Where substantive obligations did emerge, they came through amendments to existing laws rather than new standalone legislation. The California Privacy Protection Agency (CPPA) continued developing regulations on automated decision-making technology through its ongoing rulemaking process. Connecticut’s SB 1295 updated the state’s comprehensive data privacy law to address AI-specific concerns. New Jersey advanced similar rulemaking, and Utah’s SB 226 amended its existing generative AI law to focus specifically on “high-risk” consumer-facing interactions involving financial, legal, or medical decisions.

This evolutionary approach represents a pragmatic recognition that AI regulation is most effective when grafted onto existing legal frameworks rather than constructed from scratch. Connecticut’s SB 2, which began as a broad “high-risk” AI framework modeled on the Colorado approach, was progressively pared back through the legislative process until it focused primarily on transparency requirements. It passed the Senate in this narrower form but ultimately was not enacted, illustrating the political difficulty of comprehensive AI legislation even in states sympathetic to regulation.

Transparency and disclosure emerged as the dominant regulatory tool across the legislative landscape. Eight enrolled or enacted laws now require individuals to be informed when they are interacting with or subject to AI-driven decisions. These disclosure requirements range from straightforward notices in narrowly targeted laws to detailed substantive disclosures about the purpose, scope, and data inputs of AI systems in broader frameworks. Consumer and labor advocates argue that making AI use visible equips individuals to exercise their existing rights under civil rights, consumer protection, and privacy laws — creating a “new notice and choice regime” that parallels the foundational approach of data privacy regulation.

Healthcare AI Regulation Across States

Healthcare emerged as one of the most active domains for state AI legislation in 2025, with nearly 9% of all introduced AI-related bills focusing specifically on healthcare applications. Four states enacted healthcare-specific AI laws, with two additional broader frameworks also applying to healthcare settings. The central concern driving this legislative activity is clear: as AI systems become capable of diagnosing diseases, recommending treatments, and even conducting therapy sessions, the question of where human professional judgment must remain paramount becomes both urgent and deeply personal.

Illinois HB 1806 exemplifies the most detailed approach to healthcare AI regulation. The law bars licensed therapy professionals from using AI beyond “supplementary support” functions, requires patient consent before any AI involvement in treatment, and establishes a general prohibition against AI therapy services operating without a licensed professional’s oversight. This legislation reflects a fundamental policy judgment that mental health treatment requires human connection and professional accountability that AI systems cannot provide — at least not yet.

The common requirements across healthcare AI laws reveal a consistent legislative philosophy. States are not banning AI from healthcare settings entirely; instead, they are establishing clear boundaries around what AI can and cannot do independently. Prohibitions typically focus on AI independently diagnosing patients, making treatment decisions without human review, or replacing licensed professionals in patient-facing roles. Disclosure obligations ensure patients know when AI is being used in their care, preserving informed consent as a foundational principle even as the technologies involved become more complex.

Nevada’s AB 406 takes a complementary approach by prohibiting unlicensed AI systems from providing mental healthcare while also restricting how licensed providers can use AI tools. Texas SB 1188 requires healthcare practitioners to review information obtained through AI before acting on it, ensuring that AI serves as a decision-support tool rather than a decision-maker. These varied approaches share a common thread: the recognition that healthcare AI regulation must protect patients while preserving the potential benefits that AI can bring to clinical practice, research, and administrative efficiency.


Chatbot and Companion AI Legislation

The explosion of chatbot-specific legislation in 2025 represents one of the most dramatic shifts in the state AI regulatory landscape. Five chatbot-specific bills were enrolled or enacted in 2025, compared to zero signed into law in 2024. This legislative acceleration was driven by a convergence of factors: multiple lawsuits involving minors and companion chatbot platforms like Character.AI and Snapchat’s My AI, state attorney general complaints in Utah and Florida, and growing public concern about the psychological impact of AI companions on young users.

Disclosure requirements form the cornerstone of chatbot regulation across states. The underlying principle is straightforward: users should know when they are interacting with an AI system rather than a human being. California’s SB 243 requires reminders every three hours for minor users that they are interacting with AI. New York’s S-3008C and S 5668 mandate notification at the beginning of conversations and at three-hour intervals. Utah’s SB 452 requires disclosures before users access chatbot services and whenever a user asks whether AI is being used in the conversation.
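To make the compliance burden concrete, the sketch below encodes these three disclosure triggers as a single decision function. It is a simplified illustration only: the jurisdiction codes, parameter names, and trigger logic are paraphrased from the provisions summarized above rather than taken from the statutory texts.

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=3)  # CA and NY both use three-hour intervals

def disclosure_due(jurisdiction: str, *, session_start: bool,
                   last_disclosure: datetime | None, now: datetime,
                   user_is_minor: bool, user_asked_if_ai: bool) -> bool:
    """Return True if a 'you are interacting with AI' notice is due.

    Simplified paraphrase of the provisions discussed above; not legal advice.
    """
    interval_elapsed = (last_disclosure is None
                        or now - last_disclosure >= REMINDER_INTERVAL)
    if jurisdiction == "CA":  # SB 243: periodic reminders for minor users
        return user_is_minor and interval_elapsed
    if jurisdiction == "NY":  # S-3008C / S 5668: at start, then every 3 hours
        return session_start or interval_elapsed
    if jurisdiction == "UT":  # SB 452: before access and whenever the user asks
        return session_start or user_asked_if_ai
    return False  # no chatbot-specific disclosure rule assumed elsewhere
```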

Beyond basic disclosure, several states have enacted safety protocols specifically targeting suicide risk and self-harm. New York’s S-3008C prohibits offering AI companion services without making “reasonable efforts” to detect suicidal ideation and direct users to crisis resources. This requirement reflects the painful reality that companion chatbots, particularly those designed to provide emotional support, may become primary points of contact for individuals in mental health crises — and that AI systems must be equipped to respond appropriately to these situations rather than continuing conversational patterns that could prove harmful.

Utah’s SB 452 introduces additional accountability measures, including a prohibition on product promotion during mental health chatbot conversations unless clearly labeled as advertising. This provision addresses concerns that chatbot platforms might exploit vulnerable users’ emotional states for commercial purposes. The law also creates an affirmative defense for chatbot suppliers who maintain governance policies — a mechanism designed to incentivize responsible development practices while providing legal certainty for companies that invest in safety measures.

The definitional landscape for chatbot regulation remains fragmented, however, with states using a wide variety of terms including “AI companion,” “companion chatbot,” “mental health chatbot,” “artificial intelligence chatbot,” and simply “bot.” This terminological diversity creates potential compliance challenges for companies operating across multiple states and suggests that definitional harmonization may be needed as chatbot regulation matures.

Frontier Model and Generative AI Rules

The regulation of frontier AI models — the most powerful and potentially risky AI systems — represents the cutting edge of state AI legislation. Two landmark bills advanced in 2025, both centered on preventing “catastrophic risks” from the most capable AI systems. California’s Transparency in Frontier AI Act (SB 53) and New York’s RAISE Act (S 6453) share a common approach: establishing regulatory obligations triggered by technical thresholds related to computational power and development costs.

California’s SB 53 regulates “frontier developers” whose models meet two criteria: training compute exceeding 10^26 integer or floating-point operations and annual gross revenues exceeding $500 million. The law requires these developers to maintain written safety and security protocols, publish annual public transparency reports, and implement whistleblower protections for employees who report safety concerns. New York’s RAISE Act applies to “large developers” using a similar compute threshold of 10^26 operations combined with a compute cost threshold exceeding $100 million.
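Because both laws turn on bright-line numeric tests, the threshold analysis reduces to two boolean checks. Here is a minimal sketch of those tests as described above, assuming the simplified trigger conditions given in this paragraph rather than the bills’ full statutory definitions of training compute and compute cost:

```python
OPS_THRESHOLD = 10**26  # training operations (integer or floating-point)

def covered_by_ca_sb53(training_ops: int, annual_revenue_usd: int) -> bool:
    """California SB 53 'frontier developer' test: both criteria must be met."""
    return training_ops > OPS_THRESHOLD and annual_revenue_usd > 500_000_000

def covered_by_ny_raise(training_ops: int, compute_cost_usd: int) -> bool:
    """New York RAISE Act 'large developer' test: compute plus compute cost."""
    return training_ops > OPS_THRESHOLD and compute_cost_usd > 100_000_000
```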

These 2025 bills represent a significant evolution from their 2024 predecessors. California’s S.B. 1047, which Governor Newsom vetoed in 2024, included more aggressive requirements such as third-party audit mandates and “full model shutdown” capabilities. The 2025 versions streamline requirements and focus on transparency and internal governance rather than external enforcement mechanisms. Similar frontier model bills were introduced in Rhode Island, Michigan, and Illinois, suggesting that this regulatory approach is gaining traction beyond its initial proponents.

Generative AI content labeling has also progressed substantially. The majority of generative AI bills focus on two complementary approaches: user-facing disclosures that inform individuals when they encounter AI-generated content, and technical provenance tools such as watermarking systems that embed traceability metadata within AI-generated outputs. California’s AB 853 addresses provenance data and watermarking requirements, while several states mandate consumer warnings about potential inaccuracies in AI-generated content. New York’s S 934, for example, requires “clear and conspicuous” notices warning users about possible inaccuracies in AI outputs.
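The provenance concept can be pictured as a record that ties disclosure metadata to a cryptographic hash of the generated output. The sketch below is a toy illustration of that idea only: it assumes a bare JSON “sidecar” record, whereas production provenance tooling of the kind AB 853 contemplates typically embeds signed manifests (for example, C2PA-style content credentials) in the content itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(generated_text: str, model_name: str) -> str:
    """Build a minimal provenance record for a piece of AI-generated text.

    Toy sketch of the traceability idea behind provenance laws; real systems
    would embed signed, standardized manifests rather than bare JSON.
    """
    record = {
        "generator": model_name,                # which system produced the output
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(       # binds the record to the output
            generated_text.encode("utf-8")).hexdigest(),
        "ai_generated": True,                   # the user-facing disclosure flag
    }
    return json.dumps(record, indent=2)
```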

AI Liability Frameworks and Accountability

One of the most consequential developments in 2025 state AI legislation is the emergence of sophisticated liability frameworks that attempt to balance accountability with innovation incentives. The volume and complexity of affirmative defenses and rebuttable presumptions increased markedly compared to 2024, reflecting legislative efforts to create legal certainty for AI developers and deployers who invest in responsible governance practices.

Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA, HB 149) represents perhaps the most comprehensive state liability framework for AI. The law provides an affirmative defense for entities that cure violations of AI governance requirements and comply with recognized AI risk management frameworks, such as the NIST AI Risk Management Framework. It also establishes a rebuttable presumption of reasonable care for entities that demonstrate compliance with these frameworks, effectively rewarding companies that adopt industry-standard governance practices with enhanced legal protections.

California’s SB 813 introduces a novel approach by allowing certified third-party audits to serve as an affirmative defense in AI-related legal proceedings. This mechanism creates market incentives for independent AI auditing services while giving companies a concrete, achievable path to legal protection. Arkansas’s HB 1876 addresses a different dimension of AI liability entirely, granting content ownership rights to individuals who provide input to generative AI tools — a legislative response to the growing debate about intellectual property rights in the age of AI-generated content.

California’s AB 316 takes the opposite approach for certain scenarios, clarifying that developers have no defense in tort claims when AI systems “autonomously” cause harm. This provision establishes that the creation of highly autonomous AI systems does not eliminate the developer’s legal responsibility for the outcomes those systems produce — an important precedent as AI agents become increasingly capable of independent action.

Connecticut’s SB 1295 demonstrates how liability frameworks are being integrated into existing legal structures rather than created from scratch, updating the state’s comprehensive data privacy law to address AI-specific accountability concerns. This integration approach has the advantage of leveraging established enforcement mechanisms and legal precedents rather than creating entirely new regulatory infrastructure.


Innovation Sandboxes and Right-to-Compute Laws

Not all state AI legislation is restrictive. A significant strand of 2025 legislation focuses on actively supporting AI innovation through regulatory sandboxes, liability protections, and affirmative rights to develop and use AI technology. These innovation-friendly measures reflect a bipartisan recognition that state-level AI regulation must support economic growth alongside consumer protection.

Regulatory sandboxes — controlled environments where companies can test AI innovations under relaxed regulatory requirements with government oversight — gained significant momentum in 2025. Texas and Delaware enacted new sandbox provisions, while Utah established the first official sandbox agreement under its pioneering 2024 AI Policy Act. Delaware’s H.J.R. 7 is particularly notable as the country’s first regulatory sandbox dedicated specifically to agentic AI, the emerging category of AI systems capable of autonomously understanding, planning, and executing complex tasks.

Montana’s SB 212 takes innovation protection further with a “right to compute” provision that prohibits the state from restricting AI use or development without demonstrating a “compelling government interest.” This language, borrowed from constitutional law’s strict scrutiny framework, establishes an exceptionally high bar for state interference with AI development. The same law includes risk management requirements for AI used in critical infrastructure but allows compliance with federal requirements to satisfy state obligations, reducing the compliance burden for companies already meeting federal standards.

The emergence of these innovation-focused measures reflects the influence of the White House’s America’s AI Action Plan, released in July 2025, which specifically recommended regulatory sandboxes and streamlining bureaucratic hurdles for AI development. State legislatures have drawn direct inspiration from this federal guidance, creating an unusual dynamic where federal policy recommendations are being implemented through state legislation rather than federal law.

Partisan Dynamics and Enforcement Trends

The partisan dynamics of state AI legislation reveal a more nuanced picture than might be expected in America’s polarized political environment. Democratic legislators introduced more than 75% of all AI-related bills in 2025, yet nearly 41% of AI bills signed into law were introduced by Republicans. Before California’s large wave of enrolled bills shifted the numbers, closer to 75% of enacted AI measures had been Republican-led.

This apparent paradox reflects fundamentally different regulatory priorities across party lines. Republican-led bills tended to focus on liability protections, government use requirements, and innovation-friendly mechanisms, while Democratic-led bills prioritized transparency obligations and consumer protections. Notably, several Republican-led states adopted elements reminiscent of EU AI Act “prohibited AI practices” within their government-use laws — Texas’s HB 149 and Montana’s HB 178 both include provisions restricting certain government AI applications that mirror EU prohibitions on social scoring and certain biometric identification uses.

Colorado’s definitional framework, which focuses on “high-risk” AI systems and “consequential decisions,” proved quietly influential across party lines despite the political challenges the original Colorado AI Act faced. The framework’s terminology and analytical structure appeared in bills across the political spectrum, suggesting that while the regulatory ambition of comprehensive frameworks may be politically untenable, the conceptual tools they develop can reshape AI governance discourse more broadly.

On the enforcement front, 2025 saw significant expansion of state attorney general investigative powers related to AI. Texas’s TRAIGA and Virginia’s HB 2094 (which was ultimately vetoed) both provided civil investigative demand (CID) powers to state attorneys general. Under TRAIGA, the AG can demand broad information including data sources, model development processes, and safeguards — even when these items are not specifically required under the law’s substantive provisions. This creates a powerful investigative tool that extends well beyond the law’s explicit regulatory requirements.

Whistleblower protections represent another important enforcement trend. California’s SB 53 protects employees at frontier model laboratories who report safety concerns involving “critical risks,” while New York’s S 1169 prevents companies from restricting employees’ ability to disclose violations of high-risk AI testing and transparency requirements. These provisions recognize that effective AI governance often depends on information from inside AI development organizations that might not otherwise become public.

2026 Outlook: Agentic AI and Algorithmic Pricing

As the 2025 legislative session closes, several emerging issues are already shaping the agenda for 2026 state AI legislation. The FPF report identifies three forward-looking challenges that will likely dominate the next legislative cycle: definitional uncertainty, agentic AI governance, and algorithmic pricing regulation.

Definitional challenges remain persistent across every category of AI legislation. While most state bills build on the OECD baseline definition, state-specific adaptations create a fragmented landscape. Frontier and foundation model definitions vary between states — California and New York use different cost thresholds, for example — and generative AI definitions diverge on technical qualifiers. The chatbot space is the most fragmented, with states using at least five different terms to describe essentially similar technologies. This definitional diversity complicates compliance for companies operating across multiple jurisdictions and may ultimately require either federal harmonization or interstate compacts.
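One way to picture the compliance problem is as a statute-by-statute term matrix that a single product must be tested against. In the toy sketch below, the term pairings are drawn from the provisions discussed earlier (Utah’s SB 452 for mental health chatbots, New York’s S-3008C for AI companions, California’s SB 243 for companion chatbots) and are illustrative simplifications, not a verified survey of statutory definitions.

```python
# Illustrative pairings based on the provisions discussed in this article;
# real compliance work starts from the statutory definitions themselves.
STATUTORY_TERMS = {
    "UT SB 452": "mental health chatbot",
    "NY S-3008C": "AI companion",
    "CA SB 243": "companion chatbot",
}

def potentially_applicable(product_labels: set[str]) -> list[str]:
    """List statutes whose (simplified) defined term matches a product label."""
    return [statute for statute, term in STATUTORY_TERMS.items()
            if term in product_labels]

# The same app can trigger multiple, differently worded regimes at once:
print(potentially_applicable({"companion chatbot", "AI companion"}))
# -> ['NY S-3008C', 'CA SB 243']
```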

Agentic AI — systems capable of autonomously understanding, planning, and executing complex tasks across multiple steps — represents the most significant governance challenge on the horizon. Delaware’s dedicated agentic AI sandbox signals legislative awareness that these systems pose fundamentally different challenges than conversational AI or analytical tools. When an AI agent can independently browse the internet, make purchases, send communications, and execute transactions, existing regulatory frameworks designed for tools that require human initiation of each action become inadequate. The question of liability when an AI agent causes harm through a chain of autonomous decisions is one that current legislation has barely begun to address.

Algorithmic pricing has emerged as a new frontier in AI regulation, driven by concerns that AI systems are being used to set personalized prices based on individual consumer data or to coordinate pricing among competitors. New York’s S-3008C requires disclosure of “personalized algorithmic pricing,” while California bills would prohibit pricing based on personal information from surveillance technology (AB 446) and ban “price-setting algorithms” used by competitors sharing nonpublic data (SB 384). Similar measures were introduced in Colorado and Minnesota, suggesting this issue will gain legislative momentum in 2026.

The broader trajectory of state AI legislation points toward a continuing evolution from broad, aspirational frameworks to targeted, enforceable rules addressing specific AI applications and risks. This maturation process, while sometimes frustrating for advocates of comprehensive regulation, reflects the practical realities of legislating in a rapidly evolving technological environment. The states that prove most successful at balancing innovation support with meaningful consumer protection will likely serve as models for eventual federal legislation — continuing the American tradition of state-level policy experimentation that has shaped governance in areas from environmental protection to data privacy.


Frequently Asked Questions

How many state AI bills were introduced in 2025?

In 2025, 210 AI-related bills were introduced across 42 states that could directly or indirectly affect private-sector AI development and deployment. Other trackers estimate over 1,000 total AI-related bills when including broader definitions. Only about 9% of tracked bills were enrolled or enacted into law.

What are the main categories of state AI legislation?

State AI legislation falls into four main categories: use and context-specific bills targeting AI in high-risk decisions like healthcare and employment, technology-specific bills targeting generative AI and frontier models, liability and accountability bills defining legal responsibility for AI systems, and government use and strategy bills setting requirements for state agency AI adoption.

Which states are leading in AI regulation?

Texas enacted TRAIGA (HB 149), one of the most comprehensive state AI frameworks with affirmative defenses and AG investigative powers. California enrolled multiple AI bills including frontier model regulation (SB 53) and chatbot rules. Utah pioneered regulatory sandboxes and chatbot accountability frameworks. New York enacted companion chatbot regulation and algorithmic pricing disclosure.

How do state AI laws address healthcare AI?

Healthcare AI bills comprised nearly 9% of all state AI legislation in 2025. Four states enacted healthcare-specific AI laws focusing on limiting AI use by licensed professionals, especially in mental health. Common requirements include prohibiting AI from independently diagnosing patients, mandating disclosure when AI is used in patient communications, and requiring human oversight of AI-assisted treatment decisions.

What is an AI regulatory sandbox and which states have them?

An AI regulatory sandbox is a controlled environment where companies can test AI innovations under relaxed regulatory requirements with government oversight. Texas and Delaware enacted new sandbox provisions in 2025, while Utah established the first official sandbox agreement under its 2024 AI Policy Act. These sandboxes aim to balance innovation support with consumer protection.
