States create AI rules as federal regulation stalls. Here are their 4 priorities.

While Congress remains largely silent on artificial intelligence regulation, state governments across America are stepping into the void with unprecedented legislative activity. All 50 states introduced AI-related legislation in 2025, creating a complex regulatory landscape that businesses must now navigate.

This surge in state-level action follows Congress’s recent defeat of a proposed moratorium on state AI regulation, effectively giving states the green light to continue crafting their own rules. The result is a patchwork of regulations that, while complicating compliance efforts for AI developers, addresses critical gaps in privacy protection, civil rights, and consumer safeguards that federal lawmakers have yet to tackle.

Four key areas dominate state AI regulation efforts, each presenting distinct challenges and opportunities for businesses operating in the rapidly evolving artificial intelligence landscape.

1. Government use of artificial intelligence

State governments increasingly rely on predictive AI—systems that analyze historical data to forecast future outcomes—for everything from determining social services eligibility to making criminal justice recommendations. However, this widespread adoption of algorithmic decision-making carries significant hidden costs, particularly around fairness and accountability.

The primary concern centers on algorithmic bias, where AI systems systematically discriminate against certain groups based on race, gender, or other characteristics. When these biased systems influence government decisions about healthcare, housing, or criminal justice, the consequences can be severe and far-reaching.

Colorado’s Artificial Intelligence Act exemplifies the state response, requiring both AI developers and government agencies to disclose risks posed by systems involved in consequential decisions. The law mandates transparency about how these systems work and what safeguards exist to prevent discriminatory outcomes.

Montana took a different approach with its “Right to Compute” law, focusing on critical infrastructure protection. The legislation requires AI developers to adopt comprehensive risk management frameworks—structured approaches to identifying and addressing security and privacy vulnerabilities throughout the development process.

Several states have established dedicated oversight bodies to monitor government AI use. New York's SB 8755, for example, creates regulatory authorities specifically tasked with ensuring responsible AI deployment across state agencies, providing ongoing supervision rather than one-time compliance checks.

2. Healthcare artificial intelligence regulation

Healthcare represents the most active area of state AI regulation, with 34 states introducing over 250 AI-related health bills in the first half of 2025 alone. This legislative activity reflects both the tremendous potential and significant risks of AI in medical settings.

State healthcare AI regulations generally fall into four distinct categories, each addressing different aspects of the healthcare ecosystem:

Disclosure requirements mandate that AI system developers and healthcare organizations clearly inform patients when artificial intelligence influences their care. These laws ensure patients understand how AI affects diagnosis, treatment recommendations, or insurance decisions.

Consumer protection measures focus on preventing unfair discrimination and ensuring patients can challenge AI-driven medical decisions. For instance, if an AI system recommends against a particular treatment, patients must have clear pathways to contest that recommendation through human review.

Insurance oversight regulations govern how health insurers use AI to make coverage decisions. These rules prevent insurers from using AI systems to systematically deny claims or discriminate against patients with certain conditions, while requiring transparency in automated decision-making processes.

Clinical use standards regulate how healthcare providers deploy AI for patient diagnosis and treatment. These regulations ensure that AI systems meet appropriate safety and efficacy standards before being used in clinical settings, similar to how medical devices are regulated.

3. Facial recognition and surveillance technology

Facial recognition technology presents particularly complex challenges for state regulators, sitting at the intersection of privacy rights, civil liberties, and public safety. The technology's documented biases against people of color have made it a lightning rod for legislative action.

Groundbreaking research by computer scientists Joy Buolamwini and Timnit Gebru revealed that facial recognition systems consistently perform worse on darker skin tones, creating systematic disadvantages for Black individuals and other minorities. These biases stem from both the training data used to develop the systems and the lack of diversity among development teams.

The implications extend beyond technical accuracy. When law enforcement agencies use biased facial recognition systems for predictive policing—deploying officers based on algorithmic predictions of where crimes might occur—the result can perpetuate and amplify existing inequalities in the criminal justice system.

By the end of 2024, 15 states had enacted laws limiting facial recognition harms. These regulations typically require vendors to publish bias test reports demonstrating their systems’ performance across different demographic groups. They also mandate robust data management practices and require human review before taking action based on facial recognition results.
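To make the idea of a bias test report concrete, here is a minimal, hypothetical sketch of the kind of per-group metric such a report might summarize: a false-match rate computed separately for each demographic group. The data format, field names, and threshold are illustrative assumptions, not requirements drawn from any specific state law or vendor report.

from collections import defaultdict

def false_match_rates(results, threshold=0.5):
    """Return the false-match rate per demographic group.

    Each item in `results` describes one comparison of two face images:
      "group"        - demographic label supplied for auditing
      "score"        - similarity score from the face-matching system
      "same_person"  - True if the two images really show the same person
    """
    impostor_pairs = defaultdict(int)   # non-matching pairs seen, per group
    false_matches = defaultdict(int)    # non-matching pairs wrongly accepted

    for r in results:
        if not r["same_person"]:            # only impostor pairs matter here
            impostor_pairs[r["group"]] += 1
            if r["score"] >= threshold:     # system wrongly declared a match
                false_matches[r["group"]] += 1

    return {g: false_matches[g] / n for g, n in impostor_pairs.items()}

# Toy audit data; a real report would draw on thousands of labeled comparisons.
sample = [
    {"group": "A", "score": 0.62, "same_person": False},
    {"group": "A", "score": 0.31, "same_person": False},
    {"group": "B", "score": 0.48, "same_person": False},
    {"group": "B", "score": 0.12, "same_person": False},
]
print(false_match_rates(sample))   # {'A': 0.5, 'B': 0.0}

A report built this way makes any disparity between groups immediately visible, which is the kind of transparency these laws aim to require.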

Some states have gone further, restricting or banning facial recognition use in specific contexts like schools or public housing, recognizing that the technology’s current limitations make it inappropriate for certain applications.

4. Generative AI and foundation model oversight

The explosion of generative AI tools like ChatGPT has prompted states to address transparency and accountability in AI development, particularly around the massive datasets used to train these systems.

Foundation models—AI systems trained on enormous datasets that can be adapted to a wide range of downstream tasks—represent a particular regulatory challenge. Companies like OpenAI, Google, and Anthropic have been notoriously secretive about their training data, making it difficult for copyright holders to know whether their content was used without permission.

Utah’s Artificial Intelligence Policy Act initially required broad disclosure when generative AI systems interact with individuals, but lawmakers later narrowed the scope to situations involving advice-giving or sensitive information collection. This evolution reflects the ongoing challenge of crafting workable regulations for rapidly evolving technology.

California’s AB 2013 takes a more direct approach, requiring developers to publish detailed information about their training data on their websites. This transparency requirement helps copyright owners understand whether their content was used to train AI systems, potentially enabling legal challenges to unauthorized use.
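As an illustration of what a machine-readable training-data disclosure could look like, here is a hypothetical sketch. The field names below are assumptions made for the example; AB 2013 spells out its own list of required elements, and developers would follow the statute rather than this structure.

from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetDisclosure:
    name: str                     # dataset identifier
    source: str                   # where the data was obtained
    collection_period: str        # when the data was gathered
    contains_copyrighted: bool    # may include copyrighted works
    contains_personal_info: bool  # may include personal information

disclosure = DatasetDisclosure(
    name="example-web-crawl",
    source="https://example.com/crawl",
    collection_period="2023-2024",
    contains_copyrighted=True,
    contains_personal_info=True,
)

# Published as JSON on the developer's website.
print(json.dumps(asdict(disclosure), indent=2))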

These regulations reflect growing recognition that the current “black box” approach to AI development, where training processes remain largely opaque, is incompatible with accountability and fair use principles.

Federal response and future implications

The Trump administration’s AI Action Plan, announced in July 2025, introduces a new dynamic to state regulation efforts. The plan explicitly states that federal AI funding should not flow to states with “burdensome” AI regulations, potentially forcing states to choose between federal support and robust consumer protections.

This federal stance could significantly constrain state regulatory efforts, particularly for states that depend on federal funding for AI research initiatives or technology infrastructure projects. The administration’s definition of “burdensome” remains unclear, creating uncertainty for state lawmakers crafting new regulations.

Despite these federal pressures, state-level regulation appears likely to continue expanding. The regulatory patchwork, while creating compliance challenges for businesses, addresses real gaps in consumer protection, civil rights enforcement, and privacy safeguards that federal lawmakers have been unable or unwilling to tackle.

For businesses operating AI systems across multiple states, this regulatory landscape requires careful navigation and robust compliance frameworks. However, these state efforts also provide valuable testing grounds for regulatory approaches that could eventually inform more comprehensive federal legislation.

The current state-by-state approach may be imperfect, but it represents a pragmatic response to the urgent need for AI governance in an era of rapid technological advancement and federal legislative gridlock.

