AI legislation is advancing at the state level as Washington weighs a federal moratorium

State-level AI regulation is accelerating rapidly in the absence of federal action, with nearly 700 bills introduced in 2024 alone. This legislative surge reflects growing concern about AI risks and gaps in consumer protection, with approaches ranging from comprehensive frameworks to targeted measures addressing specific harms like deepfakes. However, a proposed 10-year national moratorium advancing in Washington threatens to halt this state-level innovation in AI governance, potentially creating regulatory gaps during a critical period of AI development and deployment.

The big picture: States are filling the federal regulatory void with diverse approaches to AI oversight, but a proposed moratorium in the budget reconciliation bill could derail these efforts for a decade.

  • Colorado's AI Act stands out as one of the most comprehensive state-level approaches, establishing requirements for AI system providers and deployers.
  • The sheer volume of state bills—nearly 700 in 2024 with more expected in 2025—demonstrates the urgency lawmakers feel about addressing AI risks.
  • This state-level experimentation is occurring against the backdrop of international developments like the European Union's AI Act, which provides a potential model for risk-based regulation.

Key details: State legislatures are focusing on specific AI use cases that present immediate risks to consumers and democratic processes.

  • Deepfake legislation has been particularly prominent, with states working to prevent the spread of deceptive AI-generated content.
  • Election integrity concerns have driven bills aimed at regulating AI’s role in political campaigns and voting processes.
  • Public sector AI use has also received significant legislative attention, with states establishing guidelines for government deployment of automated systems.

Behind the numbers: The proposed 10-year national moratorium would create significant regulatory gaps at a critical moment in AI development.

  • Without state-level protections, consumers could face increased exposure to AI-related harms like bias, discrimination, and privacy violations.
  • Businesses would navigate uncertain compliance requirements across different jurisdictions during the moratorium period.
  • State attorneys general, who have been active in enforcing existing consumer protection laws against AI harms, would see their authority curtailed.

Why this matters: The tension between state innovation and federal preemption represents a crucial governance question that will shape the future of AI in American society.

  • Effective AI regulation requires balancing innovation with protection against potential harms across diverse contexts and communities.
  • The outcome of this regulatory debate will determine who has authority to address emerging AI risks and how quickly safeguards can be implemented.
  • Without clear standards at some level of government, both consumers and businesses face uncertainty about rights, responsibilities, and remedies regarding AI systems.