
State-level AI regulation is accelerating rapidly in the absence of federal action, with nearly 700 bills introduced in 2024 alone. This legislative surge reflects growing concern about AI risks and consumer protection, with bills ranging from comprehensive frameworks to targeted measures addressing specific harms like deepfakes. However, a proposed 10-year federal moratorium threatens to halt this state-level innovation in AI governance, potentially creating regulatory gaps during a critical period of AI development and deployment.

The big picture: States are filling the federal regulatory void with diverse approaches to AI oversight, but a proposed moratorium in the budget reconciliation bill could derail these efforts for a decade.

  • Colorado's AI Act stands out as one of the most comprehensive state-level approaches, establishing requirements for AI system providers and deployers.
  • The sheer volume of state bills—nearly 700 in 2024 with more expected in 2025—demonstrates the urgency lawmakers feel about addressing AI risks.
  • This state-level experimentation is occurring against the backdrop of international developments like the European Union's AI Act, which provides a potential model for risk-based regulation.

Key details: State legislatures are focusing on specific AI use cases that present immediate risks to consumers and democratic processes.

  • Deepfake legislation has been particularly prominent, with states working to prevent the spread of deceptive AI-generated content.
  • Election integrity concerns have driven bills aimed at regulating AI’s role in political campaigns and voting processes.
  • Public sector AI use has also received significant legislative attention, with states establishing guidelines for government deployment of automated systems.

Behind the numbers: The proposed 10-year national moratorium would create significant regulatory gaps at a critical moment in AI development.

  • Without state-level protections, consumers could face increased exposure to AI-related harms like bias, discrimination, and privacy violations.
  • Businesses would still face uncertainty about which compliance requirements apply across jurisdictions during the moratorium period.
  • State attorneys general, who have been active in enforcing existing consumer protection laws against AI harms, would see their authority curtailed.

Why this matters: The tension between state innovation and federal preemption represents a crucial governance question that will shape the future of AI in American society.

  • Effective AI regulation requires balancing innovation with protection against potential harms across diverse contexts and communities.
  • The outcome of this regulatory debate will determine who has authority to address emerging AI risks and how quickly safeguards can be implemented.
  • Without clear standards at some level of government, both consumers and businesses face uncertainty about rights, responsibilities, and remedies regarding AI systems.
