Anthropic CEO opposes 10-year AI regulation ban in NYT op-ed

Anthropic's CEO has challenged a proposed Republican moratorium on state-level AI regulation, arguing that the rapid pace of AI development demands more nuanced policy approaches. The intervention from one of the major AI companies underscores the growing tension between federal preemption and state-level regulation of artificial intelligence.

The big picture: Anthropic CEO Dario Amodei has publicly opposed a Republican proposal that would block states from regulating artificial intelligence for a decade, calling it “too blunt an instrument” given AI’s rapid advancement.

Key details: The proposal, included in President Trump's tax cut bill, would preempt AI laws recently passed by numerous states.

  • A bipartisan group of attorneys general from states that have regulated high-risk uses of AI technology has already voiced opposition to the moratorium.
  • Amodei made his case in a New York Times opinion piece, warning the proposal would create “the worst of both worlds” with neither state regulatory authority nor federal policy as a backstop.

The alternative proposal: Instead of a blanket moratorium, Amodei advocates for federal transparency standards requiring AI developers to disclose their safety testing practices.

  • Such standards would mandate that companies developing powerful models adopt specific testing policies and publicly share their approaches to mitigating national security and other risks.
  • Developers would need to demonstrate their models were thoroughly safety-tested before public release.

What they’re saying: “A 10-year moratorium is far too blunt an instrument. AI is advancing too head-spinningly fast,” Amodei wrote in his opinion piece.

Between the lines: Major AI companies including Anthropic, OpenAI, and Google DeepMind already voluntarily release safety information, but Amodei suggests legislative incentives may become necessary.

  • As AI models become more powerful, corporate incentives for transparency could change, potentially requiring regulatory enforcement.
