Anthropic has become the first major tech company to endorse California’s S.B. 53, a bill that would establish the first broad legal requirements for AI companies in the United States. The legislation would mandate transparency measures and safety protocols for large AI developers, transforming voluntary industry commitments into legally binding requirements that could reshape how AI companies operate nationwide.
What you should know: S.B. 53 would create mandatory transparency and safety requirements specifically targeting the most advanced AI companies.
- The bill applies only to companies building cutting-edge models requiring massive computing power, with the strictest requirements reserved for those with annual revenues exceeding $500 million.
- Companies would be required to publicly share safety guidelines, conduct “catastrophic risk” assessments, and establish emergency reporting systems for critical safety incidents.
- The legislation strengthens whistleblower protections, creating pathways for employees to report severe risks that might otherwise go unreported.
The big picture: This represents a significant shift from voluntary industry commitments to mandatory legal requirements for AI safety and transparency.
- Major AI companies including Anthropic, OpenAI, Google, and Meta have already made voluntary commitments to assess risks and implement safety measures.
- Recent research shows AI models can help users execute cyberattacks and lower barriers to acquiring biological weapons, highlighting the need for formal oversight.
- The bill largely codifies existing voluntary practices while adding enforcement mechanisms and public accountability.
Why this matters: California’s legislation could set a national precedent for AI regulation, given the state’s central role in AI development.
- Because California is home to dozens of the world’s leading AI companies, its regulatory approach will likely influence AI development nationally and globally.
- The bill appears likely to pass, having received overwhelming support in both the California Assembly and Senate.
- California’s Legislature must cast its final vote by Friday night.
What they’re saying: Anthropic praised the bill’s balanced approach to safety and competition.
- “With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety,” Anthropic said in a statement.
- Sen. Scott Wiener, the bill’s sponsor, told NBC News: “Anthropic is a leader on AI safety, and we’re really grateful for the company’s support.”
- Dan Hendrycks, executive director of the Center for AI Safety, said: “Frontier AI companies have made many voluntary commitments for safety, often without following through. This legislation takes a small but important first step toward making AI safer.”
Industry pushback: Trade groups and some companies oppose the legislation, arguing it could harm U.S. competitiveness.
- The Consumer Technology Association warned that “California SB 53 and similar bills will weaken California and U.S. leadership in AI by driving investment and jobs to states or countries with less burdensome and conflicting frameworks.”
- OpenAI’s director of global affairs, Chris Lehane, reaffirmed the company’s preference for federal regulation: “America leads best with clear, nationwide rules, not a patchwork of state or local regulations.”
Learning from past failures: S.B. 53 incorporates lessons from last year’s vetoed S.B. 1047, which faced broader industry opposition.
- The previous bill would have required annual third-party audits and barred developers from releasing models that posed an “unreasonable risk” of causing critical harm.
- Gov. Gavin Newsom vetoed S.B. 1047, saying it would “slow the pace of innovation,” though proponents argued industry lobbying influenced the decision.
- Following the veto, Newsom convened a working group whose recommendations, which emphasized transparency over liability, were incorporated into S.B. 53.
Federal vs. state regulation debate: The legislation highlights ongoing tensions between state and federal approaches to AI governance.
- Anthropic acknowledged this tension but said federal inaction necessitates state-level action: “While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.”
- Sen. Wiener said: “Ideally we would have comprehensive, strong pro-safety, pro-innovation federal law in this space. But that has not happened, so California has a responsibility to act.”
- A recent federal spending package nearly included an amendment prohibiting states from passing AI-related legislation for 10 years, but the provision was removed in a late-night reversal.