
California’s AI bill veto: A win for innovation and open-source development: Governor Gavin Newsom’s decision to veto SB 1047, a bill that would have imposed strict regulations on AI development in California, has sparked mixed reactions from industry leaders and policy experts.

  • The vetoed bill would have required AI companies to implement “kill switches” for their models and adopt written safety protocols before training them, as well as undergo annual third-party safety audits.
  • It would have also granted California’s attorney general access to auditors’ reports and the right to sue AI developers.
  • Critics of the bill argued that it could have a chilling effect on AI development, particularly for smaller companies and open-source projects.

Industry reactions and competitive landscape: Many AI industry veterans and tech leaders have expressed support for Newsom’s decision, viewing it as a protection for innovation and open-source development.

  • Yann LeCun, chief AI scientist at Meta, described the veto as “sensible,” while prominent AI investor Marc Andreessen praised Newsom for siding with “California Dynamism, economic growth, and freedom to compute.”
  • Mike Capone, CEO of Qlik, emphasized the need to focus on the applications of AI models rather than the technology itself, suggesting that regulatory frameworks should prioritize safe and ethical usage.
  • Andrew Ng, co-founder of Coursera, characterized the veto as “pro-innovation” and beneficial for open-source development.

Expert analysis and implications: Policy experts and academics have weighed in on the potential consequences of the veto, highlighting both opportunities and challenges for the AI industry.

  • Dean Ball, an AI and tech policy expert at George Mason University’s Mercatus Center, argued that the bill’s model size thresholds were becoming outdated and would not have encompassed recent models like OpenAI’s o1.
  • Lav Varshney, associate professor at the University of Illinois, noted that the bill’s provisions on downstream uses and modifications of AI models could have hindered open-source innovation.
  • The veto may allow AI companies to proactively strengthen their safety policies and governance practices, according to Kjell Carlsson of Domino Data Lab and Navrina Singh of Credo AI.

Dissenting voices and concerns: Not all reactions to the veto have been positive, with some tech policy and safety groups expressing disappointment and concern.

  • Nicole Gill, co-founder of Accountable Tech, criticized the decision as a “massive giveaway to Big Tech companies” that could potentially threaten democracy, civil rights, and the environment.
  • The AI Policy Institute’s executive director, Daniel Colson, called the veto “misguided, reckless, and out of step” with the public’s demands for AI regulation.
  • These groups argue that California, home to many AI companies, is allowing AI development to proceed unchecked despite public concerns about the technology’s capabilities and potential risks.

Regulatory landscape and future outlook: The veto highlights the ongoing debate surrounding AI regulation in the United States and the challenges of balancing innovation with safety and ethical concerns.

  • Currently, there is no federal regulation specifically addressing generative AI in the United States, although some states have developed policies on AI usage.
  • President Biden’s October 2023 executive order on AI is the closest thing to federal policy, setting out how government agencies should use AI systems and asking AI companies to voluntarily submit models for evaluation.
  • The Biden administration has also expressed intentions to monitor open-weight models for potential risks.

Balancing innovation and responsibility: The veto of SB 1047 underscores the complex challenge of regulating AI development while fostering innovation and protecting open-source initiatives.

  • While the decision has been celebrated by many in the tech industry, it also raises questions about how to effectively address public concerns and potential risks associated with AI technology.
  • The coming months and years will likely see continued debate and policy discussions as stakeholders seek to strike a balance between technological progress and responsible AI development.
