California AI safety bill veto may give smaller AI models a chance to flourish

California’s AI bill veto: A win for innovation and open-source development: Governor Gavin Newsom’s decision to veto SB 1047, a bill that would have imposed strict regulations on AI development in California, has sparked mixed reactions from industry leaders and policy experts.

  • The vetoed bill would have required AI companies to implement “kill switches” for models, create written safety protocols, and undergo third-party safety audits before training models.
  • It would have also granted California’s attorney general access to auditors’ reports and the right to sue AI developers.
  • Critics of the bill argued that it could have a chilling effect on AI development, particularly for smaller companies and open-source projects.

Industry reactions and competitive landscape: Many AI industry veterans and tech leaders have expressed support for Newsom’s decision, viewing it as a protection for innovation and open-source development.

  • Yann LeCun, chief AI scientist at Meta, described the veto as “sensible,” while prominent AI investor Marc Andreessen praised Newsom for siding with “California Dynamism, economic growth, and freedom to compute.”
  • Mike Capone, CEO of Qlik, emphasized the need to focus on the applications of AI models rather than the technology itself, suggesting that regulatory frameworks should prioritize safe and ethical usage.
  • Andrew Ng, co-founder of Coursera, characterized the veto as “pro-innovation” and beneficial for open-source development.

Expert analysis and implications: Policy experts and academics have weighed in on the potential consequences of the veto, highlighting both opportunities and challenges for the AI industry.

  • Dean Ball, an AI and tech policy expert at George Mason University’s Mercatus Center, argued that the bill’s model size thresholds were becoming outdated and would not have encompassed recent models like OpenAI’s o1.
  • Lav Varshney, associate professor at the University of Illinois, noted that the bill’s provisions on downstream uses and modifications of AI models could have hindered open-source innovation.
  • The veto may allow AI companies to proactively strengthen their safety policies and governance practices, according to Kjell Carlsson of Domino Data Lab and Navrina Singh of Credo AI.

Dissenting voices and concerns: Not all reactions to the veto have been positive, with some tech policy and safety groups expressing disappointment and concern.

  • Nicole Gill, co-founder of Accountable Tech, criticized the decision as a “massive giveaway to Big Tech companies” that could threaten democracy, civil rights, and the environment.
  • The AI Policy Institute’s executive director, Daniel Colson, called the veto “misguided, reckless, and out of step” with the public’s demands for AI regulation.
  • These groups argue that California, home to many AI companies, is allowing AI development to proceed unchecked despite public concerns about the technology’s capabilities and potential risks.

Regulatory landscape and future outlook: The veto highlights the ongoing debate surrounding AI regulation in the United States and the challenges of balancing innovation with safety and ethical concerns.

  • Currently, there is no federal regulation specifically addressing generative AI in the United States, although some states have developed policies on AI usage.
  • President Biden’s executive order represents the closest thing to federal policy, setting guidelines for how government agencies use AI systems and asking AI companies to voluntarily submit models for evaluation.
  • The Biden administration has also expressed intentions to monitor open-weight models for potential risks.

Balancing innovation and responsibility: The veto of SB 1047 underscores the complex challenge of regulating AI development while fostering innovation and protecting open-source initiatives.

  • While the decision has been celebrated by many in the tech industry, it also raises questions about how to effectively address public concerns and potential risks associated with AI technology.
  • The coming months and years will likely see continued debate and policy discussions as stakeholders seek to strike a balance between technological progress and responsible AI development.
