The latest AI safety researcher to quit OpenAI says he’s ‘terrified’

OpenAI safety researcher Steven Adler left the company in mid-November 2024, citing grave concerns about the pace of artificial intelligence development and the risks posed by the race toward artificial general intelligence (AGI).

Key context: The departure comes amid growing scrutiny of OpenAI’s safety and ethics practices, particularly following the death of former researcher turned whistleblower Suchir Balaji.

  • Multiple whistleblowers have filed complaints with the SEC regarding allegedly restrictive nondisclosure agreements at OpenAI
  • The company faces increasing pressure over its approach to AI safety and development speed
  • Recent political developments include Trump’s promise to repeal Biden’s AI executive order, characterizing it as hindering innovation

Adler’s concerns: Having spent four years at OpenAI leading safety-related research and programs, Adler expressed deep apprehension about the current state of AI development.

  • He described the industry as stuck in a “bad equilibrium” where competition forces companies to accelerate development despite safety concerns
  • Adler emphasized that no lab currently has a solution to AI alignment
  • His concerns extend to personal life decisions, saying he now questions whether humanity will survive long enough for long-term plans to matter

Expert perspectives: Leading voices in the AI community have echoed Adler’s concerns about the risks associated with rapid AI development.

  • UC Berkeley Professor Stuart Russell warned that the AGI race is heading toward a cliff edge, with potential extinction-level consequences
  • The contrast between researchers’ concerns and industry leaders’ optimism is stark, with OpenAI CEO Sam Altman recently celebrating new ventures like Stargate

Recent developments: OpenAI continues to expand its offerings and partnerships despite internal safety concerns.

  • The company has launched ChatGPT Gov for U.S. government agencies
  • A new AI project called Stargate involves collaboration between OpenAI, SoftBank Group, and Oracle Corp.

Critical analysis: The growing divide between AI safety researchers and corporate leadership points to fundamental tensions in the industry’s approach to development. While companies push for rapid advancement and market dominance, those closest to the technology’s development are increasingly raising alarm bells about the potential consequences of unchecked progress. This disconnect may signal deeper structural issues in how AI development is governed and managed.

