The latest AI safety researcher to quit OpenAI says he’s ‘terrified’

OpenAI safety researcher Steven Adler left the company in mid-November 2024, citing grave concerns about the rapid pace of AI development and the risks of the race toward artificial general intelligence (AGI).

Key context: The departure comes amid growing scrutiny of OpenAI’s safety and ethics practices, particularly following the death of former researcher turned whistleblower Suchir Balaji.

  • Multiple whistleblowers have filed complaints with the SEC regarding allegedly restrictive nondisclosure agreements at OpenAI
  • The company faces increasing pressure over its approach to AI safety and development speed
  • Recent political developments include Trump’s promise to repeal Biden’s AI executive order, characterizing it as hindering innovation

Adler’s concerns: Having spent four years at OpenAI leading safety-related research and programs, Adler expressed deep apprehension about the current state of AI development.

  • He described the industry as stuck in a “bad equilibrium” where competition forces companies to accelerate development despite safety concerns
  • Adler emphasized that no lab currently has a solution to AI alignment
  • His concerns extend to personal life decisions: he has publicly questioned whether humanity will make it far enough for choices like where to raise a family or how much to save for retirement to matter

Expert perspectives: Leading voices in the AI community have echoed Adler’s concerns about the risks associated with rapid AI development.

  • UC Berkeley Professor Stuart Russell warned that the AGI race is heading toward a cliff edge, with potential extinction-level consequences
  • The contrast between researchers’ concerns and industry leaders’ optimism is stark, with OpenAI CEO Sam Altman recently celebrating new ventures like Stargate

Recent developments: OpenAI continues to expand its offerings and partnerships despite internal safety concerns.

  • The company has launched ChatGPT Gov for U.S. government agencies
  • A new AI project called Stargate involves collaboration between OpenAI, SoftBank Group, and Oracle Corp.

Critical analysis: The growing divide between AI safety researchers and corporate leadership points to fundamental tensions in the industry’s approach to development. While companies push for rapid advancement and market dominance, those closest to the technology’s development are increasingly raising alarm bells about the potential consequences of unchecked progress. This disconnect may signal deeper structural issues in how AI development is governed and managed.

