The latest AI safety researcher to quit OpenAI says he’s ‘terrified’

OpenAI safety researcher Steven Adler left the company in mid-November 2024, citing grave concerns about the rapid pace of artificial intelligence development and the risks associated with the artificial general intelligence (AGI) race.

Key context: The departure comes amid growing scrutiny of OpenAI’s safety and ethics practices, particularly following the death of former researcher turned whistleblower Suchir Balaji.

  • Multiple whistleblowers have filed complaints with the SEC regarding allegedly restrictive nondisclosure agreements at OpenAI
  • The company faces increasing pressure over its approach to AI safety and development speed
  • Recent political developments include Trump’s promise to repeal Biden’s AI executive order, characterizing it as hindering innovation

Adler’s concerns: Having spent four years at OpenAI leading safety-related research and programs, Adler expressed deep apprehension about the current state of AI development.

  • He described the industry as stuck in a “bad equilibrium” where competition forces companies to accelerate development despite safety concerns
  • Adler emphasized that no lab currently has a solution to AI alignment
  • His concerns extend to fundamental life decisions, such as where he might raise a future family and how much to save for retirement, given his doubts about humanity’s long-term prospects

Expert perspectives: Leading voices in the AI community have echoed Adler’s concerns about the risks associated with rapid AI development.

  • UC Berkeley Professor Stuart Russell warned that the AGI race is heading toward a cliff edge, with potential extinction-level consequences
  • The contrast between researchers’ concerns and industry leaders’ optimism is stark, with OpenAI CEO Sam Altman recently celebrating new ventures like Stargate

Recent developments: OpenAI continues to expand its offerings and partnerships despite internal safety concerns.

  • The company has launched ChatGPT Gov for U.S. government agencies
  • A new AI project called Stargate involves collaboration between OpenAI, SoftBank Group, and Oracle Corp.

Critical analysis: The growing divide between AI safety researchers and corporate leadership points to fundamental tensions in the industry’s approach to development. While companies push for rapid advancement and market dominance, the researchers closest to the technology are increasingly sounding alarms about the consequences of unchecked progress. This disconnect may signal deeper structural problems in how AI development is governed and managed.

