
OpenAI safety researcher Steven Adler left the company in mid-November 2024, citing grave concerns about the pace of artificial intelligence development and the risks posed by the race toward artificial general intelligence (AGI).

Key context: The departure comes amid growing scrutiny of OpenAI’s safety and ethics practices, particularly following the death of former researcher turned whistleblower Suchir Balaji.

  • Multiple whistleblowers have filed complaints with the SEC regarding allegedly restrictive nondisclosure agreements at OpenAI
  • The company faces increasing pressure over its approach to AI safety and development speed
  • Recent political developments include Trump’s repeal of Biden’s AI executive order, which he had characterized as hindering innovation

Adler’s concerns: Having spent four years at OpenAI leading safety-related research and programs, Adler expressed deep apprehension about the current state of AI development.

  • He described the industry as stuck in a “bad equilibrium” where competition forces companies to accelerate development despite safety concerns
  • Adler emphasized that no lab currently has a solution to AI alignment
  • His concerns extend to personal life decisions: he has said that when he thinks about where he might raise a future family or how much to save for retirement, he wonders whether humanity will even make it that far

Expert perspectives: Leading voices in the AI community have echoed Adler’s concerns about the risks associated with rapid AI development.

  • UC Berkeley Professor Stuart Russell warned that the AGI race is heading toward a cliff edge, with potential extinction-level consequences
  • The contrast between researchers’ concerns and industry leaders’ optimism is stark, with OpenAI CEO Sam Altman recently celebrating new ventures like Stargate

Recent developments: OpenAI continues to expand its offerings and partnerships despite internal safety concerns.

  • The company has launched ChatGPT Gov for U.S. government agencies
  • A new AI project called Stargate involves collaboration between OpenAI, SoftBank Group, and Oracle Corp.

Critical analysis: The growing divide between AI safety researchers and corporate leadership points to fundamental tensions in the industry’s approach to development. While companies push for rapid advancement and market dominance, those closest to the technology’s development are increasingly raising alarm bells about the potential consequences of unchecked progress. This disconnect may signal deeper structural issues in how AI development is governed and managed.
