OpenAI safety researcher Steven Adler left the company in mid-November 2024, citing grave concerns about the rapid pace of AI development and the risks of the race toward artificial general intelligence (AGI).
Key context: The departure comes amid growing scrutiny of OpenAI’s safety and ethics practices, particularly following the death of former researcher turned whistleblower Suchir Balaji.
- Multiple whistleblowers have filed complaints with the SEC regarding allegedly restrictive nondisclosure agreements at OpenAI
- The company faces increasing pressure over its approach to AI safety and development speed
- Recent political developments include Trump’s promise to repeal Biden’s AI executive order, which he characterized as hindering innovation
Adler’s concerns: After four years at OpenAI leading safety-related research and programs, Adler expressed deep apprehension about the current state of AI development.
- He described the industry as stuck in a “bad equilibrium” where competition forces companies to accelerate development despite safety concerns
- Adler emphasized that no lab currently has a solution to AI alignment
- His concerns extend to fundamental life decisions, which he says are shadowed by doubts about humanity’s long-term prospects
Expert perspectives: Leading voices in the AI community have echoed Adler’s concerns about the risks associated with rapid AI development.
- UC Berkeley Professor Stuart Russell warned that the AGI race is heading toward a cliff edge, with potential extinction-level consequences
- The contrast between researchers’ concerns and industry leaders’ optimism is stark, with OpenAI CEO Sam Altman recently celebrating new ventures like Stargate
Recent developments: OpenAI continues to expand its offerings and partnerships despite internal safety concerns.
- The company has launched ChatGPT Gov for U.S. government agencies
- A new AI project called Stargate involves collaboration between OpenAI, SoftBank Group, and Oracle Corp.
Critical analysis: The growing divide between AI safety researchers and corporate leadership points to fundamental tensions in the industry’s approach to development. While companies push for rapid advancement and market dominance, those closest to the technology are increasingly sounding alarms about the consequences of unchecked progress. This disconnect may signal deeper structural issues in how AI development is governed and managed.