OpenAI Is Making a New Safety Push, But Critics Demand Even More

OpenAI is working to make AI systems safer and more transparent, but critics say more oversight is still needed to ensure responsible AI development.

New research aims to improve AI transparency: OpenAI has unveiled a technique in which two AI models converse, with one explaining its reasoning to the other, in an effort to make the workings of AI systems more transparent and understandable to humans.

  • The research, tested on a model that solves math problems, encourages the AI to be more forthright and transparent in its explanations (a rough sketch of such a two-model exchange appears after this list).
  • OpenAI hopes this approach, part of its long-term AI safety research plan, will be adopted and expanded upon by other researchers.
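
The article does not include OpenAI's code, but a minimal, hypothetical sketch can show the shape of such a two-model exchange. The snippet below uses the OpenAI Python SDK; the model names, prompts, and the ask() helper are illustrative assumptions, not details from the research itself.

```python
# Hypothetical sketch of a two-model "explainer and checker" dialogue.
# Model names, prompts, and the ask() helper are illustrative assumptions,
# not OpenAI's actual research setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(model: str, system: str, user: str) -> str:
    """Send one chat turn to `model` and return the text of its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content


problem = "A train travels 120 miles in 2 hours. What is its average speed?"

# Explainer: a stronger model asked to show every step of its reasoning.
solution = ask(
    "gpt-4o",
    "Solve the math problem and explain each step plainly, so that a "
    "weaker checker can verify the reasoning.",
    problem,
)

# Checker: a weaker model that judges whether the explanation holds up,
# rather than solving the problem independently.
verdict = ask(
    "gpt-4o-mini",
    "You are a skeptical checker. Say whether each step of the proposed "
    "solution follows from the previous one. Answer VALID or INVALID, "
    "with a one-sentence reason.",
    f"Problem: {problem}\n\nProposed solution:\n{solution}",
)

print("Explainer:\n", solution)
print("Checker:\n", verdict)
```

This single exchange only captures the conversational shape of the idea; the reported research builds on exchanges like this to push a stronger model toward explanations that a weaker checker can actually follow.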

OpenAI’s recent focus on AI safety: The company has been showcasing more of its AI safety work in recent weeks, following criticism that it may be prioritizing rapid AI development over safety concerns.

  • In May, it was reported that OpenAI had disbanded a team dedicated to studying long-term AI risks.
  • The company’s cofounder and key technical leader, Ilya Sutskever, recently departed amid internal tensions.

Critics call for more oversight and accountability: While acknowledging the importance of OpenAI’s new research, some experts argue that the work is incremental and does not address the need for greater oversight of AI companies.

  • Daniel Kokotajlo, a former OpenAI researcher, warns that “opaque, unaccountable, unregulated corporations” are racing to build advanced AI without adequate plans to control it.
  • Another source familiar with OpenAI’s inner workings emphasizes the need for processes and governance mechanisms that prioritize societal benefit over profit.

Analyzing deeper: OpenAI’s recent efforts to showcase its AI safety research appear to be a response to growing concerns about the company’s approach to AI development. The new transparency technique is a step in the right direction, but critics argue it is no substitute for external oversight. As AI systems become more capable and influential, the need grows for independent accountability and regulation so that the technology is developed in a way that prioritizes safety and benefits society as a whole.

OpenAI Touts New AI Safety Research. Critics Say It’s a Good Step, but Not Enough
