OpenAI Is Making a New Safety Push, But Critics Demand Even More
OpenAI is working to make AI systems safer and more transparent, but critics say more oversight is still needed to ensure responsible AI development.

New research aims to improve AI transparency: OpenAI has unveiled a technique in which two AI models engage in conversation, with one model explaining its reasoning to the other, in an effort to make the workings of AI systems more transparent and understandable to humans.

  • The research, tested on an AI model that solves math problems, encourages the AI to be more forthright and transparent in its explanations.
  • OpenAI hopes this approach, part of its long-term AI safety research plan, will be adopted and expanded upon by other researchers.

OpenAI’s recent focus on AI safety: The company has been showcasing more of its AI safety work in recent weeks, following criticism that it may be prioritizing rapid AI development over safety concerns.

  • In May, it was reported that OpenAI had disbanded a team dedicated to studying long-term AI risks.
  • The company’s cofounder and key technical leader, Ilya Sutskever, recently departed amid internal tensions.

Critics call for more oversight and accountability: While acknowledging the importance of OpenAI’s new research, some experts argue that the work is incremental and does not address the need for greater oversight of AI companies.

  • Daniel Kokotajlo, a former OpenAI researcher, states that “opaque, unaccountable, unregulated corporations” are racing to build advanced AI without adequate plans for control.
  • Another source familiar with OpenAI’s inner workings emphasizes the need for processes and governance mechanisms that prioritize societal benefit over profit.

Analyzing deeper: OpenAI’s recent efforts to showcase its AI safety research appear to be a response to growing concerns about the company’s approach to AI development. While the new technique for improving AI transparency is a step in the right direction, critics argue that it is not enough to ensure responsible AI development. As AI systems become more advanced and influential, there is a growing need for external oversight, accountability, and regulation to ensure that the technology is developed in a way that prioritizes safety and benefits society as a whole.

Source: OpenAI Touts New AI Safety Research. Critics Say It’s a Good Step, but Not Enough
