OpenAI Is Making a New Safety Push, But Critics Demand Even More

OpenAI is working to make AI systems safer and more transparent, but critics say more oversight is still needed to ensure responsible AI development.

New research aims to improve AI transparency: OpenAI has unveiled a technique in which two AI models engage in conversation, with one explaining its reasoning to the other, in an effort to make the inner workings of AI systems more legible to humans.

  • The research, tested on a math problem-solving AI model, encourages the AI to be more forthright and transparent in its explanations.
  • OpenAI hopes this approach, part of its long-term AI safety research plan, will be adopted and expanded upon by other researchers.

OpenAI’s recent focus on AI safety: The company has been showcasing more of its AI safety work in recent weeks, following criticism that it may be prioritizing rapid AI development over safety concerns.

  • In May, it was reported that OpenAI had disbanded a team dedicated to studying long-term AI risks.
  • The company’s cofounder and key technical leader, Ilya Sutskever, recently departed amid internal tensions.

Critics call for more oversight and accountability: While acknowledging the importance of OpenAI’s new research, some experts argue that the work is incremental and does not address the need for greater oversight of AI companies.

  • Daniel Kokotajlo, a former OpenAI researcher, states that “opaque, unaccountable, unregulated corporations” are racing to build advanced AI without adequate plans for control.
  • Another source familiar with OpenAI’s inner workings emphasizes the need for processes and governance mechanisms that prioritize societal benefit over profit.

Analyzing deeper: OpenAI’s recent efforts to showcase its AI safety research appear to be a response to growing concerns about the company’s approach to AI development. While the new technique for improving AI transparency is a step in the right direction, critics argue that it is not enough to ensure responsible AI development. As AI systems become more advanced and influential, there is a growing need for external oversight, accountability, and regulation to ensure that the technology is developed in a way that prioritizes safety and benefits society as a whole.

Source article: “OpenAI Touts New AI Safety Research. Critics Say It’s a Good Step, but Not Enough”
