OpenAI’s New o1 Model Is Already Sparking Safety Concerns

Groundbreaking AI model raises safety concerns: OpenAI’s new o1-preview model, designed for enhanced reasoning capabilities, has sparked warnings from AI experts about potential risks associated with increasingly capable artificial intelligence systems.

  • OpenAI’s o1-preview model, code-named ‘Project Strawberry’, is now available to ChatGPT Plus and Team subscribers and through the company’s API.
  • The model demonstrates significant improvements in problem-solving abilities across various fields, including mathematics, coding, and scientific disciplines.
  • OpenAI also introduced o1-mini, a faster and more affordable version of the reasoning model, particularly effective for coding applications.

Performance benchmarks: The new o1-preview model has shown remarkable improvements in various challenging tasks, outperforming its predecessors and human experts in some areas.

  • On a qualifying exam for the International Mathematics Olympiad, the model correctly solved 83% of the problems, compared with just 13% for its predecessor, GPT-4o.
  • The model reached the 89th percentile in Codeforces coding competitions.
  • Its performance in physics, chemistry, and biology benchmark tasks is reportedly similar to that of PhD students.

Expert warnings: AI pioneer Professor Yoshua Bengio and other experts have expressed concerns about the potential dangers of these advanced AI models.

  • Bengio warned that the improvement in AI’s reasoning and deception capabilities is particularly dangerous, emphasizing the need for regulatory solutions like California’s proposed AI safety bill SB 1047.
  • Dan Hendrycks, director of the Center for AI Safety, stated that the new model makes it clear that serious risk from AI is not a far-off, science-fiction fantasy.
  • Experts are calling for immediate action to implement safety measures and regulations for frontier AI models.

Proposed legislation: California’s SB 1047 bill aims to establish safety requirements for advanced AI systems that could potentially cause catastrophic harm.

  • The bill targets future AI models capable of causing severe harm, such as being used to create or deploy weapons of mass destruction or to inflict significant damage through cyberattacks.
  • To fall under the bill’s requirements, models must cost more than $100 million to train and demand substantial computing power.
  • The legislation would require developers to take reasonable care to prevent unreasonable risks, including building in a kill switch and establishing procedures to assess whether a model could behave harmfully.

Legal considerations: The implementation of AI safety legislation raises questions about determining causation between AI models and potential catastrophic harm.

  • Abigail Rekas, a law and policy scholar, explains that proving causation in potential lawsuits would require demonstrating that the harm would not have occurred without the AI model.
  • The speculative nature of potential harm from future AI systems makes it challenging to predict how difficult proving causation might be.

OpenAI’s safety measures: In response to safety concerns, OpenAI claims to have implemented various measures to address potential risks associated with its new models.

  • The company has developed a new safety training approach leveraging the models’ reasoning capabilities to better adhere to safety and alignment guidelines.
  • OpenAI reports improved resistance to “jailbreaking” attempts: on one of its hardest jailbreak tests, o1-preview scored 84 out of 100, compared to GPT-4o’s score of 22.
  • The company has increased its safety work, internal governance, and collaboration with federal government agencies.
  • OpenAI has formalized agreements with U.S. and U.K. AI Safety Institutes, granting them early access to a research version of the model for evaluation and testing.

Balancing innovation and safety: As AI capabilities continue to advance rapidly, the development of o1-preview highlights the ongoing challenge of weighing technological progress against responsible innovation and public safety.

  • The impressive performance of the new model in complex tasks demonstrates the potential benefits of advanced AI systems in various fields.
  • However, the concerns raised by experts underscore the need for proactive measures to mitigate potential risks associated with increasingly capable AI models.
  • The debate surrounding AI safety legislation and the implementation of regulatory frameworks is likely to intensify as these technologies continue to evolve.