OpenAI Unveils AI Progress Scale, Sparking Debate Over AGI Timeline and Safety Concerns

OpenAI has introduced an internal scale to track the progress of its AI systems toward artificial general intelligence, providing a framework for evaluating the capabilities of its models and setting milestones for future advancements.

Key takeaways from the OpenAI scale:

  • The scale ranges from Level 1 to Level 5, with each level representing a significant advancement in AI capabilities.
  • Current chatbots like ChatGPT are at Level 1, while OpenAI says it is nearing Level 2, defined as an AI system that can solve basic problems as well as a person with a PhD.
  • The highest level, Level 5, represents the achievement of artificial general intelligence (AGI): AI that can perform the work of entire organizations of people. (The full scale is sketched in code below.)
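
To make the structure of the scale concrete, here is a minimal Python sketch of the five levels as described above. It is purely illustrative: the enum names are paraphrases of the reported descriptions, Levels 3 and 4 are not characterized in this coverage, and OpenAI has not published its actual criteria or any code.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    # Names for Levels 1, 2, and 5 paraphrase the descriptions above;
    # Levels 3 and 4 are not characterized in this coverage.
    CHATBOTS = 1       # conversational AI such as today's ChatGPT
    REASONERS = 2      # solves basic problems as well as a person with a PhD
    LEVEL_3 = 3        # criteria not described in this coverage
    LEVEL_4 = 4        # criteria not described in this coverage
    ORGANIZATIONS = 5  # AGI: can perform the work of entire organizations

# Reported descriptions, keyed by level; gaps reflect undisclosed criteria.
DESCRIPTIONS = {
    AGILevel.CHATBOTS: "conversational systems like ChatGPT",
    AGILevel.REASONERS: "basic problem-solving at the level of a person with a PhD",
    AGILevel.ORGANIZATIONS: "performs the work of entire organizations (AGI)",
}

def describe(level: int) -> str:
    """Return the reported description for a level, if one exists."""
    return DESCRIPTIONS.get(AGILevel(level), "criteria not publicly disclosed")

print(describe(2))  # basic problem-solving at the level of a person with a PhD
```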

Implications for the AI industry: The introduction of this scale provides a clearer definition of progress in the development of advanced AI systems:

  • The scale could help OpenAI determine when its charter commitment applies: the charter pledges to assist, rather than compete with, a safety-conscious project that comes close to achieving AGI first.
  • It may also serve as a benchmark for evaluating the capabilities of AI models developed by other organizations.

Collaboration with Los Alamos National Laboratory: OpenAI’s partnership with the laboratory aims to explore the potential of advanced AI models in bioscientific research:

  • The goal is to test GPT-4o’s capabilities and establish a set of safety and other evaluation factors for the US government.
  • Eventually, other public and private organizations could test their own models against these factors.

Concerns about OpenAI’s AI safety practices: Recent developments have raised questions about the company’s commitment to AI safety:

  • In May, OpenAI dissolved its Superalignment safety team after the departure of its leaders, cofounder Ilya Sutskever and researcher Jan Leike.
  • Leike claimed that “safety culture and processes have taken a backseat to shiny products” at the company, although OpenAI denied this.

Analyzing deeper: While the introduction of the AI progress scale is a significant step toward defining and measuring advancements in AI capabilities, there are still many unanswered questions and potential concerns:

  • The specific criteria and methods used to assign AI models to the different levels on the scale have not been disclosed, leaving room for interpretation and potential inconsistencies.
  • The dissolution of OpenAI’s safety team raises concerns about the company’s commitment to responsible AI development, especially if it does indeed achieve AGI in the future.
  • The timeline for reaching AGI remains uncertain, with estimates varying widely among experts and even within OpenAI itself.

As OpenAI continues to push the boundaries of AI development, it will be crucial for the company to maintain transparency and prioritize safety measures to ensure that the pursuit of AGI aligns with the best interests of society as a whole.

Source: Here’s how OpenAI will determine how powerful its AI systems are
