Will OpenAI’s New AI Classification System Entice Investors or Fuel Unrealistic Expectations?

OpenAI recently unveiled a five-tier system to gauge its progress toward developing artificial general intelligence (AGI). The framework offers a way to understand AI advancement that may entice investors, but it also risks fueling unrealistic expectations.

OpenAI’s “Stages of Artificial Intelligence”: The company’s new classification system ranges from current AI capabilities to hypothetical future systems that could manage entire organizations:

  • Level 1 encompasses AI with conversational abilities, like the company’s current ChatGPT technology.
  • Level 2, dubbed “Reasoners,” would possess human-level problem-solving skills. OpenAI executives claim they are on the verge of reaching this milestone.
  • Higher levels describe increasingly potent hypothetical AI capabilities, with Level 5 envisioning AI managing entire organizations.

Progress and limitations: While OpenAI believes it is nearing a breakthrough with “reasoning” AI, the classification system remains a work in progress and describes largely hypothetical technology:

  • The company plans to gather feedback and potentially refine the levels over time.
  • There is currently no consensus in the AI research community on how to measure progress toward AGI or even if it is a well-defined, achievable goal.
  • The tech industry has a history of overpromising AI capabilities, and linear progression models like OpenAI’s risk fueling unrealistic expectations.

Comparing AI frameworks: OpenAI is not alone in attempting to quantify levels of AI capability; other researchers and companies have proposed frameworks of their own:

  • Google DeepMind researchers proposed a five-level framework for assessing AI advancement in November 2023.
  • Anthropic’s “AI Safety Levels” focus more on safety and catastrophic risks, while OpenAI’s levels track general capabilities.
  • However, any AI classification system raises questions about the feasibility of meaningfully quantifying AI progress and what constitutes advancement.

Broader implications: OpenAI’s new classification system is better understood as a communications tool aimed at investors than as a scientific measurement of progress:

  • The pursuit of AGI drives much of the hype surrounding OpenAI, despite the potentially disruptive impact such technology could have on society.
  • CEO Sam Altman has stated his belief that AGI could be achieved within this decade, and the ranking system aligns with his public messaging about preparing for the disruption AGI may bring.
  • However, without a clear consensus on defining and measuring AGI, the five-tier system remains largely aspirational and risks contributing to inflated expectations in the AI industry.