OpenAI’s Orion model is reportedly only somewhat better than GPT-4

The development of advanced language models appears to be reaching a plateau, with OpenAI’s latest model showing only modest improvements over its predecessor, highlighting broader challenges in AI advancement.

Key developments: OpenAI’s upcoming “Orion” model shows a smaller performance gain than the leap from GPT-3 to GPT-4, with its improvements concentrated mainly in language capabilities.

  • The new model may be more expensive to operate in data centers than previous versions
  • Performance improvements in areas like programming have been inconsistent
  • The quality gap between Orion and GPT-4 is notably smaller than expected

Training data challenges: OpenAI faces limitations in accessing high-quality training data, prompting new approaches to model development.

  • Most publicly available texts and data have already been utilized in training
  • The company has established a “Foundations Team” led by Nick Ryder to address these challenges
  • OpenAI is exploring synthetic data generated by existing AI models, including GPT-4 and the new “reasoning” model o1 (see the sketch after this list)
  • This synthetic data approach risks new models merely mimicking their predecessors
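For readers unfamiliar with the approach, the general idea behind model-generated training data can be sketched in a few lines with the public OpenAI Python client. This is an editorial illustration only; the prompt, model name, and output format are assumptions, not details of OpenAI’s internal pipeline.

```python
# Minimal sketch of synthetic-data generation with an existing model.
# Editorial illustration only -- the prompt, model name, and JSONL output
# layout are assumptions, not OpenAI's internal pipeline.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEED_TOPICS = ["linear algebra", "Rust ownership", "protein folding"]

def generate_examples(topic: str, n: int = 3) -> list[dict]:
    """Ask an existing model to write short Q&A pairs on a topic."""
    examples = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed stand-in for "an existing model"
            messages=[
                {"role": "system", "content": "Write one concise question and its answer."},
                {"role": "user", "content": f"Topic: {topic}"},
            ],
        )
        examples.append({"topic": topic, "text": response.choices[0].message.content})
    return examples

if __name__ == "__main__":
    with open("synthetic_data.jsonl", "w") as f:
        for topic in SEED_TOPICS:
            for example in generate_examples(topic):
                f.write(json.dumps(example) + "\n")
```

Because every example is sampled from an existing model, a dataset built this way inherits that model’s habits and blind spots, which is exactly the mimicry risk noted above.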

Industry-wide implications: The slowdown in language model progress extends beyond OpenAI to other major players in the AI industry.

Leadership perspective: Despite challenges, OpenAI’s leadership maintains an optimistic outlook on AI advancement.

  • CEO Sam Altman believes the path to artificial general intelligence (AGI) remains clear
  • Altman emphasizes creative use of existing models rather than raw performance gains
  • OpenAI researcher Noam Brown advocates focusing on inference optimization as a “new dimension for scaling” (see the sketch after this list)
  • This approach requires significant financial and energy resources
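As background, one widely discussed form of inference-time scaling is best-of-n sampling: draw several candidate answers and keep the one a scoring function prefers. The sketch below illustrates that general pattern with an assumed placeholder scorer; it is not a description of how o1 works internally.

```python
# Minimal sketch of inference-time scaling via best-of-n sampling.
# Editorial illustration of the general pattern; `score_answer` is an
# assumed placeholder, not an OpenAI API, and "gpt-4o" is an assumed model.
from openai import OpenAI

client = OpenAI()

def score_answer(question: str, answer: str) -> float:
    """Placeholder scorer: prefers shorter answers. In practice this would
    be a learned verifier, a reward model, or an automated checker."""
    return -len(answer)

def best_of_n(question: str, n: int = 8) -> str:
    """Sample n candidate answers and return the highest-scoring one."""
    candidates = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o",
            temperature=1.0,  # keep sampling diverse across candidates
            messages=[{"role": "user", "content": question}],
        )
        candidates.append(response.choices[0].message.content)
    return max(candidates, key=lambda answer: score_answer(question, answer))

print(best_of_n("What is 17 * 24? Answer with a number only."))
```

Every additional sample multiplies inference cost, which is why this route demands the significant financial and energy resources mentioned above.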

Technical criticisms: Some experts question the current approach to AI development and its marketing.

  • Google AI expert François Chollet challenges the effectiveness of scaling language models for mathematical tasks
  • Chollet argues that deep learning and large language models need to be combined with discrete search methods for mathematical problem-solving (a toy example follows this list)
  • He criticizes the use of “LLM” as a marketing term for unrelated AI advances
  • The integration of Gemini into AlphaProof is described as primarily marketing-driven
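To make the distinction concrete, the toy example below shows what “discrete search” means in a mathematical setting: exhaustively enumerating symbolic candidates and checking each one exactly, rather than sampling free-form text. It is an editorial sketch of the general idea, not code from Chollet or DeepMind.

```python
# Toy illustration of discrete search for an exact arithmetic answer --
# the kind of symbolic component Chollet argues pure LLM scaling lacks.
# Editorial sketch only, unrelated to AlphaProof's actual implementation.
from itertools import permutations, product
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def search_expression(numbers: list[int], target: int) -> str | None:
    """Exhaustively try left-to-right arithmetic expressions over the given
    numbers and return one that evaluates exactly to the target."""
    for nums in permutations(numbers):
        for ops in product(OPS, repeat=len(nums) - 1):
            value, expr = nums[0], str(nums[0])
            for op, n in zip(ops, nums[1:]):
                value = OPS[op](value, n)
                expr = f"({expr} {op} {n})"
            if value == target:
                return expr
    return None

# Prints one expression that evaluates exactly to 61, e.g. ((8 * 7) + 3) + 2.
print(search_expression([3, 7, 8, 2], 61))
```

The point of the contrast is that the checker here is exact: a candidate either hits the target or it does not, whereas a language model’s sampled output carries no such guarantee.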

Future considerations: As returns on investment in ever-larger models appear to diminish, the AI industry faces critical questions about the economic and environmental sustainability, and the overall effectiveness, of its current development approach.
