OpenAI’s Orion model is reportedly only somewhat better than GPT-4

The development of advanced language models appears to be reaching a plateau, with OpenAI’s latest model showing only modest improvements over its predecessor, highlighting broader challenges in AI advancement.

Key developments: OpenAI’s upcoming “Orion” model demonstrates smaller performance gains compared to the leap between GPT-3 and GPT-4, while showing improvements primarily in language capabilities.

  • The new model may be more expensive to run in data centers than its predecessors
  • Performance improvements in areas such as programming have been inconsistent
  • The quality gap between Orion and GPT-4 is notably smaller than expected

Training data challenges: OpenAI faces limitations in accessing high-quality training data, prompting new approaches to model development.

  • Most publicly available texts and data have already been utilized in training
  • The company has established a “Foundations Team” led by Nick Ryder to address these challenges
  • OpenAI is exploring synthetic data generated by existing AI models, including GPT-4 and the new “reasoning” model o1
  • This synthetic data approach risks new models merely mimicking their predecessors

Industry-wide implications: The slowdown in language model progress extends beyond OpenAI, affecting major players in the AI industry.

Leadership perspective: Despite challenges, OpenAI’s leadership maintains an optimistic outlook on AI advancement.

  • CEO Sam Altman believes the path to artificial general intelligence (AGI) remains clear
  • Altman emphasizes creative use of existing models rather than raw performance gains
  • OpenAI researcher Noam Brown supports focusing on inference optimization as a “new dimension for scaling”
  • This approach requires significant financial and energy resources

Technical criticisms: Some experts question the current approach to AI development and its marketing.

  • Google AI expert François Chollet challenges the effectiveness of scaling language models for mathematical tasks
  • Chollet argues that deep learning and large language models must be combined with discrete search methods to solve mathematical problems
  • He criticizes the use of “LLM” as a marketing term for unrelated AI advances
  • The integration of Gemini into AlphaProof is described as primarily marketing-driven

Future considerations: The AI industry faces critical questions about the sustainability and effectiveness of current development approaches, both economically and environmentally, as the returns on investment in larger models appear to diminish.
