OpenAI’s Deep Research AI model sets new record on industry’s hardest benchmark

OpenAI’s Deep Research tool has achieved a record-breaking 26.6% accuracy score on Humanity’s Last Exam, marking a significant improvement in AI performance on complex reasoning tasks.

Key breakthrough: OpenAI’s Deep Research has set a new performance record on Humanity’s Last Exam, a benchmark designed to test AI systems with some of the most challenging reasoning problems available.

  • The tool achieved 26.6% accuracy, a 183% improvement over the previous record in less than two weeks
  • OpenAI’s ChatGPT o3-mini scored 10.5% accuracy at standard settings and 13% at high-capacity settings
  • DeepSeek R1, the previous leader, had achieved 9.4% accuracy on text-only evaluation
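The 183% figure is consistent with the jump from DeepSeek R1’s prior 9.4% record:

```latex
\[
\frac{26.6\% - 9.4\%}{9.4\%} \approx 1.83 = 183\%
\]
```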

Technical context: Humanity’s Last Exam serves as a comprehensive benchmark for testing advanced AI capabilities and reasoning abilities.

  • The exam consists of extremely complex problems that challenge even human experts
  • The benchmark includes both general knowledge questions and complex reasoning problems
  • Current scores demonstrate both significant progress and substantial room for improvement in AI capabilities

Performance analysis: Deep Research’s superior performance comes with important caveats that affect result interpretation.

  • The tool’s web search capabilities give it an advantage over other AI models
  • Despite the dramatic improvement, a 26.6% accuracy rate would still be considered a failing grade by traditional educational standards
  • The rapid rate of improvement suggests potential for continued advancement in AI reasoning capabilities

Model comparison: The benchmark reveals significant performance variations among leading AI models.

  • ChatGPT o3-mini performs differently depending on its capacity setting, scoring higher in high-capacity mode
  • Deep Research’s search capabilities create an important distinction in how its results should be compared to other models
  • The benchmark provides a standardized way to measure progress in AI reasoning capabilities

Looking ahead: While early results show promise, major challenges remain in advancing AI reasoning capabilities to human-competitive levels.

  • The rapid improvement rate raises questions about how quickly AI models might approach higher accuracy levels
  • The 50% accuracy threshold remains a significant milestone yet to be achieved
  • The benchmark continues to serve as a critical tool for measuring progress in AI development

Future implications: The dramatic improvement in such a short timeframe suggests we may need to reassess expectations about the pace of AI advancement in complex reasoning tasks, while maintaining realistic perspectives about current limitations.

