OpenAI’s Deep Research AI model sets new record on industry’s hardest benchmark

OpenAI’s Deep Research tool has achieved a record-breaking 26.6% accuracy score on Humanity’s Last Exam, marking a significant improvement in AI performance on complex reasoning tasks.

Key breakthrough: OpenAI’s Deep Research has set a new performance record on Humanity’s Last Exam, a benchmark designed to test AI systems with some of the most challenging reasoning problems available.

  • The tool achieved 26.6% accuracy, a 183% improvement over the previous record in less than two weeks
  • OpenAI’s ChatGPT o3-mini scored 10.5% accuracy at standard settings and 13% at high-capacity settings
  • DeepSeek R1, the previous leader, had achieved 9.4% accuracy on text-only evaluation
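The 183% figure is consistent with measuring Deep Research’s 26.6% against the previous record of 9.4% held by DeepSeek R1; a quick arithmetic sketch, assuming that baseline:

```python
previous_record = 9.4   # DeepSeek R1, text-only evaluation (%)
deep_research = 26.6    # OpenAI Deep Research (%)

# Relative improvement over the assumed 9.4% baseline
improvement = (deep_research - previous_record) / previous_record * 100
print(f"{improvement:.0f}% improvement")  # prints "183% improvement"
```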

Technical context: Humanity’s Last Exam serves as a comprehensive benchmark for testing advanced AI capabilities and reasoning abilities.

  • The exam consists of extremely complex problems that challenge even human experts
  • The benchmark includes both general knowledge questions and complex reasoning problems
  • Current scores demonstrate both significant progress and substantial room for improvement in AI capabilities

Performance analysis: Deep Research’s superior performance comes with important caveats that affect how its results should be interpreted.

  • The tool’s web search capabilities give it an advantage over other AI models
  • Despite the dramatic improvement, a 26.6% accuracy rate would still be considered a failing grade by traditional educational standards
  • The rapid rate of improvement suggests potential for continued advancement in AI reasoning capabilities

Model comparison: The benchmark reveals significant performance variations among leading AI models.

  • ChatGPT o3-mini performs differently depending on its capacity setting, scoring higher at high-capacity settings
  • Deep Research’s web search capability means its results are not directly comparable to those of models evaluated without search access
  • The benchmark provides a standardized way to measure progress in AI reasoning capabilities

Looking ahead: While early results show promise, major challenges remain in advancing AI reasoning capabilities to human-competitive levels.

  • The rapid improvement rate raises questions about how quickly AI models might approach higher accuracy levels
  • The 50% accuracy threshold remains a significant milestone yet to be achieved
  • The benchmark continues to serve as a critical tool for measuring progress in AI development

Future implications: The dramatic improvement in such a short timeframe suggests we may need to reassess expectations about the pace of AI advancement in complex reasoning tasks, while maintaining realistic perspectives about current limitations.

