Mathematicians Question AI Performance at International Math Olympiad
OpenAI and Google DeepMind’s latest AI models earned unofficial gold medals at this year’s International Math Olympiad, solving five of the six problems posed to high school competitors from roughly 110 countries. While the AI companies celebrated the results as breakthrough achievements, mathematicians remain skeptical that these successes translate to real mathematical research capability.
Why mathematicians aren’t impressed: The AI models’ olympiad performance doesn’t reflect the demands of professional mathematical research, where problems can take years or decades to solve rather than hours.
- Emily Riehl, a mathematics professor at Johns Hopkins University, notes that olympiad problems differ significantly from frontier mathematical research questions, which can take “nine years, rather than nine hours, to solve.”
- Kevin Buzzard, a mathematics professor at Imperial College London, emphasized that even IMO gold medalists arrive at university “in no position to help any of the research mathematicians there.”
- Mathematical research expertise can take “more than one lifespan to acquire.”
The testing methodology concerns: Several experts questioned whether the AI results represent fair comparisons to human performance.
- The models employed a “best-of-n” strategy, generating many candidate solutions and submitting only the strongest, roughly equivalent to letting several students attempt a problem and turning in only their best answer (see the code sketch after this list).
- IMO president Gregor Dolinar stated the organization “cannot validate the methods used by the AI models, including the amount of compute used or whether there was any human involvement.”
- Terence Tao, a UCLA mathematician and IMO gold medalist, cautioned that an AI system’s apparent capability depends heavily on the methodology used to test it.
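The labs have not published their selection pipelines, so the following is only a minimal Python sketch of the general best-of-n pattern; `generate_solution` and `score_solution` are hypothetical placeholders for a model’s sampling step and whatever grader picks the winning answer.

```python
import random

def generate_solution(problem: str, seed: int) -> str:
    """Hypothetical stand-in for sampling one candidate solution from a model."""
    random.seed(seed)
    return f"candidate solution #{seed} for: {problem}"

def score_solution(solution: str) -> float:
    """Hypothetical stand-in for whatever grader or verifier rates a candidate."""
    return random.random()

def best_of_n(problem: str, n: int = 32) -> str:
    """Sample n candidates independently and keep only the highest-scoring one,
    much like several students attempting a problem and submitting only the
    strongest answer."""
    candidates = [generate_solution(problem, seed) for seed in range(n)]
    return max(candidates, key=score_solution)

if __name__ == "__main__":
    print(best_of_n("a hypothetical olympiad problem"))
```

The point of the sketch is that a single reported score can hide many discarded attempts, which is why critics say the comparison to an individual student working alone is not apples to apples.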
Where AI falls short in advanced math: Current language models struggle with cutting-edge mathematical concepts, often making subtle but critical errors.
- Riehl describes “vibe proving”—chatting with AI about mathematical conjectures—where models provide “clearly articulated arguments” that are correct for standard topics but “subtly wrong at the cutting edge.”
- Every model she tested made the same error about idempotents in weak infinite-dimensional categories, confidently asserting a claim that human experts know to be false.
- Since large language models fundamentally predict text based on training data, Riehl says she’ll “never trust an LLM to provide a mathematical proof that I can’t verify myself.”
The promise of formal verification: Proof assistants—non-AI software programs that verify mathematical logic—offer a more reliable path forward for AI-assisted mathematics.
- These tools can automatically check whether a logical argument proves the stated claim; the human mathematician only needs to confirm that the formal statement means what it is supposed to mean.
- Some of the AI contestants at this year’s olympiad, including the startup Harmonic and ByteDance, submitted answers written in the language of the Lean proof assistant so they could be checked automatically for errors (a small Lean example follows this list).
- Formal proofs provide unique trustworthiness compared to “reasoning” models that can “produce an argument that sounds logical but isn’t.”
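To make the idea concrete, here is a tiny Lean 4 example, a sketch using only the core library rather than any lab’s actual submission: the proof assistant accepts the file only if the argument really establishes the stated theorem, and rejects it otherwise.

```lean
-- A minimal Lean 4 illustration: the kernel mechanically checks that this
-- term proves the stated claim; an incorrect step would be rejected.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

A human reviewer only needs to confirm that the theorem statement says what they intend; the correctness of the proof itself is guaranteed by the checker.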
What they’re saying: Researchers emphasized the gap between test performance and practical mathematical capability.
- Sébastien Bubeck of OpenAI called the results a “moon landing moment,” while other mathematicians urged caution about the hype.
- “When mathematical accuracy matters, we should demand that AI-generated proofs are formally verifiable,” Riehl concluded.
The bottom line: While AI models showed impressive olympiad performance, the mathematical community sees this as just one test rather than evidence of research-level capability, highlighting the ongoing need for human expertise in advancing mathematical knowledge.