With AI models clobbering every benchmark, it's time for human evaluation

Actually, human, stick around for a minute, could ya?
AI evaluation is shifting from automated benchmarks to human assessment, signaling a new era in how we measure AI capabilities. As traditional accuracy tests like GLUE, MMLU, and “Humanity’s Last Exam” become increasingly inadequate for measuring the true value of generative AI, researchers and companies are turning to human judgment to evaluate AI systems in ways that better reflect real-world applications and needs.
The big picture: Traditional AI benchmarks have become saturated as models routinely achieve near-perfect scores without necessarily demonstrating real-world usefulness (a toy scoring sketch follows this list).
- “We’ve saturated the benchmarks,” acknowledged Michael Gerstenhaber, head of API technologies at Anthropic, during a Bloomberg conference on AI in November.
- Researchers publishing in The New England Journal of Medicine this week argued that “When it comes to benchmarks, humans are the only way.”
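To make “saturated” concrete, here is a toy sketch of how a traditional multiple-choice benchmark is scored; it is not any vendor’s actual harness, and the models, answer key, and scores are invented. The point is that once every strong model sits near the ceiling, the metric stops separating them.

```python
# Toy illustration of traditional benchmark scoring (MMLU-style accuracy).
# All models, answers, and scores below are hypothetical.

def benchmark_accuracy(model_answers: list[str], answer_key: list[str]) -> float:
    """Fraction of multiple-choice questions the model gets right."""
    correct = sum(m == g for m, g in zip(model_answers, answer_key))
    return correct / len(answer_key)

answer_key = ["B", "D", "A", "C", "B", "A", "D", "C", "B", "A"]
model_x    = ["B", "D", "A", "C", "B", "A", "D", "C", "B", "A"]  # hypothetical model
model_y    = ["B", "D", "A", "C", "B", "A", "D", "C", "B", "C"]  # hypothetical model

print(benchmark_accuracy(model_x, answer_key))  # 1.0
print(benchmark_accuracy(model_y, answer_key))  # 0.9
# Both sit near the ceiling, so the score says little about which model
# is actually more useful on real-world tasks.
```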
Why this matters: As AI capabilities expand, how we evaluate these systems directly impacts their development trajectory and practical applications.
- Medical AI exemplifies this challenge, where models easily ace traditional exams like MIT’s MedQA but may fail to capture what matters in actual clinical practice.
Historical context: Human feedback has been integral to AI development, but its role is expanding beyond just training.
- ChatGPT’s development in 2022 relied heavily on “reinforcement learning from human feedback” (RLHF) as a training methodology; the core idea is sketched after this list.
- Now, human evaluation is becoming central to how companies demonstrate their models’ capabilities and superiority.
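For readers who want the mechanics behind that training step, here is a minimal, illustrative sketch of the preference-learning idea at the core of RLHF. The reward values are invented, and a real pipeline also includes a policy-optimization stage that this omits; it only shows how a human’s choice between two responses becomes a training signal.

```python
import math

# Illustrative RLHF preference step: a reward model is pushed to score the
# response a human preferred above the one the human rejected.
# The reward values below are made up for demonstration.

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss: low when the reward model agrees with the
    human's preference, high when it disagrees."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(round(preference_loss(2.0, -1.0), 3))   # 0.049 -> agrees with the human
print(round(preference_loss(-1.0, 2.0), 3))   # 3.049 -> disagrees, large penalty
```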
Industry trends: Major AI developers are increasingly highlighting human evaluations in their product launches (a sketch of how pairwise human ratings are typically aggregated follows this list).
- Google emphasized human evaluator ratings when unveiling its open-weight Gemma 3 model this month.
- OpenAI similarly highlighted human reviewer feedback when rolling out its latest GPT-4.5 model.
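As a hedged sketch of how such human evaluations are often aggregated: head-to-head preference votes can be turned into a leaderboard with an Elo-style update like the one below. The vote data and starting ratings are hypothetical, and real leaderboards use more elaborate statistics.

```python
# Sketch: turning blind, pairwise human votes into model ratings
# with a simple Elo-style update. All data here are hypothetical.

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one human preference vote."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * ((1.0 if a_won else 0.0) - expected_a)
    return rating_a + delta, rating_b - delta

ratings = {"model_a": 1000.0, "model_b": 1000.0}
votes = ["a", "a", "b", "a"]  # hypothetical human picks in side-by-side tests
for winner in votes:
    ratings["model_a"], ratings["model_b"] = elo_update(
        ratings["model_a"], ratings["model_b"], a_won=(winner == "a")
    )
print(ratings)  # model_a ends ahead after being preferred in 3 of 4 votes
```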
Looking ahead: Even new benchmark designs are incorporating human participation as a fundamental component.
- François Chollet, creator of the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), conducted a live study with over 400 members of the public to calibrate difficulty levels that make sense to humans (a rough sketch of that calibration idea follows this list).
- This integration of human assessment suggests that human involvement in AI training, development, and testing still has significant room to grow.
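One way such human calibration can work, offered only as a rough sketch (the task names, attempt counts, and threshold below are hypothetical, not ARC-AGI’s published methodology): record how many human testers solve each candidate task and keep the tasks whose human solve rate clears a floor, so difficulty is anchored to people rather than to models.

```python
# Hypothetical sketch of calibrating task difficulty from human solve data.
# Task names, attempt counts, and the threshold are all invented.

def keep_human_solvable(solve_counts: dict[str, tuple[int, int]],
                        min_rate: float = 0.1) -> list[str]:
    """Keep tasks whose human solve rate (solves / attempts) meets the floor."""
    return [task for task, (solves, attempts) in solve_counts.items()
            if attempts > 0 and solves / attempts >= min_rate]

# task -> (number of testers who solved it, number who attempted it)
study = {"task_01": (37, 120), "task_02": (15, 118), "task_03": (0, 125)}
print(keep_human_solvable(study))  # ['task_01', 'task_02'] under these numbers
```

The design point is that the filter references human performance only, so a task stays “hard” or “easy” relative to people no matter how model scores drift.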