AI leaderboard bias against open models, Big Tech favoritism revealed by researchers

A new study claims that LM Arena, a popular AI model ranking platform, employs practices that unfairly favor the large tech companies whose models rank near the top. The research details how models from companies like Google and Meta gain an edge through extensive pre-release testing options that aren’t equally available to open-source models, raising questions about the metrics and platforms the AI industry relies on to measure genuine progress.

The big picture: Researchers from Cohere Labs, Princeton, and MIT found that LM Arena allows major tech companies to test multiple versions of their AI models before publicly releasing only the highest-performing versions.

  • Meta reportedly tested 27 different versions of Llama-4 before selecting the specific version that appeared on the public leaderboard.
  • Google similarly tested 10 variants of its Gemini and Gemma models between January and March 2025.

Why this matters: LM Arena’s rankings have gained significant industry influence, with companies like Google highlighting their performance on the platform when releasing new models.

  • DeepSeek’s strong performance in the Chatbot Arena earlier this year helped elevate its status in the competitive LLM landscape.
  • The testing advantage creates an uneven playing field where proprietary models can cherry-pick their best performers while open models lack similar opportunities.
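To see why unlimited private testing skews rankings, consider a toy simulation: even if every variant of a model has the same underlying quality, each arena score is a noisy measurement, so publishing only the best of 27 variants systematically inflates the reported result. The sketch below is illustrative only; the true rating, noise level, and variant count of 27 (taken from the Meta example above) are assumptions, not figures reproduced from the study.

```python
import random
import statistics

# Toy illustration of the selection effect behind private variant testing:
# every "variant" here has identical true quality, but each arena score is
# a noisy measurement, so keeping only the best of 27 inflates the result.
# TRUE_RATING and NOISE_SD are assumed values, not figures from the study.

random.seed(0)

TRUE_RATING = 1200.0   # assumed shared underlying quality of all variants
NOISE_SD = 30.0        # assumed measurement noise in an arena-style score
TRIALS = 10_000

def observed_score() -> float:
    """One noisy arena-style measurement of the same underlying model."""
    return random.gauss(TRUE_RATING, NOISE_SD)

one_submission = [observed_score() for _ in range(TRIALS)]
best_of_27 = [max(observed_score() for _ in range(27)) for _ in range(TRIALS)]

print(f"mean score, single submission:   {statistics.mean(one_submission):.1f}")
print(f"mean score, best of 27 variants: {statistics.mean(best_of_27):.1f}")
```

The gap between the two averages is pure selection effect: no variant is actually better than any other, yet the publish-only-the-best strategy reliably reports a higher number.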

Key details: LM Arena works by having users compare outputs from two unidentified AI models and vote on which they prefer, with results aggregated into a public leaderboard (a simplified sketch of that aggregation appears after the bullets below).

  • The platform originated as a research project at the University of California, Berkeley in 2023.
  • Google and OpenAI together account for over 34 percent of all model data collected on the platform.
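Arena-style leaderboards typically convert these head-to-head votes into ratings with an Elo-style or Bradley–Terry model. The following is a minimal Elo-style sketch of that aggregation step, assuming hypothetical model names, a fixed K-factor, and a tiny vote log; it is not LM Arena’s actual implementation.

```python
from collections import defaultdict

# Minimal Elo-style aggregation of pairwise votes into a leaderboard.
# Illustrative only: model names, the K-factor, and the vote log are
# hypothetical, and this is not LM Arena's actual implementation.

K = 32          # rating update step size (assumed)
BASE = 1000.0   # starting rating for every model (assumed)

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def record_vote(ratings, model_a, model_b, winner):
    """Apply one head-to-head vote; winner is 'a', 'b', or 'tie'."""
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    exp_a = expected_score(ratings[model_a], ratings[model_b])
    ratings[model_a] += K * (score_a - exp_a)
    ratings[model_b] += K * ((1.0 - score_a) - (1.0 - exp_a))

# Hypothetical vote log: (model shown as A, model shown as B, user's pick).
votes = [
    ("model-x", "model-y", "a"),
    ("model-y", "model-z", "tie"),
    ("model-x", "model-z", "a"),
]

ratings = defaultdict(lambda: BASE)
for a, b, w in votes:
    record_vote(ratings, a, b, w)

# The public leaderboard is simply the models sorted by rating.
for model, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {rating:.1f}")
```

Because every vote shifts ratings relative to the sampled opponent, which models get compared, and how often, directly shapes the final ordering, which is why the study’s concerns about sampling and withdrawal policies matter.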

What they’re saying: LM Arena has responded that its pre-release testing features were not kept secret.

  • The platform operators indicated they will work to improve their sampling algorithm to create more variety in model comparisons.

Researchers’ recommendations: The study suggests several remedies to make the LM Arena platform more equitable.

  • Limiting how many model variants a company can submit and withdraw before finalizing a public release.
  • Displaying results for all model versions, not just the final ones.
  • Implementing fair sampling algorithms to ensure open models appear at comparable rates to commercial ones.
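As a rough illustration of the last recommendation, “fair sampling” can be as simple as drawing each head-to-head pair uniformly from the set of active models rather than weighting comparisons toward certain providers. The sketch below uses hypothetical model names and is only one possible notion of fairness; the point is that appearance rates are a direct consequence of the sampling rule.

```python
import random

# Toy illustration of "fair sampling": every active model, open or
# commercial, is drawn with equal probability for each head-to-head battle.
# Model names are hypothetical.

active_models = [
    "open-model-a",
    "open-model-b",
    "commercial-model-c",
    "commercial-model-d",
]

def sample_battle(models):
    """Pick two distinct models uniformly at random for one comparison."""
    return tuple(random.sample(models, 2))

# Simulate many battles and check how often each model appears.
appearances = {m: 0 for m in active_models}
N_BATTLES = 100_000
for _ in range(N_BATTLES):
    a, b = sample_battle(active_models)
    appearances[a] += 1
    appearances[b] += 1

for model, count in appearances.items():
    # With uniform sampling every model should appear in ~2/len(models)
    # of battles, i.e. at the same rate for open and commercial entries.
    print(f"{model}: {count / N_BATTLES:.3f}")
```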
