MIT’s Thermometer Method Efficiently Calibrates Large Language Models

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new method called Thermometer to efficiently calibrate large language models (LLMs) and improve their reliability across diverse tasks.

Key Takeaways: Thermometer addresses the challenges of calibrating LLMs, which can generate inaccurate responses and report confidence levels that do not match their actual accuracy:

  • The method involves building a smaller, auxiliary model that runs on top of an LLM to calibrate it, enabling better-calibrated responses on unseen tasks.
  • Thermometer is more efficient than other approaches, requiring less power-hungry computation while preserving the model’s accuracy.

Understanding the Problem: LLMs can be applied to a wide range of tasks, but traditional calibration methods are ineffective due to the models’ versatility:

  • Calibrating an LLM for one task using a traditional method might hurt its performance on another task.
  • Sampling from the model multiple times to obtain better-calibrated confidence is computationally expensive due to the billions of parameters in LLMs.

How Thermometer Works: The researchers leveraged a classical calibration method called temperature scaling to efficiently calibrate LLMs for new tasks:

  • Instead of fitting the temperature on a labeled validation dataset for each task, they train an auxiliary model that runs on top of an LLM and automatically predicts the temperature needed for calibration.
  • The Thermometer model is trained on labeled datasets from a few representative tasks, enabling it to generalize to new tasks in a similar category without additional labeled data (a minimal sketch follows this list).
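Temperature scaling, the classical technique Thermometer builds on, divides a model's output logits by a scalar temperature before the softmax, which softens or sharpens confidence without changing which answer is ranked highest. The PyTorch sketch below is a hypothetical illustration of that idea: ThermometerHead stands in for the small auxiliary model that predicts the temperature from the LLM's hidden features, and its layer sizes and names are assumptions rather than the researchers' actual architecture.

```python
# Minimal sketch of temperature scaling with a predicted temperature.
# ThermometerHead is a hypothetical stand-in for the auxiliary model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThermometerHead(nn.Module):
    """Hypothetical auxiliary model: maps the LLM's hidden features to a single
    positive temperature used to rescale the LLM's output logits."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Softplus(),  # keep the predicted temperature positive
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Average per-example predictions into one temperature for the task
        return self.mlp(features).mean()

def calibrate_logits(logits: torch.Tensor, temperature: torch.Tensor) -> torch.Tensor:
    """Classical temperature scaling: divide logits by T before the softmax.
    T > 1 lowers confidence, T < 1 raises it; accuracy is unchanged because
    the argmax of the logits does not move."""
    return F.softmax(logits / temperature, dim=-1)

# Toy usage with random tensors standing in for real LLM outputs.
hidden_dim, num_classes, batch = 768, 4, 8
features = torch.randn(batch, hidden_dim)   # LLM hidden states for a task
logits = torch.randn(batch, num_classes)    # LLM answer logits for that task

thermometer = ThermometerHead(hidden_dim)
temperature = thermometer(features)         # predicted without labels at test time
calibrated_probs = calibrate_logits(logits, temperature)
print(temperature.item(), calibrated_probs.argmax(dim=-1))
```

Because the temperature only rescales the logits, the model's answers stay the same; only the reported confidence changes, which is why the approach preserves accuracy.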

Broader Implications: Thermometer could help users identify situations where an LLM is overconfident about false predictions, helping them avoid deploying the model in scenarios where it may fail:

  • By providing a clear signal about a model’s accuracy and uncertainty, Thermometer enhances the reliability of LLMs across diverse applications (a toy illustration of using such a signal follows this list).
  • The method’s efficiency and ability to generalize to new tasks without labeled data make it a promising solution for the challenges of calibrating large language models.
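As a purely illustrative example of how a calibrated confidence signal might be consumed downstream, the snippet below flags answers whose calibrated confidence falls under a chosen threshold; the function name and threshold value are hypothetical, not part of the Thermometer method itself.

```python
# Hypothetical downstream use: defer low-confidence answers for review.
from typing import List, Tuple

def flag_low_confidence(
    answers: List[str],
    calibrated_confidences: List[float],
    threshold: float = 0.7,
) -> List[Tuple[str, float, bool]]:
    """Pair each answer with its calibrated confidence and a flag saying
    whether it should be deferred for review instead of returned directly."""
    return [
        (answer, confidence, confidence < threshold)
        for answer, confidence in zip(answers, calibrated_confidences)
    ]

# The second answer is flagged because its calibrated confidence is low.
for answer, confidence, needs_review in flag_low_confidence(["Paris", "1887"], [0.94, 0.41]):
    print(f"{answer!r}: confidence={confidence:.2f}, needs_review={needs_review}")
```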
