MIT’s Thermometer Method Efficiently Calibrates Large Language Models

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new method called Thermometer to efficiently calibrate large language models (LLMs) and improve their reliability across diverse tasks.

Key Takeaways: Thermometer addresses the challenge of calibrating LLMs, which can generate inaccurate responses while reporting confidence levels that do not match their actual accuracy:

  • The method involves building a smaller, auxiliary model that runs on top of an LLM to calibrate it, enabling better-calibrated responses on unseen tasks.
  • Thermometer is more efficient than other approaches, requiring less power-hungry computation while preserving the model’s accuracy.

Understanding the Problem: LLMs can be applied to a wide range of tasks, but traditional calibration methods are ineffective due to the models’ versatility:

  • Calibrating an LLM for one task using a traditional method might hurt its performance on another task.
  • Sampling from the model multiple times to obtain better-calibrated confidence estimates is computationally expensive because LLMs have billions of parameters (a short sketch of how miscalibration is typically measured follows this list).
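
Calibration here means that the model's stated confidence matches how often it is actually right. A standard way to quantify the mismatch is expected calibration error (ECE); the sketch below is a minimal, generic illustration of that metric, not code from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error (ECE): average gap between stated
    confidence and observed accuracy, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            accuracy = correct[in_bin].mean()             # how often it was right
            avg_confidence = confidences[in_bin].mean()   # how sure it claimed to be
            ece += in_bin.mean() * abs(accuracy - avg_confidence)
    return ece

# A well-calibrated model's 80%-confident answers are right about 80% of the time,
# so its ECE is close to zero.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [True, False, True, True]))
```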

How Thermometer Works: The researchers leveraged a classical calibration method called temperature scaling to efficiently calibrate LLMs for new tasks:

  • Instead of requiring a labeled dataset for each new task, they train an auxiliary model that runs on top of the LLM to automatically predict the temperature needed for calibration.
  • The Thermometer model is trained on labeled datasets from a few representative tasks, enabling it to generalize to new tasks in a similar category without additional labeled data (see the temperature-scaling sketch after this list).
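
Temperature scaling divides a model's logits by a scalar temperature before the softmax, which softens or sharpens its confidence without changing which answer ranks highest. The sketch below shows that operation together with a hypothetical auxiliary network that predicts the temperature from LLM features; the TemperaturePredictor name, its inputs, and its layer sizes are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

def temperature_scale(logits: torch.Tensor, temperature: torch.Tensor) -> torch.Tensor:
    """Classical temperature scaling: divide logits by T before the softmax.
    T > 1 softens (lowers) confidence, T < 1 sharpens it, and the top-ranked
    answer is unchanged, so accuracy is preserved."""
    return torch.softmax(logits / temperature, dim=-1)

class TemperaturePredictor(nn.Module):
    """Hypothetical auxiliary network that maps LLM hidden features for a task
    to a single positive temperature. Layer sizes and the mean over the batch
    are assumptions for illustration, not the paper's design."""
    def __init__(self, feature_dim: int = 4096, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Softplus(),  # keep the predicted temperature positive
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feature_dim) hidden states taken from the frozen LLM
        return self.net(features).mean()  # one temperature for the whole task

# Usage sketch (names are placeholders); the LLM itself stays frozen:
# calibrated_probs = temperature_scale(llm_logits, predictor(llm_features))
```

Because only the small predictor is trained while the LLM stays frozen, this setup is consistent with the article's point that Thermometer avoids power-hungry computation and preserves the model's accuracy.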

Broader Implications: Thermometer could flag situations where an LLM is overconfident about false predictions, helping users avoid deploying it in scenarios where it is likely to fail:

  • By providing a clear signal about a model’s accuracy and uncertainty, Thermometer enhances the reliability of LLMs across diverse applications.
  • The method’s efficiency and ability to generalize to new tasks without labeled data make it a promising solution to the challenges of calibrating large language models (a brief usage sketch follows this list).
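
As a usage note, one simple way to act on a calibrated confidence signal is to return an answer only when confidence clears a threshold and defer otherwise; the 0.8 threshold below is an arbitrary assumption for illustration, not a value from the paper.

```python
def answer_or_defer(prediction: str, confidence: float, threshold: float = 0.8) -> str:
    """Return the LLM's answer only when its calibrated confidence clears a
    threshold; otherwise defer. The 0.8 value is illustrative, not prescribed."""
    if confidence >= threshold:
        return prediction
    return f"Low confidence ({confidence:.2f}): deferring to human review."
```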
