MIT’s Thermometer Method Efficiently Calibrates Large Language Models

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new method called Thermometer to efficiently calibrate large language models (LLMs) and improve their reliability across diverse tasks.

Key Takeaways: Thermometer addresses the challenges of calibrating LLMs, which can generate inaccurate responses and exhibit misaligned confidence levels:

  • The method involves building a smaller, auxiliary model that runs on top of an LLM to calibrate it, enabling better-calibrated responses on unseen tasks.
  • Thermometer is more efficient than other approaches, requiring far less computation while preserving the model’s accuracy.

Understanding the Problem: LLMs can be applied to a wide range of tasks, but traditional calibration methods are ineffective due to the models’ versatility:

  • Calibrating an LLM for one task using a traditional method might hurt its performance on another task.
  • Sampling from the model multiple times to obtain better-calibrated confidence is computationally expensive due to the billions of parameters in LLMs.

How Thermometer Works: The researchers leveraged a classical calibration method called temperature scaling to efficiently calibrate LLMs for new tasks:

  • Instead of requiring a labeled dataset for each new task, they train an auxiliary model that runs on top of the LLM and automatically predicts the temperature needed to calibrate it.
  • The Thermometer model is trained on labeled datasets of a few representative tasks, enabling it to generalize to new tasks in a similar category without additional labeled data.
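The classical temperature-scaling step that Thermometer builds on can be sketched in a few lines. This is a minimal illustration of temperature scaling itself, not the researchers' Thermometer model: it assumes you have raw logits from a model and shows how dividing them by a temperature before the softmax softens or sharpens the resulting confidence.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=-1, keepdims=True)

def scale_confidence(logits, temperature):
    # Classical temperature scaling: divide logits by T before softmax.
    # T > 1 softens (lowers) confidence; T < 1 sharpens it; T = 1 is unchanged.
    return softmax(logits / temperature)

# Hypothetical overconfident logits for three answer choices.
logits = np.array([4.0, 1.0, 0.5])
uncalibrated = scale_confidence(logits, 1.0)
softened = scale_confidence(logits, 2.0)
```

Thermometer's contribution is predicting a suitable temperature for an unseen task directly from the model's representations, rather than fitting it with labeled validation data for that task.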

Broader Implications: Thermometer could help users identify situations where an LLM is overconfident about false predictions, preventing deployment in scenarios where it may fail:

  • By providing a clear signal about a model’s accuracy and uncertainty, Thermometer enhances the reliability of LLMs across diverse applications.
  • The method’s efficiency and ability to generalize to new tasks without labeled data make it a promising solution for the challenges of calibrating large language models.
