Method prevents an AI model from being overconfident about wrong answers

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new method called Thermometer to efficiently calibrate large language models (LLMs) and improve their reliability across diverse tasks.

Key Takeaways: Thermometer addresses the challenges of calibrating LLMs, which can generate inaccurate responses and report confidence levels that do not match their actual accuracy:

  • The method builds a smaller auxiliary model that runs on top of an LLM and calibrates it, yielding better-calibrated responses on previously unseen tasks.
  • Thermometer is more efficient than other approaches, requiring less power-hungry computation while preserving the model’s accuracy.

Understanding the Problem: LLMs can be applied to a wide range of tasks, but traditional calibration methods assume a single fixed task, so the models’ versatility renders them ineffective:

  • Calibrating an LLM for one task with a traditional method can hurt its performance on another task.
  • Sampling from the model many times to obtain better-calibrated confidence is computationally expensive, since LLMs have billions of parameters; the sketch after this list shows what “better calibrated” means in practice.
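
To make “calibrated” concrete: a model is well calibrated when its stated confidence matches its empirical accuracy. Below is a minimal NumPy sketch of expected calibration error (ECE), a standard metric for that mismatch; the function name, binning scheme, and toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Illustrative ECE: bin predictions by confidence, then compare each
    bin's average confidence to its empirical accuracy. A well-calibrated
    model that reports 80% confidence should be right about 80% of the
    time. (Hypothetical helper for exposition, not the authors' code.)"""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight gap by fraction of samples in bin
    return ece

# An overconfident model: claims ~90% confidence but is right 50% of the time.
conf = np.array([0.90, 0.92, 0.88, 0.91])
hits = np.array([1.0, 0.0, 0.0, 1.0])
print(expected_calibration_error(conf, hits))  # ~0.40, a large calibration gap
```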

How Thermometer Works: The researchers leveraged a classical calibration method called temperature scaling to efficiently calibrate LLMs for new tasks:

  • Temperature scaling normally fits the temperature on a labeled dataset for each task; instead, the researchers train an auxiliary model that runs on top of an LLM and automatically predicts the temperature needed for calibration.
  • The Thermometer model is trained on labeled datasets from a few representative tasks, which lets it generalize to new tasks in a similar category without additional labeled data (the scaling step itself is sketched below).
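
As a rough illustration of the underlying mechanic: classical temperature scaling divides a model’s logits by a scalar temperature T before the softmax, which reshapes confidence without changing the predicted answer. The NumPy sketch below shows only that scaling step; in Thermometer, T would come from the auxiliary model rather than the hand-picked value used here (names and values are illustrative, not the authors’ code).

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def temperature_scale(logits, temperature):
    """Classical temperature scaling: divide logits by T > 0 before softmax.
    T > 1 softens (lowers) confidence; T < 1 sharpens it. The predicted
    class never changes, because argmax is invariant to positive scaling,
    so accuracy is preserved while confidence is recalibrated."""
    return softmax(logits / temperature)

logits = np.array([4.0, 1.0, 0.5])       # an overconfident prediction
print(softmax(logits))                    # ~[0.93, 0.05, 0.03]
# In Thermometer, T is predicted by the auxiliary model; here we pick T = 2.
print(temperature_scale(logits, 2.0))     # ~[0.72, 0.16, 0.12], same argmax
```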

Broader Implications: Thermometer could help users spot situations where an LLM is overconfident about false predictions, and avoid deploying it in scenarios where it is likely to fail (one possible downstream use is sketched after this list):

  • By providing a clear signal about a model’s accuracy and uncertainty, Thermometer enhances the reliability of LLMs across diverse applications.
  • The method’s efficiency and ability to generalize to new tasks without labeled data make it a promising solution for the challenges of calibrating large language models.
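
As one hypothetical example of how such a signal might be used downstream (the threshold, names, and policy below are assumptions, not something described in the source): calibrated confidence makes a simple answer-or-abstain policy meaningful.

```python
import numpy as np

def answer_or_abstain(calibrated_probs, threshold=0.75):
    """Hypothetical downstream use of calibrated confidence: act on the
    model's answer only when its calibrated confidence clears a threshold,
    otherwise flag the case for human review. Threshold is an assumption."""
    top = int(np.argmax(calibrated_probs))
    confidence = float(calibrated_probs[top])
    if confidence >= threshold:
        return top, confidence
    return None, confidence  # abstain: confidence too low to trust

probs = np.array([0.72, 0.16, 0.12])  # e.g., output of temperature scaling
print(answer_or_abstain(probs))        # (None, 0.72) -> route to a human
```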
