Meta AI’s “System 2 Distillation” Represents an Efficiency Breakthrough for LLMs

Meta AI researchers are advancing a new technique for Large Language Models (LLMs) called “System 2 distillation,” which lets models reach the results of deliberate, multi-step reasoning without generating the intermediate steps at inference time. The finding holds implications for making models faster and more computationally efficient.

System 1 and System 2 thinking in cognitive science and LLMs: The work draws a parallel between the two modes of human thinking – fast, intuitive System 1 and slow, analytical System 2 – and how they map onto LLMs:

  • LLMs are usually considered analogous to System 1 thinking, as they can generate text quickly but struggle with tasks requiring deliberate reasoning and planning.
  • AI researchers have shown that LLMs can mimic System 2 thinking by prompting them to generate intermediate reasoning steps before providing their final answer, leading to more accurate results on logical reasoning tasks (see the prompt sketch below).
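
For illustration, here is a minimal sketch of the two prompting styles on the classic bat-and-ball question; the question and exact wording are illustrative placeholders, not taken from the paper:

```python
# A "System 1"-style prompt asks for the answer directly; a "System 2"-style
# prompt (chain-of-thought) asks the model to reason through intermediate
# steps before answering. The question is the classic bat-and-ball puzzle.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

# Direct ("System 1") prompting: models often blurt out the intuitive but wrong $0.10.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought ("System 2") prompting: spelling out intermediate steps makes
# the correct $0.05 answer more likely, at the cost of extra generated tokens.
cot_prompt = (
    f"Q: {question}\n"
    "Let's think step by step before giving the final answer.\nA:"
)
```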

Introducing System 2 distillation: Meta AI researchers have developed a technique called “System 2 distillation” that teaches LLMs to perform complex tasks without generating intermediate reasoning steps:

  • The process involves prompting the LLM to solve a problem using System 2 techniques, verifying the responses for correctness, discarding the intermediate steps, and fine-tuning the model on the initial question and final answer, as sketched after this list.
  • This allows the model to skip the reasoning steps and jump straight to the answer, making the process faster and less computationally expensive.
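
A minimal sketch of what such a pipeline could look like, assuming a hypothetical `model` object exposing `generate` and `finetune` methods; the majority-vote agreement filter below is one plausible unsupervised check standing in for the verification step described above, not necessarily the paper's exact procedure:

```python
import collections

def extract_answer(response: str) -> str:
    """Pull the final answer out of a System 2 response (here, its last line)."""
    return response.strip().splitlines()[-1].strip()

def distill_system2(model, questions, n_samples=8, min_agreement=0.75):
    """Build (question, answer) training pairs with the reasoning steps removed."""
    dataset = []
    for question in questions:
        # 1. Prompt with a System 2 technique (chain-of-thought in this sketch).
        prompt = f"{question}\nLet's think step by step, then state the final answer."
        answers = [extract_answer(model.generate(prompt)) for _ in range(n_samples)]

        # 2. Verify: keep only questions where the sampled answers largely agree.
        answer, count = collections.Counter(answers).most_common(1)[0]
        if count / n_samples < min_agreement:
            continue  # inconsistent responses are discarded

        # 3. Discard the intermediate reasoning; keep only question -> answer.
        dataset.append({"prompt": question, "completion": answer})

    # 4. Fine-tune the model on the distilled pairs so it answers directly,
    #    without generating the reasoning steps at inference time.
    model.finetune(dataset)
    return dataset
```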

Evaluating System 2 distillation: The researchers evaluated their method on various reasoning tasks and System 2 prompting techniques using the Llama-2-70B model:

  • The results show that System 2 distillation can significantly improve the performance of LLMs on complex reasoning tasks, often matching or exceeding the accuracy of the original System 2 methods while generating responses much faster and with less compute.
  • However, the researchers found that not all types of reasoning can be distilled into the model’s fast inference mode, suggesting that some tasks may always require deliberate, step-by-step reasoning.

Looking ahead: While more research is needed to fully understand the potential and limitations of System 2 distillation, the technique is expected to be a powerful optimization tool for mature LLM pipelines that perform specific tasks at each step:

  • Future systems that can distill routine tasks into fast responses will have more time to spend reasoning about the tasks they cannot yet do well, just as humans do.
  • Distillation will likely play a significant role in making LLMs more efficient and effective in handling complex reasoning tasks.
