DeepMind’s Gemma Scope Helps Demystify How LLMs Work

DeepMind introduces Gemma Scope, a new toolset for understanding the inner workings of large language models, with the potential to enable more robust and transparent AI systems.

Interpreting LLM activations is crucial but challenging: Understanding the decision-making process of large language models (LLMs) is essential for their safe and transparent deployment in critical applications. However, interpreting the billions of neuron activations generated during LLM inference is a major challenge.

  • LLMs process inputs through a complex network of artificial neurons; the values these neurons emit, known as “activations,” guide the model’s response and represent its understanding of the input (see the sketch after this list).
  • A single concept can trigger millions of activations across different LLM layers, and a single neuron might fire for many different concepts, making interpretation difficult.
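To make “activations” concrete, here is a minimal PyTorch sketch that captures the hidden states leaving one transformer block via a forward hook. The model name and layer index are illustrative choices, not part of Gemma Scope itself:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model; any decoder-only Hugging Face model behaves the same way.
model_name = "google/gemma-2-2b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

captured = {}

def save_activations(module, inputs, output):
    # Decoder layers return a tuple whose first element is the hidden-state
    # tensor, shaped (batch, seq_len, d_model).
    hidden = output[0] if isinstance(output, tuple) else output
    captured["acts"] = hidden.detach()

# Hook an arbitrary middle layer (Gemma 2 2B has 26 transformer blocks).
handle = model.model.layers[12].register_forward_hook(save_activations)

inputs = tokenizer("The Eiffel Tower is in Paris.", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

print(captured["acts"].shape)  # (1, seq_len, 2304) for Gemma 2 2B
```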

Sparse autoencoders (SAEs) help interpret LLM activations: SAEs are models that can compress the dense activations of LLMs into a more interpretable form, making it easier to understand which input features activate different parts of the model.

  • SAEs are trained on the activations of a layer in a deep learning model, learning to represent the input activations with a small set of active features and then reconstruct the original activations from those features (a minimal sketch follows this list).
  • Previous research on SAEs mostly focused on studying tiny language models or a single layer in larger models, limiting their effectiveness in providing a comprehensive understanding of LLM decision-making.
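A minimal PyTorch sketch of the idea, with illustrative dimensions and a classic L1 sparsity penalty rather than DeepMind’s exact training recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Maps dense activations (d_model) into a wider, sparse feature space
    (d_sae), then reconstructs the original activations from those features."""
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_sae)
        self.decoder = nn.Linear(d_sae, d_model)

    def forward(self, acts):
        features = F.relu(self.encoder(acts))  # sparse feature activations
        recon = self.decoder(features)         # reconstruction of the input
        return features, recon

# Illustrative sizes: 2304 matches Gemma 2 2B's residual stream.
sae = SparseAutoencoder(d_model=2304, d_sae=16_384)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

acts = torch.randn(64, 2304)  # stand-in for a batch of captured activations
features, recon = sae(acts)

# Classic SAE objective: reconstruct faithfully while penalizing (L1) the
# feature activations so that only a few fire per input.
loss = F.mse_loss(recon, acts) + 1e-3 * features.abs().sum(dim=-1).mean()
opt.zero_grad()
loss.backward()
opt.step()
```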

Gemma Scope takes a comprehensive approach to LLM interpretability: DeepMind’s Gemma Scope provides SAEs for every layer and sublayer of its Gemma 2 2B and 9B models, enabling researchers to study how different features evolve and interact across the entire LLM.

  • Gemma Scope comprises more than 400 SAEs, collectively representing over 30 million learned features from the Gemma 2 models.
  • The toolset uses DeepMind’s new JumpReLU SAE architecture, which lets the SAE learn a different activation threshold for each feature, making it easier to detect features and estimate their strength while keeping the number of active features low and improving reconstruction fidelity (sketched after this list).
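The JumpReLU paper defines the nonlinearity as z · H(z − θ), where H is the Heaviside step function and θ is a learned per-feature threshold. A minimal forward-pass sketch (the straight-through gradient trick needed to actually train θ is omitted for brevity):

```python
import torch
import torch.nn as nn

class JumpReLU(nn.Module):
    """JumpReLU: a pre-activation passes through unchanged only if it clears
    its feature's learned threshold; otherwise the output is exactly zero."""
    def __init__(self, d_sae: int):
        super().__init__()
        # Log-parameterization keeps each threshold positive; initializing
        # all thresholds to exp(0) = 1.0 is purely illustrative.
        self.log_threshold = nn.Parameter(torch.zeros(d_sae))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        threshold = self.log_threshold.exp()
        return z * (z > threshold).float()

jump = JumpReLU(d_sae=4)
z = torch.tensor([[0.5, 1.5, -0.3, 2.0]])
print(jump(z))  # entries at or below their thresholds are zeroed out
```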

Broader implications for AI transparency and robustness: By making Gemma Scope publicly available on Hugging Face, DeepMind is encouraging researchers to further explore and develop techniques for understanding the inner workings of LLMs, which could lead to more transparent and robust AI systems (a download sketch follows the list below).

  • Improved interpretability of LLMs is crucial for their safe deployment in critical applications that have a low tolerance for mistakes and require transparency.
  • Tools like Gemma Scope can help researchers gain insights into how LLMs process information and make decisions, enabling the development of more reliable and explainable AI systems.
  • As LLMs continue to advance and find applications in various domains, the ability to understand and interpret their decision-making processes will be essential for fostering trust and accountability in AI-driven systems.
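For those who want to experiment, the released SAE weights can be fetched with standard huggingface_hub calls. The repository id and file path below are assumptions about the release layout, not confirmed names; check the Gemma Scope collection on Hugging Face for the exact values.

```python
import numpy as np
from huggingface_hub import hf_hub_download

# Assumed repo id and file layout; verify against the Gemma Scope
# collection on Hugging Face before running.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",
    filename="layer_12/width_16k/average_l0_82/params.npz",
)
params = np.load(path)
# Expect encoder/decoder weight matrices, biases, and JumpReLU thresholds.
print({name: arr.shape for name, arr in params.items()})
```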
