DeepMind’s Gemma Scope Helps Demystify How LLMs Work

DeepMind introduces Gemma Scope, a new toolset for understanding the inner workings of large language models and addressing interpretability challenges, with the potential to enable more robust and transparent AI systems.

Interpreting LLM activations is crucial but challenging: Understanding the decision-making process of large language models (LLMs) is essential for their safe and transparent deployment in critical applications. However, interpreting the billions of neuron activations generated during LLM inference is a major challenge.

  • LLMs process inputs through a complex network of artificial neurons, and the values emitted by these neurons, known as “activations,” guide the model’s response and represent its understanding of the input.
  • Each concept can trigger millions of activations across different LLM layers, and each neuron might activate across various concepts, making interpretation difficult.

Sparse autoencoders (SAEs) help interpret LLM activations: SAEs are models that can compress the dense activations of LLMs into a more interpretable form, making it easier to understand which input features activate different parts of the model.

  • SAEs are trained on the activations of a layer in a deep learning model, learning to represent the input activations with a smaller set of features and then reconstruct the original activations from these features.
  • Previous research on SAEs mostly focused on studying tiny language models or a single layer in larger models, limiting their effectiveness in providing a comprehensive understanding of LLM decision-making.
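The encode-then-reconstruct loop described above can be sketched as a toy sparse autoencoder. The dimensions, random weights, and `encode`/`decode` helper names here are illustrative assumptions, not DeepMind's implementation; a real SAE would be trained to minimize reconstruction error plus a sparsity penalty, and its feature dictionary would be far larger.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 8       # width of the LLM layer being studied (illustrative; real layers have thousands of dims)
d_features = 32   # overcomplete feature dictionary, wider than the layer itself

# Randomly initialised weights; training would fit these to the layer's activations.
W_enc = rng.normal(scale=0.1, size=(d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(scale=0.1, size=(d_features, d_model))
b_dec = np.zeros(d_model)

def encode(activations):
    """Map dense LLM activations to non-negative feature activations (ReLU)."""
    return np.maximum(activations @ W_enc + b_enc, 0.0)

def decode(features):
    """Reconstruct the original activations from the sparse feature code."""
    return features @ W_dec + b_dec

x = rng.normal(size=(1, d_model))       # a stand-in activation vector from one layer
f = encode(x)                            # which features fired, and how strongly
x_hat = decode(f)                        # reconstruction of the original activations
recon_error = np.mean((x - x_hat) ** 2)  # training drives this down under a sparsity constraint
```

Interpretability comes from the middle step: each entry of `f` is a candidate human-inspectable feature, so researchers can ask which inputs make a given feature fire rather than staring at raw neuron values.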

Gemma Scope takes a comprehensive approach to LLM interpretability: DeepMind’s Gemma Scope provides SAEs for every layer and sublayer of its Gemma 2 2B and 9B models, enabling researchers to study how different features evolve and interact across the entire LLM.

  • Gemma Scope comprises more than 400 SAEs, collectively representing over 30 million learned features from the Gemma 2 models.
  • The toolset uses DeepMind’s new JumpReLU SAE architecture, which lets the SAE learn a separate activation threshold for each feature. This makes it easier to detect which features are present and estimate their strength, while keeping feature activations sparse and improving reconstruction fidelity.
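The per-feature threshold idea can be shown in a few lines. This is a minimal sketch of the JumpReLU activation only, not DeepMind's full architecture; the threshold values below are made up for illustration, and in practice each threshold is a learned parameter.

```python
import numpy as np

def jump_relu(z, theta):
    """JumpReLU: zero out any pre-activation at or below its feature's
    threshold theta; above the threshold, pass the value through unchanged
    (so the surviving magnitude still estimates feature strength)."""
    return np.where(z > theta, z, 0.0)

z = np.array([-0.5, 0.1, 0.3, 0.9])      # pre-activations for four features
theta = np.array([0.2, 0.2, 0.2, 0.5])   # learned per-feature thresholds (illustrative values)

sparse_code = jump_relu(z, theta)        # keeps only 0.3 and 0.9; a plain ReLU would also keep the weak 0.1
```

Compared with an ordinary ReLU, which passes every positive value, the learned threshold suppresses weak, noisy activations per feature, which is how the architecture keeps the code sparse without shrinking the strong activations it keeps.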

Broader implications for AI transparency and robustness: By making Gemma Scope publicly available on Hugging Face, DeepMind is encouraging researchers to further explore and develop techniques for understanding the inner workings of LLMs, which could lead to more transparent and robust AI systems.

  • Improved interpretability of LLMs is crucial for their safe deployment in critical applications that have a low tolerance for mistakes and require transparency.
  • Tools like Gemma Scope can help researchers gain insights into how LLMs process information and make decisions, enabling the development of more reliable and explainable AI systems.
  • As LLMs continue to advance and find applications in various domains, the ability to understand and interpret their decision-making processes will be essential for fostering trust and accountability in AI-driven systems.
