DeepMind’s Gemma Scope peers under the hood of large language models

DeepMind introduces Gemma Scope, an open suite of sparse autoencoders for examining the inner workings of its Gemma 2 large language models, with the potential to enable more robust and transparent AI systems.

Interpreting LLM activations is crucial but challenging: Understanding the decision-making process of large language models (LLMs) is essential for their safe and transparent deployment in critical applications. However, interpreting the billions of neuron activations generated during inference is a major challenge.

  • LLMs process inputs through a complex network of artificial neurons, and the values these neurons emit, known as “activations,” guide the model’s response and represent its understanding of the input (see the sketch after this list for one way to capture them).
  • Each concept can trigger millions of activations across different LLM layers, and each neuron might activate across various concepts, making interpretation difficult.
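
To make this concrete, here is a minimal sketch of how one might capture a single layer’s activations from a Gemma 2 model using a PyTorch forward hook. The checkpoint name and layer index are illustrative assumptions, not part of Gemma Scope itself:

```python
# Minimal sketch: capturing one transformer layer's activations with a
# PyTorch forward hook. Checkpoint name and layer index are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "google/gemma-2-2b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

captured = {}

def save_activations(module, inputs, output):
    # Gemma 2 decoder layers return a tuple; element 0 is the hidden
    # states, shape (batch, seq_len, hidden_dim).
    captured["acts"] = output[0].detach()

# Attach the hook to one decoder layer (index 12 is an arbitrary choice).
handle = model.model.layers[12].register_forward_hook(save_activations)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

print(captured["acts"].shape)  # e.g. torch.Size([1, 5, 2304]) for Gemma 2 2B
```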

Sparse autoencoders (SAEs) help interpret LLM activations: SAEs are models that can compress the dense activations of LLMs into a more interpretable form, making it easier to understand which input features activate different parts of the model.

  • SAEs are trained on the activations of a layer in a deep learning model, learning to represent those activations with a smaller set of features and then to reconstruct the original activations from these features (a minimal sketch follows this list).
  • Previous research on SAEs mostly focused on studying tiny language models or a single layer in larger models, limiting their effectiveness in providing a comprehensive understanding of LLM decision-making.
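
As a rough illustration of the idea, here is a minimal sparse autoencoder sketch in PyTorch. The dimensions and the simple L1 sparsity penalty are illustrative assumptions; Gemma Scope’s actual SAEs use the JumpReLU architecture described in the next section:

```python
# Minimal sketch of a sparse autoencoder over LLM activations.
# Dimensions and the L1 penalty are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # dense -> features
        self.decoder = nn.Linear(d_features, d_model)  # features -> reconstruction

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))  # non-negative feature activations
        recon = self.decoder(features)
        return recon, features

sae = SparseAutoencoder(d_model=2304, d_features=16384)
x = torch.randn(8, 2304)  # a batch of layer activations

recon, features = sae(x)

# Training objective: reconstruct the input while keeping features sparse.
l1_coeff = 1e-3
loss = nn.functional.mse_loss(recon, x) + l1_coeff * features.abs().mean()
```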

Gemma Scope takes a comprehensive approach to LLM interpretability: DeepMind’s Gemma Scope provides SAEs for every layer and sublayer of its Gemma 2 2B and 9B models, enabling researchers to study how different features evolve and interact across the entire LLM.

  • Gemma Scope comprises more than 400 SAEs, collectively representing over 30 million learned features from the Gemma 2 models.
  • The toolset uses DeepMind’s new JumpReLU SAE architecture, which lets the SAE learn a separate activation threshold for each feature, making it easier to detect features and estimate their strength while keeping the number of active features low (i.e., high sparsity) and improving reconstruction fidelity; a sketch follows below.
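
The sketch below illustrates the core of the JumpReLU idea under simplifying assumptions: each feature has a learnable threshold, and pre-activations at or below that threshold are zeroed while larger values pass through unchanged. The straight-through gradient machinery used to train the thresholds in practice is omitted:

```python
# Minimal sketch of a JumpReLU activation with a learned per-feature
# threshold. The straight-through estimator used to train the thresholds
# in practice is omitted for brevity.
import torch
import torch.nn as nn

class JumpReLU(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        # One learnable threshold per feature (kept positive via exp).
        self.log_theta = nn.Parameter(torch.zeros(num_features))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        theta = self.log_theta.exp()
        # Zero pre-activations at or below the feature's threshold;
        # pass larger values through unchanged.
        return z * (z > theta).to(z.dtype)

jump = JumpReLU(num_features=16384)
z = torch.randn(8, 16384)
features = jump(z)  # sparse: most entries fall below threshold and become zero
```

Because the threshold is learned per feature, the SAE can separately decide whether a feature is present (above threshold) and how strong it is (the value passed through), which is what allows high sparsity without sacrificing reconstruction fidelity.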

Broader implications for AI transparency and robustness: By making Gemma Scope publicly available on Hugging Face (a loading sketch appears at the end of this section), DeepMind is encouraging researchers to further explore and develop techniques for understanding the inner workings of LLMs, which could lead to more transparent and robust AI systems.

  • Improved interpretability of LLMs is crucial for their safe deployment in critical applications that have a low tolerance for mistakes and require transparency.
  • Tools like Gemma Scope can help researchers gain insights into how LLMs process information and make decisions, enabling the development of more reliable and explainable AI systems.
  • As LLMs continue to advance and find applications in various domains, the ability to understand and interpret their decision-making processes will be essential for fostering trust and accountability in AI-driven systems.
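
For readers who want to experiment, here is a hedged sketch of downloading one SAE’s parameters from Hugging Face. The repository id and file path are assumptions based on the public release layout and may differ, so check the repository for exact paths:

```python
# Hedged sketch: fetching one Gemma Scope SAE's parameters.
# Repo id and file path below are assumptions; verify against the repo.
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",                   # assumed repo id
    filename="layer_20/width_16k/average_l0_71/params.npz",   # assumed path
)
params = np.load(path)
print(list(params.keys()))  # e.g. encoder/decoder weights and thresholds
```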