DeepMind’s JumpReLU Architecture Sheds Light on the Inner Workings of Language Models

DeepMind has made significant progress in interpreting large language models (LLMs) with the introduction of the JumpReLU sparse autoencoder (SAE), a deep learning architecture that decomposes the complex activations of LLMs into smaller, more understandable components.

The challenge of interpreting LLMs: Understanding how the billions of neurons in LLMs work together to process and generate language is extremely difficult due to the complex activation patterns across the network:

  • Individual neurons don’t map cleanly to specific concepts: a single neuron can activate for thousands of different concepts, and a single concept can activate a broad range of neurons.
  • The massive scale of LLMs, with billions of parameters trained on huge datasets, makes the activation patterns extremely complex and hard to interpret.

Sparse autoencoders as a solution: SAEs aim to compress the dense activations of LLMs into a small number of interpretable intermediate features:

  • SAEs encode LLM activations into a sparse intermediate representation, then decode it back, trying to minimize the difference between the original and reconstructed activations while using as few features as possible (see the sketch after this list).
  • The key challenge is balancing sparsity against reconstruction fidelity: too sparse and important information is lost; not sparse enough and the features remain hard to interpret.
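
To make the encode/decode loop concrete, here is a minimal numpy sketch of a vanilla ReLU SAE. All dimensions, initializations, and variable names are illustrative assumptions, not DeepMind’s implementation; a real SAE is trained by gradient descent on a combination of the reconstruction and sparsity terms computed at the end.

```python
import numpy as np

# Hypothetical dimensions: d_model is the LLM activation width,
# d_feat is the (larger) dictionary of sparse features.
d_model, d_feat = 8, 32
rng = np.random.default_rng(0)

# Encoder/decoder weights (randomly initialized here; trained in practice).
W_enc = rng.normal(0, 0.1, (d_model, d_feat))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(0, 0.1, (d_feat, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation vector x into sparse features, then decode."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU keeps features non-negative
    x_hat = f @ W_dec + b_dec               # linear reconstruction
    return f, x_hat

x = rng.normal(size=d_model)           # stand-in for one LLM activation
f, x_hat = sae_forward(x)

recon_loss = np.sum((x - x_hat) ** 2)  # reconstruction fidelity
sparsity = np.count_nonzero(f)         # L0 sparsity: number of active features
print(recon_loss, sparsity)
```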

JumpReLU SAE architecture: DeepMind’s JumpReLU SAE improves upon previous architectures by using a dynamic activation function that learns a separate threshold for each neuron in the sparse feature vector (sketched in code after this list):

  • This allows JumpReLU to find a better balance between sparsity and reconstruction fidelity compared to other state-of-the-art SAEs like DeepMind’s Gated SAE and OpenAI’s TopK SAE.
  • Experiments on DeepMind’s Gemma 2 9B LLM show that JumpReLU minimizes both “dead” features that never activate and overly active features that fire so indiscriminately they carry no signal about specific learned concepts.
  • JumpReLU features are as interpretable as other leading SAEs while being more efficient to train, making it practical for large language models.
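
The activation function itself is simple to state: a feature’s pre-activation passes through unchanged only if it clears that feature’s own learned threshold, and is zeroed otherwise. Below is a minimal numpy sketch of this behavior; the example values are made up, and in the paper the thresholds are learned during training (via straight-through estimators, since the hard jump has no useful gradient).

```python
import numpy as np

def jumprelu(z, theta):
    """JumpReLU: pass z through only where it exceeds that feature's
    threshold theta; otherwise output 0. Unlike ReLU (theta = 0 for
    every feature), each feature gets its own cutoff, which suppresses
    weak, noisy activations without shrinking strong ones."""
    return np.where(z > theta, z, 0.0)

z = np.array([0.05, 0.4, -0.2, 0.9])    # pre-activations for 4 features
theta = np.array([0.1, 0.5, 0.1, 0.3])  # per-feature thresholds (learned in training)
print(jumprelu(z, theta))               # -> [0.  0.  0.  0.9]
```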

Potential applications for understanding and steering LLMs: SAEs can help researchers identify and understand the features LLMs use to process language, enabling techniques to steer model behavior and mitigate issues like bias and toxicity:

  • Anthropic used SAEs to find features in its Claude Sonnet model that activate on specific concepts, such as the Golden Gate Bridge; the same approach could surface features tied to harmful content so they can be suppressed.
  • By manipulating the sparse activations and decoding them back into the model, users could potentially control aspects of the output such as tone, readability, or technicality (see the steering sketch below).
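
To illustrate the steering idea, here is a sketch that continues the numpy SAE above: encode an activation, clamp one feature, and decode the result into an edited activation that would be patched back into the model’s forward pass. The feature index and clamp value are hypothetical, not values from any published model.

```python
# Continuing the SAE sketch above (sae_forward, W_dec, b_dec, rng, d_model).
FORMAL_TONE_FEATURE = 7  # hypothetical id of a feature we want to amplify
CLAMP_VALUE = 5.0        # hypothetical activation strength

def steer(x, feature_id=FORMAL_TONE_FEATURE, value=CLAMP_VALUE):
    """Encode, clamp one feature, and decode an edited activation."""
    f, _ = sae_forward(x)     # sparse features for this activation
    f[feature_id] = value     # force the chosen feature on (or 0.0 to ablate it)
    return f @ W_dec + b_dec  # edited activation, to be patched into the LLM

x_steered = steer(rng.normal(size=d_model))
```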

Analyzing deeper: While SAEs represent a promising approach to interpreting LLMs, much work remains to be done in this active area of research. Key questions include how well the interpretable features identified by SAEs truly represent the model’s reasoning, how manipulation of these features can be used to reliably control model behavior, and whether SAEs can be effectively scaled up to the largest state-of-the-art LLMs with hundreds of billions of parameters. Nonetheless, DeepMind’s JumpReLU SAE represents an important step forward in the challenging task of peering inside the black box of large language models.

Source: DeepMind makes big jump toward interpreting LLMs with sparse autoencoders
