New research suggests language models aren’t merely memorizing information

New research explores how Large Language Models (LLMs) draw on their pretraining data when reasoning, offering insight into whether these AI systems learn generalizable problem-solving strategies or simply retrieve memorized information.

Research overview: Scientists investigated two LLMs of different sizes (7B and 35B parameters) to understand how they utilize pretraining data when solving mathematical reasoning tasks versus answering factual questions.

  • The study analyzed 2.5 billion training tokens to identify which documents influenced model outputs
  • Researchers compared the models' approach to mathematical reasoning tasks against their handling of factual questions
  • The investigation focused on understanding whether LLMs truly reason or simply retrieve memorized information
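The core tool in this kind of analysis is an influence score that ranks training documents by how much they shift the model's behavior on a query. The paper uses a sophisticated influence-function method; as a hypothetical illustration only, the same idea can be sketched with a TracIn-style score, where a training example's influence is the dot product of its loss gradient with the query's loss gradient (here on a toy logistic-regression model, not an LLM):

```python
import numpy as np

def grad_logistic(w, x, y):
    """Gradient of the log-loss for one example under weights w."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def influence_scores(w, train_X, train_y, query_x, query_y):
    """Score each training example by the dot product of its loss
    gradient with the query's loss gradient (TracIn-style sketch)."""
    gq = grad_logistic(w, query_x, query_y)
    return np.array([grad_logistic(w, x, y) @ gq
                     for x, y in zip(train_X, train_y)])

# Toy "documents": the first and third align with the query, the
# second is unrelated to it.
w = np.array([1.0, -1.0])
train_X = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
train_y = np.array([1.0, 0.0, 1.0])
query_x, query_y = np.array([1.0, 0.0]), 1.0

scores = influence_scores(w, train_X, train_y, query_x, query_y)
ranking = np.argsort(-scores)  # most influential document first
```

Ranking every pretraining document this way for each query is what lets the researchers ask which documents matter most for a given answer.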

Key findings about factual knowledge: The models typically relied on distinct, largely non-overlapping sets of documents when answering different factual questions, suggesting a direct link between specific training documents and specific answers.

  • For factual questions, answers were commonly found within the most influential training documents
  • The models' approach to factual queries appeared more retrieval-based, drawing directly from specific training examples
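The retrieval signature the researchers looked for is simple to state: for factual questions, the literal answer tends to show up in the top-ranked influential documents. A minimal sketch of that check (with made-up documents, not the study's data):

```python
def answer_in_top_docs(answer, ranked_docs, k=5):
    """Check whether the literal answer string appears in any of the
    k most influential documents (a retrieval-style signature)."""
    return any(answer.lower() in doc.lower() for doc in ranked_docs[:k])

# Hypothetical documents already ranked by influence score.
docs = [
    "The capital of France is Paris, a city on the Seine.",
    "Mount Everest is the highest mountain above sea level.",
    "Derivatives: a worked calculus example with full steps.",
]
answer_in_top_docs("Paris", docs)  # True: the factual answer is present
answer_in_top_docs("42", docs)     # False: the answer never appears
```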

Mathematical reasoning insights: The research revealed that LLMs employ a more sophisticated approach to solving mathematical problems than simple fact retrieval.

  • Documents showing similar problem-solving methods often influenced multiple reasoning questions within the same task category
  • The actual answers to reasoning questions rarely appeared in the most influential training documents
  • Intermediate reasoning steps were also typically absent from the highly influential training data
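One way to quantify "the same documents influence multiple questions of the same type" is to correlate per-document influence scores across queries: if two slope-calculation questions lean on the same procedural documents, their influence vectors should correlate strongly, while a factual question's vector should not. A toy sketch with invented score vectors:

```python
import numpy as np

def influence_correlation(scores_a, scores_b):
    """Pearson correlation between two queries' per-document influence
    scores; high values mean the same documents drive both answers."""
    a = scores_a - scores_a.mean()
    b = scores_b - scores_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two same-task reasoning questions leaning on the same documents...
q1 = np.array([0.9, 0.8, 0.1, 0.0])
q2 = np.array([0.8, 0.9, 0.0, 0.1])
# ...versus a factual question with its own distinct sources.
q3 = np.array([0.0, 0.1, 0.9, 0.8])

same_task = influence_correlation(q1, q2)   # high, near 1
cross_task = influence_correlation(q1, q3)  # low or negative
```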

Evidence of procedural learning: The study demonstrated that LLMs develop generalized problem-solving strategies through exposure to procedural examples in their training data.

  • Influential documents often contained demonstrations of solution methods, including formulae and code examples
  • The models appeared to synthesize procedural knowledge from similar reasoning patterns across multiple training examples
  • This finding suggests LLMs can develop genuine problem-solving capabilities rather than relying solely on memorization
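To see whether influential documents demonstrate solution methods rather than state answers, one can flag documents containing formulae, code, or step-by-step language. The sketch below is a crude keyword heuristic of my own, far simpler than the study's analysis, included only to make the distinction concrete:

```python
import re

# Crude markers of "procedural" content: code fragments, assignments
# or formulae, and step-by-step solution language (illustrative only).
PATTERNS = [
    re.compile(r"def |import |print\("),       # code fragments
    re.compile(r"[a-z]\s*=\s*[-\d(]"),         # assignments / formulae
    re.compile(r"\bstep \d\b", re.IGNORECASE), # worked solutions
]

def looks_procedural(doc):
    """Return True if the document shows any marker of a solution method."""
    return any(p.search(doc) for p in PATTERNS)

looks_procedural("Step 1: compute the slope m = (y2 - y1) / (x2 - x1)")
looks_procedural("Paris has been the capital of France for centuries.")
```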

Future implications: This research challenges previous assumptions about LLM capabilities and suggests these systems may be capable of more genuine reasoning than previously thought, though further research is needed to fully understand the extent and limitations of these capabilities.

Procedural knowledge in pretraining drives reasoning in large language models
