Microsoft unveils framework for data-enhanced AI apps

A new framework for categorizing RAG tasks: Microsoft researchers have proposed a four-level framework that categorizes retrieval-augmented generation (RAG) tasks for large language models (LLMs) by the complexity of external data retrieval and reasoning they require.

  • The framework aims to help enterprises make informed decisions about integrating external knowledge into LLMs and understanding when more complex systems may be necessary.
  • The categorization ranges from simple explicit fact retrieval to complex hidden rationale queries requiring domain-specific reasoning.
  • This approach recognizes the varying levels of sophistication needed for different types of user queries and LLM applications.

Breaking down the four-level categorization: The proposed framework classifies user queries into four distinct levels of increasing complexity in the data retrieval and reasoning they demand (a minimal routing sketch follows the list below).

  • Level 1: Explicit facts – Queries that require retrieval of explicitly stated facts from data sources.
  • Level 2: Implicit facts – Queries necessitating inference of information not explicitly stated, involving basic reasoning.
  • Level 3: Interpretable rationales – Queries demanding understanding and application of domain-specific rules explicitly provided in external resources.
  • Level 4: Hidden rationales – The most complex queries, requiring the uncovering and leveraging of implicit domain-specific reasoning methods not explicitly described in the data.
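
To make the levels concrete, here is a minimal, illustrative Python sketch of how an application might represent the four levels and route a query to a handling strategy. The names (QueryLevel, route_query) are hypothetical, not from the paper, and the step of deciding which level a raw query belongs to is left to a separate classifier or the LLM itself.

```python
from enum import Enum

class QueryLevel(Enum):
    """The four query levels proposed in the framework."""
    EXPLICIT_FACTS = 1            # facts stated directly in the data
    IMPLICIT_FACTS = 2            # facts that must be inferred (multi-hop)
    INTERPRETABLE_RATIONALES = 3  # apply rules provided in external resources
    HIDDEN_RATIONALES = 4         # reasoning patterns must be mined from data

def route_query(query: str, level: QueryLevel) -> str:
    """Dispatch a query to a handling strategy based on its assigned level."""
    if level is QueryLevel.EXPLICIT_FACTS:
        return "basic retrieval + generation"
    if level is QueryLevel.IMPLICIT_FACTS:
        return "iterative / multi-hop retrieval with reasoning"
    if level is QueryLevel.INTERPRETABLE_RATIONALES:
        return "prompted application of domain rules with chain-of-thought"
    return "domain-specific fine-tuning or specialized reasoning"

print(route_query("What was revenue in Q2 2023?", QueryLevel.EXPLICIT_FACTS))
```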

Explicit fact queries: The foundation of RAG: Explicit fact queries represent the simplest form of RAG tasks, focusing on retrieving factual information directly stated in the available data.

  • These queries utilize basic RAG techniques to access and present information.
  • Challenges in this category include dealing with unstructured datasets and multi-modal elements.
  • Solutions for handling explicit fact queries often involve multi-modal document parsing and embedding models to enhance retrieval accuracy; a minimal embedding-retrieval sketch follows this list.
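
As a rough illustration of the basic retrieval step, the sketch below indexes a few parsed document chunks with embeddings and returns the closest match for a query. Here embed() is only a placeholder for a real embedding model, so the vectors (and therefore the ranking) are meaningless outside the example; the point is the shape of the pipeline.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g. a sentence-transformer).
    A deterministic pseudo-random vector keeps the sketch self-contained."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(384)
    return v / np.linalg.norm(v)

# A tiny "index" of parsed document chunks and their embeddings.
chunks = [
    "The warranty period for model X is 24 months.",
    "Model X weighs 1.4 kg and ships with a USB-C charger.",
]
index = np.stack([embed(c) for c in chunks])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query's."""
    scores = index @ embed(query)          # cosine similarity (vectors are unit length)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

print(retrieve("How long is the warranty on model X?"))
```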

Implicit fact queries: A step up in complexity: Implicit fact queries require LLMs to go beyond simple retrieval, engaging in basic reasoning and deduction to infer information not explicitly stated in the data.

  • This category often involves “multi-hop question answering,” where multiple pieces of information must be connected to derive an answer.
  • Advanced RAG techniques such as IRCoT (Interleaving Retrieval with Chain-of-Thought) and RAT (Retrieval-Augmented Thoughts) are employed to handle these queries; a simplified loop in this spirit is sketched after this list.
  • Knowledge graphs combined with LLMs can be particularly effective in addressing implicit fact queries.
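
The sketch below shows a simplified loop in the spirit of interleaving retrieval with chain-of-thought reasoning; it is not the exact procedure from the IRCoT or RAT papers. The llm() and retrieve() callables are assumed stand-ins for any LLM client and document store.

```python
def answer_multi_hop(question: str, llm, retrieve, max_hops: int = 4) -> str:
    """Interleave retrieval with reasoning: each partial reasoning step becomes
    the next retrieval query, and retrieved evidence is added to the context
    before the model reasons again.

    `llm(prompt) -> str` and `retrieve(query) -> list[str]` are supplied by
    the caller.
    """
    evidence: list[str] = []
    thought = question
    for _ in range(max_hops):
        evidence.extend(retrieve(thought))   # fetch passages for the latest thought
        prompt = (
            f"Question: {question}\n"
            "Evidence:\n- " + "\n- ".join(evidence) + "\n"
            "Think step by step. If you can answer, begin the line with 'ANSWER:'."
        )
        thought = llm(prompt)                # next reasoning step, or the final answer
        if thought.startswith("ANSWER:"):
            return thought.removeprefix("ANSWER:").strip()
    return thought                           # fall back to the last reasoning step
```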

Interpretable rationale queries: Applying domain-specific rules: Interpretable rationale queries represent a significant jump in complexity, requiring LLMs to understand and apply domain-specific rules that are not part of their pre-training data.

  • These queries often call for prompt tuning and chain-of-thought reasoning techniques; a prompt-assembly sketch follows this list.
  • Approaches like Automate-CoT (Automated Chain-of-Thought) can be employed to enhance the LLM’s ability to handle interpretable rationale queries.
  • This category highlights the importance of integrating external, domain-specific knowledge into LLM systems.
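
One common way to handle this level is to inject the retrieved domain rules directly into the prompt and ask the model to reason through them step by step. The helper below is a hypothetical sketch of that prompt-assembly step, not a template prescribed by the paper.

```python
def build_rationale_prompt(question: str, domain_rules: list[str]) -> str:
    """Assemble a prompt that supplies domain-specific rules retrieved from
    external resources and asks the model to apply them step by step."""
    rules = "\n".join(f"{i}. {rule}" for i, rule in enumerate(domain_rules, 1))
    return (
        "You are assisting with a regulated workflow. Apply ONLY the rules below.\n\n"
        f"Rules:\n{rules}\n\n"
        f"Question: {question}\n\n"
        "Reason through the applicable rules step by step, citing each rule "
        "by number, then give a final decision."
    )

prompt = build_rationale_prompt(
    "Can we refund an order placed 45 days ago?",
    ["Refunds are allowed within 30 days of purchase.",
     "Exceptions require written approval from a manager."],
)
print(prompt)
```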

Hidden rationale queries: The pinnacle of RAG complexity: Hidden rationale queries present the most significant challenge in the RAG framework, involving domain-specific reasoning that is not explicitly stated in the available data.

  • These queries require LLMs to analyze data, extract patterns, and apply this knowledge to new situations.
  • Addressing hidden rationale queries often necessitates domain-specific fine-tuning of LLMs; a data-preparation sketch follows this list.
  • This category underscores the limitations of general-purpose LLMs and the need for specialized approaches in certain domains.
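
Because the reasoning patterns are not written down anywhere, fine-tuning data typically has to be assembled from historical decisions. The sketch below shows one hypothetical way to package (question, reasoning, answer) examples as chat-style JSONL, a format many fine-tuning pipelines accept; the example records and file name are invented for illustration.

```python
import json

# Hypothetical examples pairing questions with the implicit expert reasoning
# behind their answers; mining such traces from historical data is the hard part.
examples = [
    {
        "question": "Should this claim be escalated?",
        "reasoning": "Claims above the usual payout range with missing documentation "
                     "have historically been routed to a senior adjuster.",
        "answer": "Yes, escalate to a senior adjuster.",
    },
]

# Write a chat-style JSONL file of the kind many fine-tuning pipelines accept.
with open("hidden_rationale_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "user", "content": ex["question"]},
                {"role": "assistant", "content": ex["reasoning"] + "\n\n" + ex["answer"]},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```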

Implications for enterprise LLM integration: The proposed framework offers valuable insights for organizations looking to leverage LLMs and RAG technologies in their operations.

  • By understanding the different levels of query complexity, enterprises can better assess their specific needs and choose appropriate LLM solutions.
  • The framework highlights the importance of recognizing when more complex systems or specialized approaches may be necessary, rather than relying solely on general-purpose LLMs.
  • It also emphasizes the ongoing need for research and development in advanced RAG techniques to address increasingly complex query types.
