Contextual AI Releases ‘RAG 2.0’ Feature to Boost Enterprise AI

Revolutionizing Enterprise AI: Contextual AI, a Silicon Valley startup, has developed RAG 2.0, an advanced platform that significantly improves retrieval-augmented generation (RAG) for enterprise applications.

The genesis of RAG: Douwe Kiela, CEO of Contextual AI, recognized the limitations of large language models (LLMs) early on, particularly their inability to access real-time data efficiently.

  • In 2020, Kiela and his team at Facebook AI Research published a seminal paper introducing RAG, a method for grounding foundation models in external, up-to-date information at generation time.
  • RAG allows LLMs to access data beyond their initial training, making them more accurate and relevant for enterprise use (a minimal sketch of the pattern follows).
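For readers new to the pattern, the short Python sketch below illustrates the basic retrieve-then-generate loop that RAG describes. The toy corpus, the TF-IDF retriever, and the prompt format are placeholder assumptions for illustration, not Contextual AI's implementation.

```python
# Minimal retrieve-then-generate sketch (illustrative only, not Contextual AI's
# implementation). The toy corpus, TF-IDF retriever, and prompt format are
# placeholder assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 revenue grew 12% year over year, driven by enterprise subscriptions.",
    "The support policy covers hardware replacements within 30 days.",
    "RAG pairs a retriever with a language model to ground answers in fresh data.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents against the query by TF-IDF cosine similarity, return top k."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1:], matrix[:-1]).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

query = "How did revenue change last quarter?"
context = "\n".join(retrieve(query, documents))

# The retrieved passages are prepended to the prompt so the LLM can answer from
# data it was never trained on; the actual generation call is omitted here.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)
```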

Contextual AI’s breakthrough: The startup’s RAG 2.0 platform offers substantial improvements over existing RAG implementations.

  • The company claims roughly 10x better accuracy and performance per parameter than competing RAG offerings.
  • This optimization allows smaller infrastructure to run models that typically require much larger compute resources.
  • For example, a 70-billion-parameter model could run on infrastructure designed for only 7 billion parameters without sacrificing accuracy.

Technical innovations: RAG 2.0’s performance gains come from closely integrating the retriever and language model architectures.

  • The platform refines retrievers through backpropagation, adjusting the weights of the underlying neural networks.
  • Instead of training separate networks for the retriever and the LLM, Contextual AI offers a unified platform that aligns and tunes both jointly through backpropagation (see the toy sketch after this list).
  • This approach improves precision and response quality while reducing the likelihood of hallucinated answers.
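The toy PyTorch sketch below shows the general idea of tuning a retriever and a generator jointly with backpropagation, marginalizing the answer likelihood over retrieved documents in the spirit of the original RAG formulation. The tiny encoders, vocabulary size, and data are invented for illustration and do not reflect Contextual AI's actual architecture.

```python
# Toy sketch of tuning a retriever and a generator jointly with backpropagation,
# in the spirit of end-to-end RAG training (not Contextual AI's architecture).
# The tiny encoders, vocabulary size, and data below are invented for illustration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim, vocab, n_docs = 16, 100, 4

query_encoder = torch.nn.Linear(dim, dim)                       # retriever: query encoder
doc_embeddings = torch.nn.Parameter(torch.randn(n_docs, dim))   # retriever: document index
generator = torch.nn.Linear(2 * dim, vocab)                     # stand-in "LLM" head

optimizer = torch.optim.Adam(
    list(query_encoder.parameters()) + [doc_embeddings] + list(generator.parameters()),
    lr=1e-3,
)

query_features = torch.randn(1, dim)   # placeholder query representation
target_token = 7                       # placeholder "correct answer" token id

for step in range(100):
    q = query_encoder(query_features)                        # (1, dim)
    retrieval_scores = q @ doc_embeddings.T                   # (1, n_docs)
    log_p_doc = F.log_softmax(retrieval_scores, dim=-1).squeeze(0)

    # Condition the generator on each candidate document and marginalize:
    # loss = -log sum_d p(doc d | query) * p(answer | query, doc d)
    inputs = torch.cat([q.expand(n_docs, -1), doc_embeddings], dim=-1)
    log_p_answer = F.log_softmax(generator(inputs), dim=-1)[:, target_token]
    loss = -torch.logsumexp(log_p_doc + log_p_answer, dim=0)

    optimizer.zero_grad()
    loss.backward()    # gradients flow into the retriever *and* the generator
    optimizer.step()
```

Because the retrieval distribution sits inside the loss, the same gradient step that improves the generator's answers also nudges the retriever toward documents that make those answers more likely.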

Versatile implementation: RAG 2.0 is designed to work with various open-source language models and accommodate customer preferences.

  • The platform was developed using NVIDIA Megatron-LM on NVIDIA H100 and A100 Tensor Core GPUs hosted in Google Cloud.
  • It can run in the cloud, on-premises, or fully disconnected, making it suitable for a wide range of industries.

Addressing complex data challenges: Contextual AI employs a “mixture of retrievers” approach to handle diverse data formats.

  • Different retriever types are deployed based on the data format, such as Graph RAG for video files and vector-based RAG for text or PDF formats.
  • A neural reranking algorithm orders the retrieved results before feeding them to the LLM for answer generation (a schematic sketch follows).
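The schematic sketch below shows how a mixture of retrievers might be routed by data format and then consolidated by a reranker before generation. The retriever choices, scores, and function names are assumptions for illustration only, not Contextual AI's implementation.

```python
# Schematic sketch of a mixture of retrievers plus a reranking stage
# (the retriever choices, scores, and function names are illustrative
# assumptions, not Contextual AI's implementation).
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float

def vector_retriever(query: str) -> list[Passage]:
    # Placeholder for dense vector search over text and PDF content.
    return [Passage("Quarterly report excerpt ...", 0.72)]

def graph_retriever(query: str) -> list[Passage]:
    # Placeholder for graph-based retrieval over video transcripts and metadata.
    return [Passage("Video transcript segment ...", 0.65)]

RETRIEVER_BY_FORMAT = {
    "text": vector_retriever,
    "pdf": vector_retriever,
    "video": graph_retriever,
}

def rerank(query: str, passages: list[Passage]) -> list[Passage]:
    # Stand-in for a neural reranker; here we simply sort by retriever score.
    return sorted(passages, key=lambda p: p.score, reverse=True)

def retrieve_all(query: str, formats: list[str]) -> list[Passage]:
    candidates: list[Passage] = []
    for fmt in formats:
        candidates.extend(RETRIEVER_BY_FORMAT[fmt](query))
    return rerank(query, candidates)

top = retrieve_all("What did the CEO say about margins?", ["pdf", "video"])
print([p.text for p in top])   # ordered context that would be handed to the LLM
```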

Industry impact and future prospects: Contextual AI recently closed an $80 million Series A funding round, including investment from NVIDIA’s NVentures.

  • The company is focusing on high-value, knowledge-intensive use cases that can significantly impact enterprise productivity and cost savings.
  • With plans to double its workforce by year-end, Contextual AI is poised for rapid growth in the competitive AI startup landscape.

Broader implications: As enterprises increasingly adopt AI technologies, Contextual AI’s advancements in RAG could reshape how businesses leverage LLMs for real-world applications.

  • The improved efficiency and accuracy of RAG 2.0 may accelerate the adoption of AI in industries previously hesitant due to performance or infrastructure limitations.
  • However, as with any rapidly evolving technology, it remains to be seen how Contextual AI’s innovations will stack up against future developments in the field and potential competing approaches from established tech giants.
