Old-school LexisNexis is taking a multi-model approach for its AI assistant Protégé, combining large language models with smaller, more efficient alternatives to create a customizable legal tool. Rather than relying exclusively on resource-intensive large language models (LLMs), the company selectively uses smaller models and distillation techniques to cut costs and speed up responses while maintaining high-quality results for specific legal workflows.
The big picture: LexisNexis designed Protégé to assist legal professionals by combining the power of large language models from Anthropic and Mistral with smaller, task-specific models that can be tailored to individual law firms’ workflows.
- The company employs a “best model for the task” philosophy, using smaller language models or distilled versions of larger ones when they deliver comparable results with faster response times.
- This hybrid approach reflects a growing industry trend where organizations balance the capabilities of LLMs with the efficiency advantages of smaller, specialized models.
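The article does not describe how LexisNexis implements its routing, but the “best model for the task” philosophy can be sketched as a simple route table that sends each task type to the cheapest model known to handle it well. All model names, costs, and task labels below are hypothetical illustrations, not details from LexisNexis:

```python
# Hypothetical sketch of a "best model for the task" router.
# Model names, costs, and the route table are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelSpec:
    name: str
    cost_per_1k_tokens: float  # illustrative relative cost, not real pricing

# Illustrative registry: one large frontier model, one small distilled model.
MODELS = {
    "large": ModelSpec("frontier-llm", 3.00),
    "small": ModelSpec("distilled-slm", 0.15),
}

# Route table mapping task types to the smallest model that handles them well.
ROUTES = {
    "draft_brief": "large",          # long-form drafting needs the big model
    "summarize": "small",            # summarization works on the small model
    "link_quotes": "small",          # quote verification is a narrow task
    "deposition_questions": "large",
}

def pick_model(task: str) -> ModelSpec:
    """Return the model routed to a task, defaulting to the large model."""
    return MODELS[ROUTES.get(task, "large")]
```

Under this kind of scheme, a narrow task like quote linking runs on the cheap small model, while unfamiliar or open-ended tasks fall back to the large one.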
Key functionalities: Protégé streamlines common legal tasks that would typically fall to paralegals or associates within law firms.
- The system can draft legal briefs and complaints grounded in a firm’s existing documents, suggest workflow next steps, and generate prompts to refine searches.
- It assists with critical verification tasks like linking quotes in filings for accuracy checks and summarizing complex legal documents.
- The tool can also help prepare for case development by drafting deposition questions and creating timelines.
Why this matters: LexisNexis is positioning Protégé as the first step toward creating truly personalized AI assistance in the legal profession.
- “Our vision is that every legal professional will have a personal assistant to help them do their job based on what they do, not what other lawyers do,” explained Jeff Riehl, CTO of LexisNexis Legal and Professional.
- This approach demonstrates how industry-specific AI solutions are evolving beyond generic capabilities toward deeply customized tools that understand specialized workflows.
Technical approach: The company’s multi-model strategy combines large and small language models to optimize for both capability and efficiency.
- For simple tasks like chatbots or basic code completion, smaller language models (SLMs) often provide sufficient performance with lower computational requirements.
- The company also employs distillation techniques, where larger models effectively “teach” smaller models, creating more efficient versions that retain much of the original capability.
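The article does not detail LexisNexis’s training setup, but the “teaching” step in distillation is classically done by training the small model to match the large model’s temperature-scaled output distribution rather than hard labels. A minimal numeric sketch of that loss, with all values illustrative:

```python
# Minimal sketch of knowledge distillation with temperature-scaled soft
# targets. The logits and temperature here are illustrative examples only.
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; T > 1 flattens (softens) the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of student soft predictions against teacher soft targets.

    The student learns from the teacher's full probability distribution,
    which carries more signal per example than a single hard label.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
```

The loss is smallest when the student’s distribution matches the teacher’s, so minimizing it over a training set pulls the small model toward the large model’s behavior.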
Between the lines: Protégé represents an emerging trend where AI implementation prioritizes customization and efficiency over raw model size, suggesting that the future of professional AI tools may favor specialized, streamlined solutions over universal models.
Small models as paralegals: LexisNexis distills models to build AI assistant