LexisNexis, the long-established legal research provider, is taking a multi-model approach with its AI assistant Protégé, combining large language models with smaller, more efficient alternatives to create a customizable legal tool. Rather than relying exclusively on resource-intensive large language models (LLMs), the company selectively uses smaller models and distillation techniques to cut costs and response times while maintaining high-quality results for specific legal workflows.
The big picture: LexisNexis designed Protégé to assist legal professionals by combining the power of large language models from Anthropic and Mistral with smaller, task-specific models that can be tailored to individual law firms’ workflows.
- The company employs a “best model for the task” philosophy, using smaller language models, or distilled versions of larger ones, when they deliver comparable results with faster response times.
- This hybrid approach reflects a growing industry trend where organizations balance the capabilities of LLMs with the efficiency advantages of smaller, specialized models.
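In practice, a “best model for the task” philosophy often amounts to routing each request to the cheapest model believed adequate for it. The sketch below is a hypothetical illustration of that idea, not LexisNexis's actual implementation; the task names and model labels are assumptions for the example.

```python
# Hypothetical task-to-model router illustrating a "best model for
# the task" strategy. Task names and model labels are illustrative,
# not taken from any real product.

TASK_MODEL_MAP = {
    "summarize_document": "small-model",   # simple task: SLM suffices
    "chat": "small-model",
    "draft_brief": "large-model",          # complex drafting: use the LLM
    "deposition_questions": "large-model",
}

def route(task: str) -> str:
    """Return the model assigned to a task, defaulting to the
    most capable (large) model for anything unrecognized."""
    return TASK_MODEL_MAP.get(task, "large-model")
```

Defaulting unknown tasks to the large model trades cost for safety: the router only downgrades to a small model where the mapping explicitly says the cheaper option is good enough.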
Key functionalities: Protégé streamlines common legal tasks that would typically fall to paralegals or associates within law firms.
- The system can draft legal briefs and complaints grounded in a firm’s existing documents, suggest workflow next steps, and generate prompts to refine searches.
- It assists with critical verification tasks like linking quotes in filings for accuracy checks and summarizing complex legal documents.
- The tool can also help prepare for case development by drafting deposition questions and creating timelines.
Why this matters: LexisNexis is positioning Protégé as the first step toward creating truly personalized AI assistance in the legal profession.
- “Our vision is that every legal professional will have a personal assistant to help them do their job based on what they do, not what other lawyers do,” explained Jeff Riehl, CTO of LexisNexis Legal and Professional.
- This approach demonstrates how industry-specific AI solutions are evolving beyond generic capabilities toward deeply customized tools that understand specialized workflows.
Technical approach: The company’s multi-model strategy combines large and small language models to optimize for both capability and efficiency.
- For simple tasks like chatbots or basic code completion, smaller language models (SLMs) often provide sufficient performance with lower computational requirements.
- The company also employs distillation techniques, where larger models effectively “teach” smaller models, creating more efficient versions that retain much of the original capability.
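The standard form of the distillation technique described above trains the small (“student”) model to match the softened output distribution of the large (“teacher”) model, typically via a KL-divergence loss. This is a minimal NumPy sketch of that loss, a generic illustration rather than LexisNexis's specific method:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature
    produces softer (more spread-out) distributions."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the
    student's predictions; zero when the two distributions match."""
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

Minimizing this loss over a training corpus pushes the student toward the teacher's behavior, which is how a distilled model can retain much of the original capability at a fraction of the inference cost.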
Between the lines: Protégé represents an emerging trend where AI implementation prioritizes customization and efficiency over raw model size, suggesting that the future of professional AI tools may favor specialized, streamlined solutions over universal models.