Lawyers risk dismissal over AI-fabricated cases, scandalized firm warns

The discovery of AI-generated fake legal citations has sent shockwaves through the legal community, particularly after a Morgan & Morgan attorney cited non-existent cases in a Walmart lawsuit. Law firms are now grappling with how to safely integrate AI tools while preventing hallucinated content from contaminating legal proceedings.

The incident at hand: One of Morgan & Morgan’s attorneys, Rudwin Ayala, included eight fabricated case citations generated by ChatGPT in court documents filed against Walmart.

  • The firm swiftly removed Ayala from the case, replacing him with supervisor T. Michael Morgan
  • Morgan & Morgan agreed to cover Walmart’s fees and expenses related to the erroneous filing
  • The firm maintains that no other employees were aware of the AI-generated content

Broader industry context: The legal sector has witnessed multiple instances of AI hallucinations infiltrating court documents over the past two years.

  • Reuters has identified at least seven cases where lawyers inappropriately cited AI-generated fake cases
  • Several attorneys have faced sanctions for submitting artificially created citations
  • T. Michael Morgan described the potential incorporation of fake cases into common law as a “nauseatingly frightening thought”

Preventive measures: Morgan & Morgan has implemented new safeguards to prevent future AI-related mishaps.

  • The firm introduced a mandatory acknowledgment of AI hallucination risks that attorneys must accept before accessing the firm's AI platform
  • Enhanced training programs are being developed to improve AI literacy among legal staff
  • Clear warnings have been issued that failing to verify AI outputs could result in sanctions, disciplinary action, or termination

Evolving professional standards: The legal industry is adapting its practices to address the challenges posed by AI integration.

  • Legal experts emphasize the importance of AI literacy for attorneys
  • Law firms must balance the efficiency benefits of AI tools with the need for thorough verification
  • Professionals must develop a deep understanding of both AI capabilities and limitations

Looking ahead – Navigating the AI frontier: The Morgan & Morgan incident serves as a crucial wake-up call for the legal profession, highlighting the urgent need to develop robust protocols for AI use in legal practice while maintaining the integrity of the justice system. The challenge lies not in avoiding AI altogether, but in creating effective guardrails that prevent AI hallucinations from compromising legal proceedings.

