
The legal profession is confronting the real-world consequences of AI hallucination as recent graduates face career setbacks from overreliance on chatbots. A case in Utah has highlighted the dangerous intersection of legal practice and AI tools, where fake citations in court filings led to sanctions, firing, and a pointed judicial warning about AI’s limitations. This incident demonstrates how professional standards are evolving in response to AI adoption, with courts and firms establishing new guardrails to protect both the justice system and vulnerable professionals.

The big picture: A recent law school graduate lost his job after including AI-hallucinated legal citations in a court filing, marking the first fake citation case discovered in Utah’s legal system.

  • Judge Mark Kouris ordered sanctions after finding multiple mis-cited cases and at least one completely fictional legal precedent generated by ChatGPT.
  • The incident highlights the growing tension between convenient AI tools and professional responsibility in highly regulated fields like law.

Key details: The law firm claimed the graduate was working as an unlicensed law clerk who failed to disclose his ChatGPT use when drafting the document.

  • Attorneys Douglas Durbano and Richard Bednar faced judicial scrutiny for submitting the filing without proper verification of its accuracy.
  • The law firm had no AI policy in place at the time but quickly established one after the incident.

What the court said: Judge Kouris emphasized that “every attorney has an ongoing duty to review and ensure the accuracy of their court filings.”

  • The court noted that the attorneys “fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT.”
  • Kouris warned that “the legal profession must be cautious of AI due to its tendency to hallucinate information.”

The consequences: Attorney Bednar was ordered to pay the opposition’s attorneys’ fees and donate $1,000 to “And Justice for All,” a legal aid organization.

  • The law clerk who used ChatGPT was fired despite the absence of formal policies against such AI use.
  • The sanctions were relatively mild because the attorneys quickly accepted responsibility, unlike other lawyers who have denied AI use when caught.

Why this matters: Fake legal citations generate significant harms by wasting court resources, increasing costs for opposing parties, and potentially depriving clients of proper legal representation.

  • The case represents a cautionary tale as professional industries grapple with integrating AI tools while maintaining ethical standards and quality control.

Behind the numbers: The fictional case, "Royer v. Nelson, 2007 UT App 74, 156 P.3d 789," was easily identifiable as fake: when prompted for details, ChatGPT provided only vague information that should have raised red flags.

The broader context: This incident reflects growing concerns about students and recent graduates becoming overly dependent on AI tools without understanding their limitations.

  • Law firms are now facing the challenge of educating new hires about responsible AI use in professional contexts where accuracy is paramount.
  • Even legal non-profits acknowledge they are “incorporating AI in their services” while emphasizing that “every attorney has a legal and professional responsibility” to ensure accuracy.
