
A California attorney has been fined $10,000 by the state’s 2nd District Court of Appeal for submitting a legal brief containing 21 fabricated case quotations generated by ChatGPT. This appears to be the largest fine issued by a California court over AI fabrications and comes as legal authorities scramble to regulate AI use in the judiciary, with new guidelines requiring courts to establish AI policies by December 15.

What happened: Los Angeles-area attorney Amir Mostafavi filed a state court appeal in July 2023 that contained 21 fake quotes out of 23 case citations, all generated by ChatGPT.

  • Mostafavi told the court he used ChatGPT to improve his appeal after writing it but didn’t read the AI-generated text before submission.
  • The three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting court time and taxpayer money.
  • The court published a blistering opinion warning that “no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified.”

Why this matters: Legal experts are tracking an exponential rise in cases where attorneys cite AI-generated fake legal authority, with over 600 cases nationwide and 52 in California alone.

  • Damien Charlotin, who teaches AI and law at a business school in Paris and tracks these cases globally, reports seeing “a few cases a day” now compared to “a few cases a month” when he started earlier this year.
  • Stanford University’s RegLab found that while three out of four lawyers plan to use generative AI, some AI models produce hallucinations in as many as one out of three queries.
  • Jenny Wondracek, who leads a tracking project, has documented three instances of judges citing fake legal authority in their decisions.

The regulatory response: California’s legal system is rapidly implementing new AI oversight measures.

  • The state’s Judicial Council issued guidelines two weeks ago requiring judges and court staff to either ban generative AI or adopt AI use policies by December 15.
  • The California Bar Association is considering strengthening its code of conduct to account for AI use following a request by the California Supreme Court.
  • Other states are taking their own approaches, including temporary suspensions and requiring attorneys caught citing fabricated authority to complete ethics courses.

What they’re saying: Mostafavi acknowledged the risks while defending AI’s utility in legal practice.

  • “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”
  • Mark McKenna from UCLA’s Institute of Technology, Law & Policy called the fine appropriate punishment for “an abdication of your responsibility as a party representing someone.”
  • UCLA law professor Andrew Selbst noted the pressure on legal professionals: “This is getting shoved down all our throats. It’s being pushed in firms and schools and a lot of places and we have not yet grappled with the consequences of that.”

The bigger picture: The problem stems from AI models’ tendency to “hallucinate”—generate convincing but fake information—particularly when supporting facts don’t exist.

  • “The harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you,” Charlotin explained.
  • Experts expect the issue to worsen before improving due to rushed AI adoption without proper training on ethical use.
  • Many lawyers still don’t understand that AI can fabricate information, or they mistakenly believe legal tech tools can eliminate all false material generated by language models.
