California issues historic fine over lawyer’s ChatGPT fabrications

A California attorney has been fined $10,000 by the state’s 2nd District Court of Appeal for submitting a legal brief containing 21 fabricated case quotations generated by ChatGPT. This appears to be the largest fine issued by a California court over AI fabrications and comes as legal authorities scramble to regulate AI use in the judiciary, with new guidelines requiring courts to establish AI policies by December 15.
What happened: Los Angeles-area attorney Amir Mostafavi filed a state court appeal in July 2023 that contained 21 fake quotes out of 23 case citations, all generated by ChatGPT.
- Mostafavi told the court he used ChatGPT to improve his appeal after writing it but didn’t read the AI-generated text before submission.
- The three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting court time and taxpayer money.
- The court published a blistering opinion warning that “no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified.”
Why this matters: Legal experts are tracking a rapidly growing number of cases in which attorneys cite AI-generated fake legal authority: more than 600 nationwide, 52 of them in California alone.
- Damien Charlotin, who teaches AI and law at a business school in Paris and tracks these cases globally, reports seeing “a few cases a day” now compared to “a few cases a month” when he started earlier this year.
- Stanford University’s RegLab found that while three out of four lawyers plan to use generative AI, some AI models produce hallucinations in as many as one out of three queries.
- Jenny Wondracek, who leads a project tracking such cases, has documented three instances of judges citing fake legal authority in their decisions.
The regulatory response: California’s legal system is rapidly implementing new AI oversight measures.
- The state’s Judicial Council issued guidelines two weeks ago requiring judges and court staff to either ban generative AI or adopt AI use policies by December 15.
- The California Bar Association is considering strengthening its code of conduct to account for AI use following a request by the California Supreme Court.
- Other states are trying different approaches, including temporary suspensions and requiring attorneys caught citing fake cases to complete ethics courses.
What they’re saying: Mostafavi acknowledged the risks while defending AI’s utility in legal practice.
- “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”
- Mark McKenna of UCLA’s Institute for Technology, Law & Policy called the fine appropriate punishment for “an abdication of your responsibility as a party representing someone.”
- UCLA law professor Andrew Selbst noted the pressure on legal professionals: “This is getting shoved down all our throats. It’s being pushed in firms and schools and a lot of places and we have not yet grappled with the consequences of that.”
The bigger picture: The problem stems from AI models’ tendency to “hallucinate”—generate convincing but fake information—particularly when supporting facts don’t exist.
- “The harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you,” Charlotin explained.
- Experts expect the issue to worsen before improving due to rushed AI adoption without proper training on ethical use.
- Many lawyers still don’t understand that AI fabricates information, or they wrongly assume that legal tech tools can filter out all false material generated by language models.