Stanford professor admits ChatGPT added false information to his court filing

The use of AI tools in legal and academic contexts faces new scrutiny after a prominent misinformation researcher acknowledged AI-generated errors in a court filing.

The core incident: Stanford Social Media Lab founder Jeff Hancock admitted to using ChatGPT’s GPT-4o model while preparing citations for a legal declaration, resulting in the inclusion of fabricated references.

  • The document was filed in support of Minnesota’s “Use of Deep Fake Technology to Influence an Election” law
  • The law is currently being challenged in federal court by conservative YouTuber Christopher Kohls and Minnesota state Rep. Mary Franson
  • Attorneys for the challengers requested that the declaration be excluded after discovering the non-existent citations

Technical details and impact: AI "hallucinations" – instances where AI models generate plausible but false information – introduced the erroneous citations into the legal document.

  • The AI tool fabricated two citations and incorrectly attributed authors in another reference
  • Hancock used GPT-4o specifically to organize citations and identify relevant articles, not to write the document’s content
  • The situation highlights the risks of relying on AI tools for academic and legal citation work without thorough verification

Defense and clarification: Hancock maintains that the citation errors do not undermine the fundamental arguments presented in his declaration.

  • He emphasized that he personally wrote and reviewed the declaration’s substance
  • The researcher states that his claims remain supported by current scholarly research
  • Hancock expressed regret for any confusion but stands behind the substantive points in his filing

Looking ahead: The incident is a cautionary tale about the limits of AI tools in professional and legal contexts where accuracy is paramount. It may shape future policies on AI use in legal documentation and academic work, and it raises questions about when AI assistance must be disclosed and how AI-generated material should be verified.
