Stanford professor admits ChatGPT added false information to his court filing

The use of AI tools in legal and academic contexts faces new scrutiny after a prominent misinformation researcher acknowledged AI-generated errors in a court filing.

The core incident: Stanford Social Media Lab founder Jeff Hancock admitted to using ChatGPT’s GPT-4o model while preparing citations for a legal declaration, resulting in the inclusion of fabricated references.

  • The document was filed in support of Minnesota’s “Use of Deep Fake Technology to Influence an Election” law
  • The law is currently being challenged in federal court by conservative YouTuber Christopher Kohls and Minnesota state Rep. Mary Franson
  • Attorneys for the challengers requested the document be excluded after discovering non-existent citations

Technical details and impact: AI “hallucinations” – instances where AI models generate plausible but false information – led to the inclusion of incorrect citations in the legal document.

  • The AI tool fabricated two citations and incorrectly attributed authors in another reference
  • Hancock used GPT-4o specifically to organize citations and identify relevant articles, not to write the document’s content
  • The situation highlights the risks of relying on AI tools for academic and legal citation work without thorough verification

Defense and clarification: Hancock maintains that the citation errors do not undermine the fundamental arguments presented in his declaration.

  • He emphasized that he personally wrote and reviewed the declaration’s substance
  • The researcher states that his claims remain supported by current scholarly research
  • Hancock expressed regret for any confusion but stands behind the substantive points in his filing

Looking ahead: This incident serves as a cautionary tale about the limitations of AI tools in professional and legal contexts, where accuracy and verification are paramount. It may shape future policies on AI use in legal documentation and academic work, and it raises important questions about disclosure and verification requirements when AI tools assist with professional documents.

