The use of AI tools in legal and academic contexts faces new scrutiny after a prominent misinformation researcher acknowledged AI-generated errors in a court filing.
The core incident: Stanford Social Media Lab founder Jeff Hancock admitted to using ChatGPT’s GPT-4o model while preparing citations for a legal declaration, which introduced fabricated references into the filing.
Technical details and impact: AI “hallucinations” – instances where AI models generate plausible but false information – meant the declaration cited sources that do not exist.
Defense and clarification: Hancock maintains that the citation errors do not undermine the fundamental arguments presented in his declaration.
Looking ahead: The incident serves as a cautionary tale about the limits of AI tools in professional and legal contexts where accuracy is paramount. It may influence future policies on AI use in legal documentation and academic work, and it raises questions about when AI assistance must be disclosed and how AI-generated content should be verified before it enters professional documents.