
The 22nd edition of The Bluebook, released in May, introduces Rule 18.3 for citing AI-generated content, but legal experts are calling the new citation standard fundamentally flawed and confusing. The Bluebook serves as the foundational citation guide for the legal profession. Critics argue the new rule treats AI as a citable authority rather than a research tool, creating more confusion than clarity for legal professionals.

What the rule requires: Authors must save screenshots of AI output as PDFs when citing generative AI content like ChatGPT conversations or Google search results.

  • The rule has three sections covering large language models, search results, and AI-generated content, each with slightly different citation requirements.
  • All AI citations must include specific formatting and preservation requirements.

Why experts are critical: Legal scholars argue the rule misunderstands how AI should be used in legal research and writing.

  • “In 99% of cases, we shouldn’t be citing AI at all. We should cite the verified sources AI helped us find,” wrote Susan Tanner, a University of Louisville law professor who called the rule “bonkers.”
  • Jessica R. Gunder of the University of Idaho College of Law noted that an AI citation should document only what the AI said, not vouch for the truth of what it said.

Technical problems identified: The rule contains internal inconsistencies and unclear distinctions between AI categories.

  • Cullen O’Keefe from the Institute for Law & AI pointed out that the rule differentiates between large language models and “AI-generated content,” even though large language models are a type of AI-generated content.
  • The rule shows inconsistencies about when to use company names with model names and when to require generation dates and prompts.

What appropriate AI citations look like: Experts suggest AI should only be cited in rare cases where the AI’s output itself is the subject of discussion.

  • Tanner’s example: “OpenAI, ChatGPT-4, ‘Explain the hearsay rule in Kentucky’ (Oct. 30, 2024) (conversational artifact on file with author) (not cited for accuracy of content).”
  • Gunder provided another scenario: citing AI output to highlight its unreliability, such as when an AI tool suggested adding glue to pizza recipes to prevent cheese from falling off.

The bottom line: While experts commend The Bluebook editors for addressing AI citations, they argue the rule “lacks the typical precision for which The Bluebook is (in)famous” and may create more confusion than guidance for legal professionals.
