A courtroom encounter with an AI-generated “lawyer” has sparked controversy in the New York judicial system, highlighting the growing tension between artificial intelligence adoption and legal ethics. The incident raises significant questions about transparency, deception, and the appropriate boundaries for AI use in formal legal proceedings, especially as the technology becomes more convincingly human-like.
The big picture: An elderly man representing himself in a New York appellate court attempted to present arguments through an AI-generated video avatar without disclosing its artificial nature.
- Jerome Dewald, 74, who was representing himself in a dispute with a former employer, began playing a prerecorded video featuring what appeared to be a younger man in professional attire standing before a blurred background.
- When questioned by confused Justice Sallie Manzanet-Daniels about the identity of the person in the video, Dewald admitted, “I generated that. That is not a real person.”
Why this matters: The incident represents one of the first documented cases of AI misrepresentation in a formal courtroom setting, potentially establishing precedent for how courts will handle AI-generated content.
- The judge’s immediate negative reaction demonstrates the judicial system’s concern about transparency when artificial intelligence is used in official legal proceedings.
Reading between the lines: The judge’s strong reaction suggests courts will likely establish strict disclosure requirements for AI-generated content in legal proceedings.
- Justice Manzanet-Daniels expressed clear displeasure, stating “I don’t appreciate being misled” before ordering the video to be turned off.
- The incident highlights how realistic AI-generated content can potentially deceive even experienced legal professionals when introduced without proper disclosure.
Implications: This case foreshadows the emerging challenges courts will face as increasingly sophisticated AI tools become accessible to litigants, especially those representing themselves.
- Courts may need to develop explicit rules regarding AI-generated content, potentially requiring advance disclosure and authentication.
- The incident raises questions about whether self-represented litigants might use AI to appear more polished or professional than they actually are, potentially creating an unfair advantage.