NY court rejects AI avatar in courtroom as judges crack down on digital deception

The arrival of AI avatars in courtrooms highlights how unprepared the legal system is to handle artificially generated representations in formal proceedings. A recent incident in New York’s Supreme Court Appellate Division shows judicial authorities drawing firm boundaries around AI use in legal settings, particularly when it involves misrepresentation or could undermine court processes.

What happened: A plaintiff in an employment dispute attempted to use an AI-generated avatar to present arguments before a New York appeals court, prompting an immediate shutdown by the presiding justice.

  • Jerome Dewald, representing himself without an attorney, submitted what appeared to be a video of a young man in professional attire to deliver his argument before a panel of five justices.
  • Justice Sallie Manzanet-Daniels halted the presentation within seconds after discovering the “person” on screen was actually an AI-generated avatar rather than a real attorney or the plaintiff himself.

The plaintiff’s explanation: Dewald claimed he created the avatar to deliver a more polished presentation than he believed he could personally provide.

  • He told The Associated Press he had applied for permission to play a prerecorded video but hadn’t explicitly disclosed that it would feature an AI-generated speaker.
  • Dewald initially attempted to create a digital replica of himself but was unable to accomplish this before the hearing date.

The court’s reaction: The justices responded with immediate concern at being misled about the nature of the presentation.

  • “I don’t appreciate being misled,” Justice Manzanet-Daniels stated before ordering the video shut off and allowing Dewald to continue his argument conventionally.
  • Dewald later submitted a formal apology to the court, acknowledging that his actions had upset the judges.

The bigger picture: This incident represents the latest in a series of AI missteps in legal settings as the technology finds its way into traditional professional domains.

  • In June 2023, two attorneys and their law firm were each fined $5,000 after using an AI tool for legal research that resulted in citations of fictitious legal cases.
  • These incidents highlight the growing tension between technological innovation and the legal system’s need for authenticity, transparency, and established protocols.
