News/Law
OpenAI counter-sues Musk, claims $97B offer was a “sham” to sabotage company
The escalating legal battle between OpenAI and Elon Musk has taken a dramatic turn with OpenAI's counter-lawsuit, which alleges Musk's $97.375 billion bid was an elaborate sabotage attempt rather than a legitimate acquisition offer. This conflict represents a significant power struggle in the artificial intelligence industry, potentially impacting the future development trajectory of advanced AI systems and highlighting the increasingly personal nature of competition between AI companies and their founders. The big picture: OpenAI has counter-sued Elon Musk, claiming his nearly $100 billion acquisition bid was a "sham" deliberately timed to disrupt the company and scare away legitimate investors. The...
Apr 11, 2025
Mother sues Character.AI after platform allowed chatbots to impersonate her deceased son
Character.AI's platform faces new ethical scrutiny after allowing chatbots to impersonate a deceased suicide victim, deepening concerns about AI's potential psychological impacts and exploitation of personal identities. The case highlights growing tensions between AI company policies and harmful applications of their technology, particularly as it affects vulnerable individuals and grieving families seeking legal remedies in an emerging regulatory landscape. The big picture: A mother suing Character.AI discovered multiple chatbots impersonating her son who died by suicide, adding a disturbing dimension to her ongoing legal battle against the company. Megan Garcia's legal team identified at least four chatbots using Sewell Setzer...
Apr 6, 2025
Elderly man uses AI-generated “lawyer” in court, judge orders video stopped
A courtroom encounter with an AI-generated "lawyer" has sparked controversy in the New York judicial system, highlighting the growing tension between artificial intelligence adoption and legal ethics. The incident raises significant questions about transparency, deception, and the appropriate boundaries for AI use in formal legal proceedings, especially as the technology becomes more convincingly human-like. The big picture: An elderly man representing himself in a New York appellate court attempted to present arguments through an AI-generated video avatar without disclosing its artificial nature. Jerome Dewald, 74, who was representing himself in a dispute with a former employer, began playing a prerecorded...
Apr 5, 2025
NY court rejects AI avatar in courtroom as judges crack down on digital deception
The arrival of AI avatars in courtrooms highlights how unprepared the legal system is to handle artificially generated representations in formal proceedings. A recent incident in New York's Supreme Court Appellate Division demonstrates how judicial authorities are drawing firm boundaries around AI use in legal settings, particularly when it involves misrepresentation or could potentially undermine court processes. What happened: A plaintiff in an employment dispute attempted to use an AI-generated avatar to present arguments before a New York appeals court, prompting an immediate shutdown by the presiding justice. Jerome Dewald, representing himself without an attorney, submitted what appeared to be a...
Apr 3, 2025
Studio Ghibli may sue OpenAI over viral AI-generated art mimicking its style
The viral ChatGPT "Ghibli style" trend has sparked new debates around AI copyright implications for iconic visual styles, potentially setting significant legal precedents for creative industries. As OpenAI scrambles to block these image generation requests, legal experts suggest the situation could lead to a landmark case exploring whether AI companies can be held accountable for mimicking distinctive artistic aesthetics—pushing the boundaries of what qualifies as protected intellectual property in the AI era. The big picture: Studio Ghibli may have legal grounds to sue OpenAI over the viral trend of AI-generated images mimicking the iconic Japanese animation studio's distinctive style, according...
Apr 3, 2025
Court ruling: AI-generated child sexual abuse images protected for private possession, not distribution
A recent court ruling on AI-generated child sexual exploitation material highlights the delicate balance between First Amendment protections and fighting digital child abuse. The decision in a case involving AI-created obscene images establishes important precedent for how the legal system will address synthetic child sexual abuse material, while clarifying that prosecutors have effective tools to pursue offenders despite constitutional constraints on criminalizing private possession. The legal distinction: A U.S. district court opinion differentiates between private possession of AI-generated obscene material and acts of production or distribution, establishing important boundaries for prosecutions in the emerging field of synthetic child sexual abuse...
Apr 1, 2025
Character.AI’s new parental controls easily bypassed by teens, raising safety questions
Character.AI's new parental controls introduce a seemingly transparent monitoring system that falls short in actual protective capabilities. The chatbot startup has launched "Parental Insights" while facing two lawsuits concerning minor users, but the feature's design contains fundamental flaws that undermine its effectiveness. Despite positioning this as a step toward safety, the monitoring system relies entirely on teen cooperation and can be easily circumvented, raising questions about whether the company is genuinely prioritizing child safety or merely creating the appearance of protection. The big picture: Character.AI's new "Parental Insights" feature promises to give parents visibility into their children's platform usage but...
Mar 28, 2025
Rookie mistake: Police recruit fired for using ChatGPT on academy essay finds second chance
A New Hampshire police recruit's career took an unexpected turn after she used ChatGPT to help write a required essay at the police academy. Her case highlights the complex ethical questions surrounding AI use in law enforcement training and the consequences of academic dishonesty. Academic integrity meets AI tools: After using ChatGPT for a police academy essay assignment, recruit Ashlyn Levine initially failed to disclose her AI use, creating an ethics case that spiraled beyond simple plagiarism. Levine was dismissed from the police academy and subsequently lost her job after the incident, which involved both using AI inappropriately and not...
Mar 28, 2025
Court ruling allows publishers to pursue copyright claims against AI companies
A federal court ruling has opened the door for news publishers to pursue copyright claims against AI companies, marking a significant development in the ongoing tension between journalism and artificial intelligence. The judge's decision to allow most claims to proceed establishes a potential precedent for how copyright law applies to AI training on published content, highlighting the complex balance between technological innovation and protecting intellectual property in news media. The big picture: A federal judge has allowed The New York Times and other newspapers to proceed with most of their copyright lawsuit against OpenAI and Microsoft over the use of...
Mar 24, 2025
Publishers take legal stand against AI training on copyrighted books
It's East Coast legacy publishing vs. West Coast tech. The publishing industry finds itself locked in an escalating legal battle with tech companies over AI training on copyrighted books, with major implications for intellectual property rights in the digital age. As AI development races forward, publishers are fighting to establish precedents that protect authors' works while acknowledging the need for responsible innovation, creating tension between traditional copyright protections and technological advancement. The big picture: Publishing industry organizations including the AAP and AUP have escalated their AI copyright concerns to the White House, responding to the administration's request for input on...
Mar 21, 2025
Apple sued over delayed iPhone 16 AI features that were heavily advertised
Apple is facing a lawsuit over its delayed Apple Intelligence features for the iPhone 16, highlighting tensions between marketing promises and actual delivery timelines in the tech industry. The legal action centers on claims that Apple knowingly advertised AI capabilities it couldn't deliver at launch, potentially misleading consumers into purchasing new devices based on features that weren't yet available. This case underscores the growing scrutiny companies face when promoting AI advancements before they're fully ready for market. The big picture: Apple has been sued for allegedly falsely advertising Apple Intelligence features on the iPhone 16 lineup, with plaintiffs claiming the...
Mar 21, 2025
LexisNexis builds multi-model AI assistant for personalized legal workflows
Old school LexisNexis is leveraging a strategic multi-model approach for its AI assistant Protégé, combining large language models with smaller, more efficient alternatives to create a customizable legal tool. Rather than relying exclusively on resource-intensive large language models (LLMs), the company selectively uses smaller models and distillation techniques to optimize performance and reduce costs while maintaining high-quality results for specific legal workflows. The big picture: LexisNexis designed Protégé to assist legal professionals by combining the power of large language models from Anthropic and Mistral with smaller, task-specific models that can be tailored to individual law firms' workflows. The company employs...
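The multi-model approach described above can be pictured as a simple task router: cheap, well-scoped jobs go to a small distilled model, while open-ended work falls through to a large model. This is only an illustrative sketch; the task names and model labels are hypothetical placeholders, not LexisNexis's actual implementation.

```python
# Hypothetical sketch of task-based model routing: inexpensive, well-scoped
# tasks are sent to a small distilled model, and anything else defaults to
# the large model. All task and model names here are made up for illustration.

ROUTES = {
    "summarize_deposition": "small-distilled-model",
    "extract_citations": "small-distilled-model",
    "draft_argument": "large-frontier-model",
}

def pick_model(task: str) -> str:
    """Return the model assigned to a task, defaulting to the large model."""
    return ROUTES.get(task, "large-frontier-model")

print(pick_model("extract_citations"))   # small-distilled-model
print(pick_model("novel_research_memo"))  # large-frontier-model
```

Routing by task type is one common way to cut inference costs while reserving the most capable (and most expensive) model for the work that genuinely needs it.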
Mar 20, 2025
Not Quite Human: AI cannot legally be considered an author for copyright protection, says court
A landmark copyright ruling reaffirms that AI cannot legally be considered an author, dealing a significant blow to efforts to expand intellectual property protections to machine-generated works. This decision highlights the growing tension between rapidly evolving AI capabilities and legal frameworks designed for human creativity, and establishes an important precedent as generative AI continues to produce increasingly sophisticated creative content. The big picture: A federal appeals court unanimously ruled that copyright law requires human authorship, rejecting computer scientist Stephen Thaler's attempt to register an AI-created artwork. Judge Patricia Millett's opinion stated that "human authorship is required for registration" because many...
Mar 18, 2025
AI is boosting organized crime across Europe, blurring lines between profit and ideological motives
Artificial intelligence is becoming a powerful accelerator for organized crime across Europe, creating unprecedented challenges for law enforcement agencies. Europol's latest four-year assessment reveals a concerning evolution where AI-enhanced criminal operations are not only becoming more sophisticated but are increasingly intertwined with state-sponsored destabilization efforts. This convergence represents a fundamental threat to EU societies as criminal networks leverage advanced technologies to amplify their reach, efficiency, and destructive capabilities. The big picture: Europol's Executive Director Catherine De Bolle warns that cybercrime has evolved into a "digital arms race" targeting multiple sectors of society with increasingly devastating precision. Criminal activities now frequently...
Mar 14, 2025
AI: Criminal Intent? Fears of “thought crime” loom as AI assesses user interactions
Generative AI is blurring the line between science fiction and reality by potentially enabling AI systems to detect and report on users' criminal intentions based on their interactions. This capability raises profound questions about privacy, free speech, and the concept of "thought crimes" – ideas previously relegated to dystopian fiction but now becoming technically feasible through widely used AI systems that could monitor, interpret, and potentially report suspicious user interactions. The big picture: Modern AI systems are increasingly becoming confidants for millions of users who discuss various topics, including hypothetical criminal activities, raising questions about when AI should alert authorities about...
Mar 12, 2025
Evidence authentication standards must speak louder as AI voice cloning threatens courts
AI-generated voice cloning presents a growing threat to the legal system as courts struggle to adapt authentication standards for audio evidence. The emergence of realistic voice cloning technology has created vulnerabilities that extend beyond scams like the one that nearly victimized Gary Schildhorn, who almost sent $9,000 to fraudsters impersonating his son. These developments expose critical weaknesses in current evidentiary standards that could undermine court proceedings and justice outcomes if left unaddressed. The big picture: The Federal Rules of Evidence currently allow audio recordings to be authenticated simply by having a witness testify they recognize the voice, a standard that...
Feb 26, 2025
AI transformations tackle sports landscape, but legal challenges emerge
In the world of professional sports, artificial intelligence has fundamentally changed how athletes are discovered, developed, and marketed, with technologies ranging from advanced analytics to biometric monitoring becoming standard tools across major leagues and teams. The Evolution of Sports Recruitment: AI has transformed the traditional scouting process into a data-driven endeavor that combines performance analysis, automated highlight creation, and predictive analytics to identify talent more effectively.
- Machine learning algorithms now analyze player statistics, movement patterns, and game footage to identify promising prospects
- AI-powered systems create comprehensive player profiles by combining performance metrics with physiological data
- Automated scouting reports help teams...
Feb 25, 2025
Google faces lawsuit over AI Overviews in Search
The legal battle between education technology company Chegg and Google centers on AI-generated search previews that summarize web content directly in search results. These AI Overviews, introduced by Google in 2024, have sparked controversy over their impact on web publishers' traffic and revenue streams. Core of the dispute: Chegg has filed a lawsuit against Google, claiming that AI Overviews redirect traffic away from content publishers by providing information directly in search results.
- Chegg's CEO Nathan Schultz initiated a strategic review process alongside the legal complaint, citing significant impacts on user acquisition and revenue
- The company warns about the potential creation...
Feb 24, 2025
See you in court: Nvidia challenges EU regulators over AI startup acquisition probe
In 2024, Nvidia successfully acquired AI startup Run:ai after receiving EU regulatory approval, but the process sparked a legal battle over regulatory jurisdiction. The case highlights growing tensions between tech companies and EU regulators over the scrutiny of smaller acquisitions that fall below traditional merger review thresholds. Key background: The European Commission accepted Italy's request to review Nvidia's acquisition of Run:ai in 2024, despite the deal falling below standard EU merger revenue thresholds.
- The Commission utilized Article 22, a rarely used power allowing it to review smaller acquisitions
- The deal was ultimately approved in December 2024, but Nvidia has decided...
Feb 22, 2025
Scandalized law firm sends panicked email to staff after learning AI prepped court docs
The legal industry's adoption of AI tools has led to several high-profile incidents of attorneys submitting AI-generated false citations in court documents. Morgan & Morgan, a major law firm with over 1,000 lawyers, recently faced embarrassment when two of its attorneys cited non-existent court cases generated by AI in a lawsuit against Walmart. The incident in detail: A federal judge in Wyoming discovered nine instances of fake case law in court filings submitted by Morgan & Morgan attorneys in January 2025. The attorneys, when confronted, blamed an "internal AI tool" for generating the false citations and requested leniency from the...
Feb 19, 2025
Lawyers risk dismissal over AI-fabricated cases, scandalized firm warns
The discovery of AI-generated fake legal citations has sent shockwaves through the legal community, particularly after a Morgan & Morgan attorney cited non-existent cases in a Walmart lawsuit. Law firms are now grappling with how to safely integrate AI tools while preventing hallucinated content from contaminating legal proceedings. The incident at hand: One of Morgan & Morgan's attorneys, Rudwin Ayala, included eight fabricated case citations generated by ChatGPT in court documents filed against Walmart.
- The firm swiftly removed Ayala from the case, replacing him with supervisor T. Michael Morgan
- Morgan & Morgan agreed to cover Walmart's fees and expenses related...
Feb 13, 2025
‘Knife Hunter’ AI tool hopes to cut down on UK knife crimes
Global policing efforts to combat rising knife crime have gained a powerful ally with the development of Knife Hunter, an AI-based tool created by the University of Surrey's Institute for People-Centred AI in partnership with the Metropolitan Police. Knife crime in England and Wales saw a 4% increase from 2023 to 2024, with over 50,000 offenses recorded during this period. System capabilities and design: Knife Hunter leverages artificial intelligence to identify and catalog knives while tracking their origins and patterns of use in criminal activities.
- The AI system has been trained on more than 25,000 images spanning 550 different knife types
- The...
Feb 12, 2025
Scarlett Johansson condemns AI-generated viral video featuring fellow Hollywood celebs
The emergence of AI-generated deepfake videos has become a significant concern for celebrities and public figures, particularly when these videos are used to make political statements without consent. Recently, an unauthorized AI-generated video featuring fabricated versions of multiple Hollywood celebrities protesting against Kanye West's antisemitic statements has sparked controversy and raised important questions about AI regulation. Initial incident and response: A viral AI-generated video depicted several celebrities, including Scarlett Johansson, Jack Black, and Steven Spielberg, wearing protest shirts and making statements against antisemitism. The video showed AI versions of celebrities wearing t-shirts featuring the Star of David inside a hand...
Feb 12, 2025
Law firm brings the gavel down on AI usage after widespread staff adoption
Generative AI tools like ChatGPT and DeepSeek have seen rapid adoption in professional settings, raising concerns about data security and proper usage protocols. Hill Dickinson, a major international law firm with over 1,000 UK employees, has recently implemented restrictions on AI tool access after detecting extensive usage among its staff. Key developments: Hill Dickinson's internal monitoring revealed substantial AI tool usage, with over 32,000 hits to ChatGPT and 3,000 hits to DeepSeek within a seven-day period in early 2025.
- The firm detected more than 50,000 hits to Grammarly, a writing assistance tool
- Much of the detected usage was found to...