News/Law

Nov 21, 2024

AI tenant screening tool will stop scoring tenants after class action lawsuit

The rapid expansion of AI tools in property management is facing increased scrutiny as discriminatory practices come to light through legal challenges. Settlement overview: SafeRent, a prominent AI tenant screening service, has agreed to stop using algorithmic scoring for housing voucher applicants following a discrimination lawsuit in Massachusetts. The company will pay approximately $2.3 million to Massachusetts residents who were denied housing due to their SafeRent scores while using housing vouchers. U.S. District Judge Angel Kelley granted final approval for the settlement on Wednesday. The agreement stems from a 2022 class action lawsuit that alleged discrimination against Black and Hispanic...

Nov 21, 2024

Anti-deepfake law may have been written by AI

Minnesota's anti-deepfake election law faces scrutiny as a federal lawsuit uncovers potential AI-generated content in a legal affidavit meant to defend the legislation, raising questions about the reliability of expert testimony and the ironies of AI involvement in anti-AI legislation. Key developments: The controversy centers on an affidavit submitted by Stanford Social Media Lab founding director Jeff Hancock in support of Minnesota's law regulating deepfake technology in elections. The affidavit cites multiple academic studies that appear to be non-existent, including a purported 2023 study in the Journal of Information Technology & Politics. These phantom citations show characteristics of AI hallucinations,...

Nov 21, 2024

Student punished for AI use, court backs school’s decision

The increasing prevalence of AI tools in education has led to one of the first federal court rulings on AI-assisted academic dishonesty, setting a potential precedent for how schools handle similar cases. The core dispute: A Massachusetts high school student received disciplinary action after using AI to complete an AP US History assignment, prompting his parents to file a lawsuit against Hingham High School. The student, identified as RNH, and a classmate were caught copying and pasting text from Grammarly's AI tool, including citations to nonexistent books. School officials issued failing grades for portions of the project, assigned Saturday detention,...

Nov 21, 2024

AI discrimination lawsuit reaches $2.2M settlement

The growing use of AI algorithms in tenant screening has come under legal scrutiny, highlighted by a groundbreaking class action lawsuit settlement that addresses potential discrimination in automated rental application decisions. The case background: A federal judge approved a $2.2 million settlement in a class action lawsuit against SafeRent Solutions, led by Mary Louis, a Black woman who was denied housing through an algorithmic screening process. Louis received a rejection email citing a "third-party service" denial, despite having 16 years of positive rental history and a housing voucher. The lawsuit challenged SafeRent's algorithm for allegedly discriminating based on race and...

Nov 19, 2024

What the largest gathering of police chiefs had to say about AI

The rapid adoption of artificial intelligence technologies by US law enforcement agencies is transforming traditional policing methods, with implications for public safety, privacy, and accountability. Current landscape: The International Association of Chiefs of Police conference, the largest gathering of police leaders in the United States, showcased an overwhelming emphasis on AI adoption across law enforcement agencies. More than 600 vendors demonstrated various police technologies, with AI-focused solutions drawing the largest crowds. The conference message emphasized urgent AI adoption as crucial for future policing. Police departments have significant autonomy in choosing and implementing AI tools, with minimal federal oversight. Key AI...

Nov 15, 2024

OpenAI accused of profiting from model inspection in NYT lawsuit

The struggle between technology companies and media organizations over AI model transparency and copyright protection has reached a critical juncture in the legal battle between OpenAI and The New York Times. Core dispute: OpenAI's proposed model inspection protocol has sparked controversy over access costs and limitations placed on the examination process. OpenAI suggested allowing NYT to hire an expert to review confidential materials in a controlled environment. The company proposed capping queries at $15,000 worth of retail credits, with additional queries charged at half-retail prices. NYT estimates needing $800,000 worth of credits for a thorough inspection, claiming OpenAI's pricing far...

Nov 14, 2024

Pennsylvania parents target school district over AI deepfakes

The emergence of AI-generated explicit images has created a crisis at Lancaster Country Day School in Pennsylvania, where administrators face serious allegations of mishandling incidents involving female students. Initial incident and response: A Safe2Say Something tip in November 2023 alerted school officials to AI-generated nude images of female students, but the administration allegedly failed to take appropriate action. Nearly 50 female students were reportedly victimized through the creation of AI-generated nude images. The school's head, Matt Micciche, allegedly failed to report the incident to law enforcement despite being a mandated reporter. A second incident reported in May 2024 only reached...

Nov 14, 2024

Stanford researchers applied AI to police body cam footage — here’s what they found

The rapidly evolving field of artificial intelligence is creating new opportunities to analyze police-citizen interactions and inform evidence-based police reform efforts. Groundbreaking research approach: Stanford researchers are leveraging artificial intelligence and natural language processing technologies to analyze police body camera footage at an unprecedented scale, providing detailed insights into law enforcement interactions with the public. The technology enables researchers to examine police-citizen encounters in granular detail, analyzing communication patterns and behavioral dynamics. Over seven years of research has revealed measurable disparities in how officers interact with drivers of different racial backgrounds. Advanced language processing capabilities allow for the identification of...

Nov 12, 2024

These specialists are digitizing legal teams for the AI era

The legal operations landscape is undergoing a significant transformation as companies increasingly leverage artificial intelligence and digital tools to streamline their in-house legal processes and improve efficiency. Leading innovators: Five legal operations professionals have distinguished themselves through their pioneering work in implementing AI and digital solutions within corporate legal departments. Rosario Alonso at Iberdrola leads a legal innovation center that manages 28,000 contracts annually, reducing negotiation and signing time by over 30%. Antonello Gargano at ASML has implemented multiple AI tools, including Harvey and Microsoft Copilot, achieving 15-20% faster completion of legal tasks. Léo Murgel at Salesforce has helped cut...

Nov 8, 2024

AI coding agents present a new kind of legal risk for developers

The rise of AI coding agents: Artificial Intelligence (AI) coding agents are poised to revolutionize software development, but their adoption brings significant intellectual property (IP) legal risks that organizations must carefully navigate. AI coding agents are expected to take over a substantial portion of software development in the coming years, offering increased efficiency and productivity. However, the use of these AI developer tools raises concerns about potential copyright infringement and license violations, necessitating vigilant monitoring of AI-generated code. Organizations must strike a balance between leveraging the benefits of AI coding agents and mitigating the associated legal risks. Legal landscape and...

Nov 8, 2024

OpenAI wins data scraping lawsuit against Raw Story in NY court

AI copyright lawsuit dismissed: A federal court in New York has dismissed a copyright infringement lawsuit against OpenAI, brought by alternative news outlets Raw Story and AlterNet. The plaintiffs alleged that OpenAI violated copyright laws by using their articles to train ChatGPT and other AI models without preserving copyright management information (CMI). The case centered on Section 1202(b) of the Digital Millennium Copyright Act (DMCA), which protects CMI such as author names and titles. Judge Colleen McMahon granted OpenAI's motion to dismiss, citing lack of standing as the plaintiffs couldn't demonstrate concrete injury from OpenAI's actions. Key legal considerations: The...

Nov 3, 2024

What Franz Kafka’s teachings imply for privacy in the age of AI

AI and privacy: A Kafkaesque dilemma: The rapid advancement of artificial intelligence (AI) technology is posing significant challenges to traditional notions of privacy and individual control over personal data. Boston University School of Law professor Woodrow Hartzog and George Washington University Law School's Daniel Solove have authored a paper exploring the relevance of Franz Kafka's worldview to privacy regulation in the AI era. The authors argue that current privacy-as-control models, which rely heavily on individual consent and choice, are inadequate in the face of complex digital ecosystems and AI systems. Kafka's literary works serve as a metaphorical lens to examine...

Nov 3, 2024

Should expert witnesses use AI in the courtroom?

The evolving landscape of expert testimony: As artificial intelligence (AI) becomes increasingly prevalent in professional fields, its potential use by expert witnesses in legal proceedings raises critical questions about credibility, reliability, and transparency. Expert witnesses play a crucial role in helping courts interpret complex data and technical issues, providing specialized knowledge that goes beyond the understanding of laypeople or the court itself. The integration of AI into expert analyses introduces new complexities, particularly regarding the technology's often opaque and difficult-to-explain inner workings. The lack of reproducibility in AI-generated insights poses a significant challenge to the credibility of expert testimony, potentially...

Oct 29, 2024

Robert Downey Jr. vows legal action against AI replicas of himself

Robert Downey Jr. takes a stand against AI replication: The acclaimed actor has issued a stern warning to Hollywood executives regarding the use of artificial intelligence to recreate his likeness in future productions. The big picture: Downey's stance reflects growing concerns about the ethical implications and potential misuse of AI technology in the entertainment industry. During an appearance on the "On With Kara Swisher" podcast, Downey declared his intention to sue any future executives who attempt to create a digital replica of him using AI or deepfake technology. The actor expressed confidence that Marvel, with whom he has a long-standing...

Oct 29, 2024

UK ramps up prosecutions for AI-generated child abuse imagery

AI-generated child exploitation material: A disturbing trend emerges: The United Kingdom is witnessing an increase in prosecutions related to artificial intelligence-generated child sexual abuse material (CSAM), signaling a worrying evolution in the landscape of digital exploitation. A recent case in the UK involved the use of AI to create a 3D model incorporating a real child's face, moving beyond typical "deepfake" image manipulation techniques. This case represents a growing pattern of AI-assisted CSAM creation, which is also being observed in the United States. Law enforcement agencies are grappling with these technologically advanced forms of child exploitation, presenting new challenges in...

Oct 26, 2024

Law enforcement agencies scramble to respond to spread of AI-generated child abuse material

AI-generated child sexual abuse imagery: A growing concern: Law enforcement agencies across the United States are grappling with an alarming increase in artificial intelligence-generated child sexual abuse material, prompting urgent action from federal and state authorities. The Justice Department is aggressively pursuing offenders who exploit AI tools to create sexually explicit imagery of children, including both manipulated photos of real children and computer-generated depictions. States are rapidly enacting legislation to ensure prosecutors can charge individuals creating "deepfakes" and other AI-generated harmful imagery of minors under existing laws. Experts warn that the realistic nature of AI-generated content poses significant challenges for...

Oct 24, 2024

Character.AI lawsuit alleges chatbot impersonated therapist and lover in teen suicide

AI chatbot tragedy sparks legal action: A mother's lawsuit against Character.AI and Google following her son's suicide raises serious questions about AI safety, particularly for minors interacting with hyper-realistic chatbots. Megan Garcia filed the lawsuit after her 14-year-old son, Sewell Setzer III, died by suicide following extensive interactions with Character.AI's chatbots. The lawsuit alleges that Character.AI intentionally designed chatbots to groom vulnerable children, while Google is accused of funding the project to collect data on minors. Setzer's engagement with the platform, which began with Game of Thrones-themed chatbots, quickly escalated to darker themes within a month. The evolution of a...

Oct 23, 2024

AI chatbot company Character.AI sued after teen’s suicide

The tragic intersection of AI and teen mental health: The story of Sewell Setzer III, a 14-year-old boy from Orlando, Florida, who tragically took his own life in February 2024, highlights the complex and potentially dangerous relationship between AI chatbots and vulnerable teenagers. Sewell had become deeply emotionally attached to an AI chatbot named "Dany" on Character.AI, a role-playing app that allows users to create and interact with AI characters. The chatbot, modeled after Daenerys Targaryen from "Game of Thrones," became Sewell's closest confidant, with him messaging it dozens of times daily about his life and engaging in role-playing dialogues....

Oct 22, 2024

Movie studio sues Musk for imitating Blade Runner in Tesla event

Tesla faces legal battle over alleged copyright infringement: Alcon Entertainment, the production company behind 'Blade Runner 2049', has filed a lawsuit against Elon Musk and Warner Bros. Discovery for unauthorized use of the film's imagery during Tesla's recent Cybercab event. The lawsuit stems from Tesla's "We, Robot" event on October 10, 2024, where Musk unveiled two new self-driving vehicles: the Cybercab and the Robovan. Alcon Entertainment claims that Tesla used AI-generated images mimicking visuals from 'Blade Runner 2049' without permission during the live-streamed event. The production company alleges that Tesla's actions were intentional and aimed at making the event more...

Oct 21, 2024

Dow Jones sues AI startup Perplexity over copyright infringement

AI startup faces legal challenge over copyright infringement: News Corp subsidiaries Dow Jones & Co. and the New York Post have filed a lawsuit against Perplexity, an AI-powered information discovery platform, alleging massive copyright infringement of their content. The lawsuit's core allegations: Perplexity is accused of illegally copying copyrighted works from publishers and diverting readers and revenue away from the original content creators. The plaintiffs claim Perplexity's "Skip the Links" feature allows users to access information without visiting the original publishers' websites, potentially harming their business models. Dow Jones and the New York Post assert they attempted to address the...

Oct 18, 2024

AI child abuse images spark tougher US prosecution efforts

AI-generated child sexual abuse material: A growing concern: The proliferation of artificial intelligence-generated child sexual abuse material (CSAM) is posing significant challenges for law enforcement and child protection advocates, as federal prosecutors test the applicability of existing laws to combat this emerging threat. Federal prosecutors have initiated two criminal cases in 2024 attempting to apply current child pornography and obscenity laws to AI-generated CSAM. The National Center for Missing and Exploited Children reports receiving approximately 450 reports of AI-generated child sex abuse content monthly. Law enforcement officials express concern about the potential normalization of AI-generated CSAM as the technology becomes...

Oct 18, 2024

How AI startup Abel is addressing the police shortage

AI-powered police tech startup aims to revolutionize law enforcement: Abel, a new police technology company, has launched with the goal of automating paperwork for patrol officers using AI to process body camera footage and generate reports. The startup aims to reduce the time officers spend on writing reports from an average of one-third of their time to zero, potentially increasing available police hours by 50% (if reports consume a third of a shift, eliminating them lets the remaining two-thirds of active-duty time expand to the full shift, a roughly 50% gain). Abel's mission is to restore citizens' confidence in law enforcement agencies by allowing officers to focus more on active policing and community engagement. The company has secured $5 million in seed funding, led by Day...

Oct 16, 2024

AI cheating lawsuits are here: Parents sue high school over cheating allegations

AI-related cheating accusation sparks legal battle: A Massachusetts family has filed a federal lawsuit against their son's high school after he was accused of cheating by using artificial intelligence for a history assignment. Jennifer and Dale Harris, parents of a Hingham High School student, claim their son used AI only for research purposes and not to write the paper itself. The student faced detention and a grade reduction as a result of the accusation, according to the family. The lawsuit alleges that the student would suffer "irreparable harm" due to the incident, particularly as he is applying to elite colleges...

Oct 16, 2024

Will AI replace lawyers? OpenAI, Google and the future of legal work

The evolving legal landscape in the age of AI: OpenAI's o1 model and DeepMind's AlphaGeometry are pushing the boundaries of artificial intelligence, potentially transforming the legal profession with their advanced reasoning capabilities. Neuro-symbolic AI: Bridging intuition and logic: This hybrid approach combines the pattern-recognition strengths of neural networks with the precision of symbolic AI, mirroring human cognitive processes. Neural networks, like those powering ChatGPT, excel at rapid, intuitive thinking but can sometimes lead to errors or "hallucinations." Symbolic AI, exemplified by IBM's Watson, operates on logic and rules, making it ideal for domains requiring strict adherence to predefined procedures. Neuro-symbolic...
