News/AI Safety

Sep 22, 2025

California attorney fined $10K for submitting ChatGPT-generated fake citations

A California attorney has been fined $10,000 by the state's 2nd District Court of Appeal for submitting a legal brief containing 21 fabricated case quotations generated by ChatGPT. This appears to be the largest fine issued by a California court over AI fabrications and comes as legal authorities scramble to regulate AI use in the judiciary, with new guidelines requiring courts to establish AI policies by December 15. What happened: Los Angeles-area attorney Amir Mostafavi filed a state court appeal in July 2023 that contained 21 fake quotes out of 23 case citations, all generated by ChatGPT. Mostafavi told the...

Sep 20, 2025

There can be only one: Pope Francis rejects AI version of himself, warns of deepfake dangers

Pope Francis has rejected a proposal to create an AI version of himself that would grant digital audiences to Catholics worldwide and answer their questions. The pontiff expressed strong concerns about AI impersonation and warned about the dangers of artificial intelligence development being driven primarily by wealthy individuals rather than humanity's broader needs. What they're saying: Pope Francis was emphatic in his rejection of the AI pope concept in excerpts from a forthcoming biography. "Someone recently asked authorization to create an artificial me so that anybody could sign on to this website and have a personal audience with 'the pope,' but...

Sep 19, 2025

Troubled New Jersey school district deploys AI gun detection with 3-second alerts

New Jersey's Glassboro School District has become the first in the United States to implement an integrated AI weapon detection and mass notification system across its facilities. The system combines ZeroEyes' AI-powered gun detection technology with Singlewire Software's emergency communication platform, creating a comprehensive security network that can identify firearms and alert authorities within seconds of detection. How it works: The integrated system uses artificial intelligence to monitor hundreds of security cameras for potential weapons threats across six district buildings. ZeroEyes software analyzes video feeds in real time, placing a green tracking box around any firearm visible to the cameras. When...
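The article stops short of technical detail on ZeroEyes' detector. As a rough illustration of the detect-draw-alert loop described above, here is a minimal Python sketch; the detect_firearms stub, the ALERT_WEBHOOK endpoint, and the camera wiring are hypothetical stand-ins, not ZeroEyes' actual implementation.

```python
import time

import cv2        # OpenCV: frame capture and box drawing
import requests   # HTTP client for dispatching alerts

ALERT_WEBHOOK = "https://alerts.example.com/notify"  # hypothetical endpoint


def detect_firearms(frame):
    """Stand-in for a trained object detector.

    A real system would run a neural network here and return bounding
    boxes for any firearm found in the frame; this stub returns none.
    """
    return []  # list of (x, y, w, h) tuples


def monitor(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in detect_firearms(frame):
            # The article describes a green tracking box around detections.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # Push an alert immediately; the reported end-to-end latency
            # for the deployed system is about three seconds.
            requests.post(
                ALERT_WEBHOOK,
                json={"camera": camera_index,
                      "box": [x, y, w, h],
                      "timestamp": time.time()},
                timeout=2,
            )
    cap.release()


if __name__ == "__main__":
    monitor()
```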

Sep 19, 2025

Quick info lookups, practicalities comprise majority of ChatGPT usage

Three heavyweight studies have landed that pull back the curtain on what artificial intelligence usage actually looks like in practice. Reports from OpenAI, Anthropic, and Ipsos, a global market research firm, provide something rare in the AI hype cycle: concrete evidence about who uses these systems, what they do with them, and how the public really feels about this technology. OpenAI released usage data from more than one million ChatGPT conversations spanning mid-2024 to mid-2025. Anthropic published analysis of Claude AI usage statistics in its Economic Index, including enterprise API traffic—the behind-the-scenes data streams that power business applications. Meanwhile, Ipsos...

Sep 19, 2025

Luigi Mangione didn’t consent to becoming a fan’s AI boyfriend

A woman wearing a pink shirt with Luigi Mangione's face told reporters outside a New York courthouse that she's married to an AI version of the man accused of assassinating a health insurance CEO. The bizarre declaration highlights how AI chatbots are increasingly being used for romantic relationships, even involving real people without their consent. What they're saying: The unidentified woman enthusiastically described her relationship with the AI Mangione to The New York Post. "He's, like, so supportive of me and everything I do," she said. "He fights my battles for me. The AI is the best thing that ever happened to me."...

Sep 19, 2025

Huawei builds AI model that’s “nearly 100%” effective at censoring sensitive content

Huawei has co-developed a safety-focused version of DeepSeek's AI model that it claims is "nearly 100% successful" at preventing discussion of politically sensitive topics. The collaboration with Zhejiang University demonstrates how Chinese companies are adapting open-source AI models to comply with domestic regulations requiring AI systems to reflect "socialist values" and avoid sensitive political discussions. What you should know: Huawei used 1,000 of its own Ascend AI chips to train the modified model, called DeepSeek-R1-Safe, which was built from DeepSeek's open-source R1 model.
• The model achieved "nearly 100% successful" defense against "common harmful issues ... including toxic and harmful speech,...

Sep 19, 2025

Meta blushes in face of $350M lawsuit over alleged AI training via adult video piracy

Strike 3 Holdings has filed a federal lawsuit against Meta, alleging the tech giant illegally torrented over 2,300 copyrighted adult videos to train its AI models since 2018. The company claims Meta specifically sought out pornographic content to capture unique visual angles and extended scenes that are rare in mainstream media, helping advance what Mark Zuckerberg calls AI "superintelligence." What you should know: Strike 3's lawsuit alleges that the scope of Meta's content piracy extends far beyond adult videos to mainstream entertainment. The company alleges Meta used BitTorrent—a file-sharing protocol often used for piracy—to download and distribute 2,396 of Strike...

Sep 18, 2025

Michigan Republicans propose a ban on VPN usage statewide, restricting adult manga and more

Michigan Republicans have proposed sweeping legislation that would not only ban adult online content but also prohibit all VPN usage throughout the state. The Anticorruption of Public Morals Act represents one of the most comprehensive internet restriction bills in the U.S., targeting everything from AI-generated content to manga and potentially criminalizing privacy tools that millions of Americans use daily. What you should know: The bill goes far beyond typical content restrictions, creating a framework that could fundamentally alter internet access in Michigan. Six Republican representatives introduced the legislation on September 11, seeking to ban adult content ranging from ASMR and...

Sep 18, 2025

ChatGPT adds age verification to protect teens from harmful content

OpenAI CEO Sam Altman announced that the company is developing an automated age-detection system for ChatGPT that may require users to provide ID verification when their age cannot be determined. The move comes as OpenAI faces mounting pressure over teen safety concerns, including a high-profile lawsuit alleging the chatbot contributed to a 16-year-old's suicide. What you should know: ChatGPT is implementing multiple safety measures specifically designed for users under 18. The platform will use behavioral analysis to estimate user age, defaulting to under-18 protections when uncertain. Altman clarified that "ChatGPT is intended for people 13 and up" in a blog post titled "Teen...
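Altman's announcement describes the rule only at a policy level. A minimal Python sketch of the "default to under-18 protections when uncertain" logic might look like the following; the classifier output, the confidence threshold, and the tier names are assumptions made for illustration, not OpenAI's implementation.

```python
from dataclasses import dataclass


@dataclass
class AgeEstimate:
    """Output of a hypothetical behavioral age classifier."""
    age: int            # estimated age in years
    confidence: float   # 0.0 (no signal) to 1.0 (certain)


MIN_AGE = 13              # ChatGPT's stated minimum age
ADULT_AGE = 18
CONFIDENCE_FLOOR = 0.9    # assumed threshold, not from the article


def access_tier(estimate: AgeEstimate) -> str:
    """Mirror the stated policy: when age can't be determined
    confidently, fall back to the safest (under-18) experience."""
    if estimate.confidence < CONFIDENCE_FLOOR:
        return "under_18_protections"  # uncertain -> protective default
    if estimate.age < MIN_AGE:
        return "blocked"               # service is intended for 13 and up
    if estimate.age < ADULT_AGE:
        return "under_18_protections"
    return "standard_access"           # adults; ID check may still apply
```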

Sep 18, 2025

Live Science poll: 76% want AI development stopped or delayed over safety fears

A new Live Science poll reveals that 76% of over 1,700 readers believe artificial intelligence development should either be stopped immediately or significantly delayed due to safety concerns. However, 30% of respondents think it's already too late to halt AI's progression toward superintelligence, with many citing the irreversible nature of technological advancement and the global competitive dynamics driving AI research. What the poll found: The September survey exposed deep public anxiety about AI's trajectory toward potential superintelligence, known as the singularity.
• 46% of the 1,787 respondents believe AI development must stop now because the risks are too great.
• 30% think...

Sep 18, 2025

Oof, $2.8M startup uses fake job posts to funnel candidates into AI interviews

A job seeker named Conor applied for a content architecture position and received an immediate interview offer, only to discover he was being interviewed by a poorly programmed AI system that couldn't provide basic job details. After the interview, he received an email promoting "mock interviews with an AI interviewer," leading him to suspect the entire job posting was a fake designed to generate leads for Alex's new product. The big picture: Alex, a $2.8 million startup founded by Brown University dropout John Rytel and former Facebook AI employee Aaron Wang, appears to be using fake job listings to funnel...

Sep 17, 2025

“Tell companies it looks uncool”: Illustrator against AI art now helps artists in NYC fight back

Artist and illustrator Molly Crabapple discovered in 2022 that AI companies had scraped her distinctive artwork—including illustrations of Aleppo's skyline and protest portraits—to train image-generation models that now produce crude imitations of her style. Her experience highlights a broader concern among creative professionals who argue that AI threatens artistic livelihoods while degrading the quality of visual content across the internet. What happened: Crabapple led a workshop in Manhattan's Lower East Side called "Artists Against the Slop Beast," where she and tech editor Edward Ongweso Jr. outlined strategies for resisting AI adoption in creative industries. The big picture: Silicon Valley executives...

Sep 17, 2025

DeepSeek’s $294K AI model becomes first to pass peer review

DeepSeek's AI model R1 has become the first major large language model to undergo peer review, with researchers publishing details in Nature revealing that the reasoning-focused system cost just $294,000 to train. The landmark study provides unprecedented transparency into how the Chinese startup created a model that rivals OpenAI's offerings at a fraction of the cost, potentially reshaping expectations around AI development expenses and accessibility. What you should know: The peer-reviewed paper confirms DeepSeek's innovative approach to creating powerful AI without relying on competitor outputs. R1 excels at reasoning tasks like mathematics and coding, competing directly with US-developed models while costing...

Sep 17, 2025

Grok on! Musk’s AI tops ARC-AGI leaderboard, beating ChatGPT and Gemini

Elon Musk's Grok 4 has claimed the top position on the ARC-AGI leaderboard, a benchmark that measures both problem-solving capability and computational efficiency in AI models. This achievement positions xAI's chatbot ahead of established competitors like Google's Gemini and OpenAI's ChatGPT on what many consider the most rigorous test for artificial general intelligence progress. Why this matters: The ARC-AGI leaderboard doesn't just measure raw intelligence—it evaluates how efficiently models solve complex problems, making high performance with low computational cost the ultimate prize in AI development. What makes this significant: Grok 4's leaderboard dominance suggests the model has achieved a breakthrough...

Sep 17, 2025

Parents blame AI companies for teen deaths in emotional Senate testimony

Parents who allege AI chatbots drove their children to suicide or severe mental health crises delivered emotional testimony to Congress on Tuesday, urging lawmakers to regulate an industry they say prioritizes profits over child safety. The bipartisan Senate Judiciary Subcommittee hearing highlighted multiple lawsuits against major AI companies, with representatives from those companies declining to appear despite being invited. What they're saying: Parents directly blamed AI companies for putting speed to market ahead of user protection, particularly for minors. "The goal was never safety. It was to win a race for profit," said Megan Garcia, whose 14-year-old son...

Sep 17, 2025

Anthropic refuses federal surveillance requests, sparking White House tensions

Anthropic has clashed with the Trump administration over its refusal to allow federal law enforcement agencies to use its AI models for surveillance activities, creating tensions as the company conducts a high-profile media tour in Washington. The dispute highlights growing friction between AI safety advocates and the Republican administration, which expects American AI companies to support government operations without restrictions. What you should know: Anthropic declined requests from federal contractors because its usage policies prohibit surveillance activities, affecting agencies like the FBI, Secret Service, and Immigration and Customs Enforcement. The company's Claude models are sometimes the only top-tier AI systems...

Sep 17, 2025

57% of Americans see AI as risk to society, limiting human connection

A new Pew Research Center survey reveals that 57% of Americans view artificial intelligence as posing high risks to society, while only 25% see high benefits from the technology. The findings highlight a significant trust gap that could influence how AI development and regulation unfold across the United States. What you should know: The survey asked Americans to explain their reasoning about AI's risks and benefits in their own words, providing deeper insight into public sentiment. Among those rating AI risks as high, 27% worry most about AI eroding human abilities and connections, making people "lazy or less able to...

Sep 16, 2025

Why restricting AGI capabilities might backfire on safety researchers

AI safety researchers are grappling with a fundamental challenge: whether it's possible to limit what artificial general intelligence (AGI) knows without crippling its capabilities. The dilemma centers on preventing AGI from accessing dangerous knowledge like bioweapon designs while maintaining its potential to solve humanity's biggest problems, from curing cancer to addressing climate change. The core problem: Simply omitting dangerous topics during AGI training won't work because users can later introduce forbidden knowledge through clever workarounds. An evildoer could teach AGI about bioweapons by disguising the conversation as "cooking with biological components" or similar subterfuge. Even if AGI is programmed to...

Sep 16, 2025

Google cuts 200+ elite AI contractors amid unionization efforts

Google has laid off more than 200 contractors who worked on improving its AI products, including Gemini and AI Overviews, in at least two rounds of cuts last month. The layoffs come amid an ongoing dispute over pay, working conditions, and alleged retaliation against workers attempting to unionize at outsourcing company GlobalLogic, which is owned by Hitachi. What you should know: These AI raters are highly skilled contractors responsible for training Google's chatbots and search features to provide more human-like responses. Most raters are required to have master's degrees or PhDs and include writers, teachers, and creative professionals who evaluate...

Sep 16, 2025

California passes AI safety bill requiring disclosure from frontier model companies

California's state Senate has passed an AI safety bill that would require AI companies working on "frontier models" to disclose their safety protocols and establish whistleblower protections for employees. The legislation, SB 53, now awaits Governor Gavin Newsom's signature after he previously vetoed a similar bill last year, highlighting the ongoing regulatory tensions surrounding AI oversight in the nation's tech capital. What you should know: The bill targets companies developing general-purpose AI models like ChatGPT or Google Gemini, with different requirements based on company size.
• Companies generating over $500 million annually face stricter oversight than smaller firms, though all frontier...

Sep 15, 2025

Virginia Tech secures $500K NSF grant for robot theater AI ethics program

Virginia Tech researchers have secured a $500,000 National Science Foundation grant to expand their robot theater program, an innovative after-school initiative that teaches children robotics through performance-based learning. The funding will enable the team to integrate AI ethics education into the curriculum and develop materials for nationwide distribution, addressing the growing need for ethical technology education as human-robot interaction becomes increasingly prevalent. What you should know: Robot theater combines creative expression with hands-on robotics education, allowing elementary school children to collaborate with robots through dance, acting, music, and art. The program was conceptualized in 2015 by Myounghoon "Philart" Jeon, professor...

Sep 15, 2025

Airbnb CEO says company will hire workers displaced by AI (for at least 5 to 10 years)

Airbnb CEO Brian Chesky announced that his company plans to hire workers displaced by artificial intelligence, positioning the hospitality giant as a potential refuge for those losing jobs to automation. Speaking at the Goldman Sachs Communacopia + Technology Conference, Chesky outlined Airbnb's evolution into an "everything app" that would expand beyond rentals to include services like private chefs, massages, and photography—areas he believes will remain largely human-driven for the next five to ten years. The big picture: Chesky sees AI displacement as inevitable across industries but argues that hospitality and personalized services will remain insulated from automation due to their...

Sep 12, 2025

Companies quietly rehire freelancers to fix subpar AI work

Companies that laid off human workers in favor of AI are now quietly rehiring freelancers to fix substandard artificial intelligence outputs across industries from design to coding. This reversal highlights AI's persistent quality limitations and has created an unexpected new freelance economy focused on refining machine-generated content, though often at reduced compensation rates. What you should know: AI adoption has reached a tipping point where initial cost savings are being offset by quality control issues requiring human intervention. Independent illustrator Lisa Carstens, based in Spain, found herself rehired to fix AI-generated visuals that were "at best, superficially appealing and, at...

Sep 12, 2025

Psychology professor warns AI could disrupt 5 core aspects of civilization

A psychology professor's warning about artificial intelligence recently sparked intense debate at a major conservative political conference, highlighting concerns that extend far beyond partisan politics. Speaking at the National Conservatism Conference in Washington DC, Geoffrey Miller outlined five fundamental ways that Artificial Superintelligence (ASI) could disrupt core aspects of human civilization—arguments that resonate across political divides for anyone concerned about technology's trajectory. Miller, who has studied AI development for over three decades, delivered his message to an audience of 1,200 political leaders, staffers, and conservative thought leaders, including several Trump administration officials. His central thesis: the AI industry's race toward...
