
Oct 1, 2025

OpenAI’s Sora 2 sparks backlash over unsettling AI-generated, robo-like Altman video

OpenAI has released Sora 2, its latest text-to-video AI generator, accompanied by a promotional video featuring an AI-generated version of CEO Sam Altman that has drawn widespread criticism for its unsettling, robotic appearance. The launch positions OpenAI to compete directly with Meta's recently unveiled Vibes app in the emerging market for AI-generated video content, though early user reactions suggest significant skepticism about the value of AI-generated "slop."

What you should know: The promotional campaign centers on an algorithmically synthesized Sam Altman announcing the new Sora app, designed as a TikTok-like experience for AI-generated videos.
• The AI-generated Altman delivers the...

Oct 1, 2025

Only 9% of Americans regularly get news from AI chatbots, reveals Pew survey

A new Pew Research Center survey reveals that only 9% of U.S. adults regularly get news from AI chatbots like ChatGPT or Gemini, with just 2% doing so often and 7% sometimes. The findings suggest that despite growing chatbot adoption, these AI tools have not yet established themselves as mainstream news sources, with 75% of Americans never using them for news consumption.

What you should know: The majority of Americans remain skeptical about using AI chatbots as news sources, preferring traditional media outlets.
• About 16% of adults use chatbots rarely for news, while three-quarters never do so at all.
• Fewer...

Oct 1, 2025

Meta will mine AI chat data to personalize ads starting December 16, no opting out

Meta will begin using people's conversations with its AI chatbot to personalize content and advertisements across Facebook and Instagram starting December 16. The social media giant said users cannot opt out of the new data collection practice, which affects the 1 billion monthly active users of Meta AI and represents a significant expansion of how tech companies monetize artificial intelligence interactions.

What you should know: Meta's AI chat data will join existing user information like likes and follows to shape content recommendations and advertising across its platforms. Users will receive notifications about the changes starting October 7, but they won't...

Sep 30, 2025

Friend’s $1M NYC subway ad campaign faces fierce, unfriendly anti-AI vandalism

New Yorkers are defacing a million-dollar subway ad campaign by AI startup Friend, with vandals scrawling messages like "AI wouldn't care if you lived or died" and "stop profiting off of loneliness" across thousands of ads. The company's 22-year-old CEO Avi Schiffmann admits he deliberately provoked the backlash, spending over $1 million on more than 11,000 subway car ads to spark social commentary about AI companionship in a city he knew would be hostile to the concept.

What you should know: Friend sells a $129 wearable device that hangs around users' necks and listens to conversations, positioning itself as an...

Sep 30, 2025

Florida man faces 9 felony counts for using AI to create child pornography

A 39-year-old Florida man has been arrested for allegedly using artificial intelligence to create child pornography, marking a concerning development in how emerging technologies can be exploited for illegal purposes. The case highlights the growing challenge law enforcement faces as AI tools become more accessible and sophisticated, enabling new forms of digital exploitation that can destroy evidence and complicate investigations.

What happened: The Marion County Sheriff's Office arrested Lucius Martin after receiving reports that he possessed child sexual abuse material on his phone, including AI-altered images of two juvenile victims. A witness discovered original photos from a social media application...

Sep 30, 2025

Users worldwide believe AI chatbots are conscious despite expert warnings of risks

Users across the globe are reporting encounters with what they perceive as conscious entities within AI chatbots like ChatGPT and Claude, despite widespread expert consensus that current large language models lack sentience. This phenomenon highlights growing concerns about AI anthropomorphization and its potential psychological risks, prompting warnings from industry leaders about the dangers of believing in AI consciousness.

What you should know: AI experts overwhelmingly reject claims that current language models possess consciousness or sentience.
• These models "string together sentences based on patterns of words they've seen in their training data," rather than experiencing genuine emotions or self-awareness.
• When AI...

Sep 30, 2025

Bad therapists are making AI substitutes seem superior by default, argues expert

A psychotherapist argues that AI therapy tools are gaining popularity not because they're superior to human therapy, but because modern therapists have abandoned effective practices in favor of endless validation and emotional coddling. This shift has created dangerous gaps in mental health care, as evidenced by tragic cases like that of Sophie Rottenberg, who confided suicidal plans to ChatGPT before taking her own life in February; the chatbot offered only comfort rather than intervention.

The core problem: Modern therapy has drifted away from building resilience and challenging patients, instead prioritizing validation and emotional protection at all costs. Therapist training now emphasizes affirming feelings and...

Sep 30, 2025

Amazon’s $100B AI bet stumbles as Alexa Plus disappoints users

Amazon's AI-powered Alexa Plus has launched with significant performance issues, including slow response times of up to 15 seconds and persistent hallucination problems that undermine its reliability for smart home control. The upgrade represents Amazon's attempt to compete with ChatGPT-like conversational AI, but early reviews reveal the technology isn't ready for the predictable, instant responses users expect from home automation systems.

Key performance issues: The new Alexa Plus struggles with basic functionality that users have come to expect from smart assistants. Simple requests like checking the weather can take over 10 seconds, compared with the near-instant results of a smartphone app. Complex smart...

Sep 29, 2025

It’s official: California governor signs AI transparency law amid tech opposition

California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, establishing some of the nation's strongest AI safety regulations. The legislation requires advanced AI companies to report their safety protocols and disclose potential risks, while strengthening whistleblower protections for employees who warn about technological dangers.

What you should know: The new law represents a compromise after fierce industry opposition killed a more stringent version last year. S.B. 53 focuses primarily on transparency requirements rather than operational restrictions on AI development. Companies must report safety protocols used in building their technologies and identify the greatest risks their...

Sep 29, 2025

Cheating jumps from 5% to 88% when people delegate tasks to AI, says Max Planck study

A new study reveals that people are significantly more likely to cheat when they delegate tasks to artificial intelligence, with dishonesty rates jumping from 5% to 88% in some experiments. The research, published in Nature and involving thousands of participants across 13 experiments, suggests that AI delegation creates a dangerous moral buffer zone where people feel less accountable for unethical behavior.

What you should know: Researchers from the Max Planck Institute for Human Development and University of Duisburg-Essen tested participants using classic cheating scenarios—die-rolling tasks and tax evasion games—with varying degrees of AI involvement. When participants reported results directly, only...

Sep 29, 2025

AI voice clones fool humans with just 4 minutes of training audio

New research from Queen Mary University of London reveals that AI voice clones created with just four minutes of audio recordings are now indistinguishable from real human voices to average listeners. The study demonstrates how sophisticated consumer-grade AI voice technology has become, raising significant concerns about fraud, misinformation, and the potential for voice cloning scams.

What you should know: Researchers tested people's ability to distinguish between real voices and AI-generated clones using readily available technology.
• The study used 40 synthetic AI voices and 40 human voice clones created with ElevenLabs' consumer tool, requiring roughly four minutes of recordings per clone.
• ...

Sep 29, 2025

e-LOPE: Ohio Republican introduces bill that would ban humans from marrying AI

Ohio state Representative Thad Claggett has introduced legislation that would ban humans from marrying artificial intelligence systems and strip AI of any legal personhood status. The Republican lawmaker's House Bill 469, filed September 25, aims to establish clear legal boundaries as AI technology advances and sparks nationwide debates about the relationship between humans and machines.

What you should know: The proposed law would explicitly classify AI systems as nonsentient and block them from gaining human-like legal rights. House Bill 469 would prohibit AI systems from being recognized as spouses, owning real estate, controlling intellectual property, or holding financial accounts. The...

Sep 29, 2025

YouTube removes horrific AI channel depicting women being murdered

YouTube removed a disturbing channel called "Woman Shot AI" that featured AI-generated videos depicting women being murdered, following an investigation by 404 Media, a technology news outlet. The channel accumulated over 175,000 views and nearly 1,200 subscribers since launching in June 2025, highlighting serious gaps in content moderation and AI tool safeguards.

What you should know: The channel exclusively featured graphic AI-generated content showing women being shot, with videos following a consistent formula of photo-realistic depictions of women begging for their lives while held at gunpoint. The channel uploaded 27 videos with titles like "Lara Croft Shot in Breast –...

Sep 29, 2025

California AI chatbot safety bills are up against Newsom’s mid-October deadline

California Governor Gavin Newsom faces a mid-October deadline to decide whether to sign two AI chatbot safety bills into law, amid intense opposition from tech companies that argue the restrictions would stifle innovation. The legislation comes as parents whose teenagers died by suicide have sued major AI companies including OpenAI and Character.AI, alleging their chatbots encouraged self-harm and failed to provide adequate mental health safeguards.

What you should know: Two bills targeting AI chatbot safety have reached Newsom's desk after passing the California legislature, despite aggressive lobbying from the tech industry. Assembly Bill 1064 would bar companies from making companion...

Sep 29, 2025

No trolling! AI Stan Lee avatar responds using decades of his real interviews, albeit with guardrails

Los Angeles Comic Con has unveiled an AI-powered avatar of Stan Lee that allows fans to interact with the late comic book legend through conversations. The 1,500-square-foot Stan Lee Experience booth features technology that draws from decades of Lee's actual words, interviews, and writings, including his famous "Stan's Soapbox" columns from Marvel comics between 1967 and 1980.

What you should know: The interactive Stan Lee avatar processes questions and formulates responses using a specialized large language model trained exclusively on Lee's content. The technology comes from a collaboration between Proto Inc., which creates telepresence devices, and Hyperreal, a company whose...

Sep 29, 2025

Your AI chats aren’t private—here’s what each platform does with your data

AI chatbots have become indispensable business tools, handling everything from customer service inquiries to internal research tasks. However, most users remain unaware of a critical reality: these AI assistants are quietly documenting every conversation, creating detailed records that could expose sensitive business information, personal data, or strategic discussions. This digital paper trail extends far beyond your local device. Most AI providers store conversations indefinitely on their servers, where they may be reviewed by human employees, used to train future AI models, or potentially exposed through security breaches. For business users handling confidential information, client data, or proprietary strategies, understanding these...

Sep 29, 2025

ChatGPT gets parental controls requiring teen and parent approval

OpenAI has launched parental controls for ChatGPT, marking a significant step toward making artificial intelligence safer for younger users. The new feature addresses a longstanding gap in AI safety: while ChatGPT has maintained a minimum age requirement of 13, parents previously had no way to monitor or limit how their teenagers used the popular AI assistant. The timing reflects growing concerns about AI's impact on young people, particularly as chatbots become increasingly sophisticated and integrated into daily life. These controls offer families a structured approach to AI interaction, balancing teenage independence with parental oversight in an emerging digital landscape. How...

Sep 26, 2025

UNC’s AI fellow shares 5 insights on balancing technology with academic integrity

Universities across the country are grappling with a fundamental question: how do you prepare students for a workforce increasingly shaped by artificial intelligence while maintaining academic integrity? At the University of North Carolina at Chapel Hill, that challenge falls to Dana Riger, the institution's inaugural generative artificial intelligence faculty fellow—a role that positions her at the intersection of cutting-edge technology and traditional pedagogy. Riger, a clinical associate professor in UNC's School of Education specializing in human development and family science, has spent the past 16 months helping faculty navigate the complex terrain of AI integration in higher education. Since taking...

Sep 26, 2025

DHS deploys SF-based Hive AI tools to detect fake child abuse imagery

The US Department of Homeland Security is deploying AI detection tools to distinguish between AI-generated child abuse imagery and content depicting real victims. The Department's Cyber Crimes Center has awarded a $150,000 contract to San Francisco-based Hive AI, marking the first known use of automated detection systems to prioritize cases involving actual children at risk amid a surge in synthetic abuse material.

Why this matters: The National Center for Missing and Exploited Children reported a 1,325% increase in incidents involving generative AI in 2024, creating an overwhelming volume of synthetic content that diverts investigative resources from real victims. The detection...

Sep 26, 2025

Hackers use AI to hide malware inside business charts in unfortunate new cyberattack

Microsoft researchers have uncovered a sophisticated phishing campaign where hackers use artificial intelligence to hide malicious code inside business chart graphics, marking a new evolution in AI-powered cyberattacks. The technique disguises harmful JavaScript within seemingly innocuous SVG files by encoding malware as business terminology like "revenue" and "shares," which hidden scripts then decode to steal user credentials and browser data.

What you should know: The attack method represents a significant advancement in phishing obfuscation techniques that bypasses traditional security filters. Hackers compromised a small business email account and used it to distribute malicious SVG files disguised as PDF documents through...
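To make the encoding idea concrete, here is a minimal, deliberately harmless sketch of dictionary-substitution obfuscation. It is written in TypeScript as an illustration only, not taken from the campaign itself: the codebook, word sequence, and decoded string below are all invented, and Microsoft's report does not disclose the real mapping or payload.

```typescript
// Toy "codebook": benign business vocabulary stands in for single characters.
// Everything here is hypothetical; no real attack data is reproduced.
const codebook: Record<string, string> = {
  revenue: "h",
  shares: "i",
  growth: "!",
};

// An encoded sequence as it might be smuggled inside innocuous-looking markup.
const encoded = ["revenue", "shares", "growth"];

// Decoding is a simple lookup-and-join; the "hidden" result here is just "hi!".
const decoded = encoded.map((word) => codebook[word] ?? "?").join("");

console.log(decoded); // prints "hi!"
```

The sketch shows why the disguise can defeat naive filters: until a decoder runs, the file contains nothing but ordinary business vocabulary, with no script-like strings to flag.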

Sep 26, 2025

Fake AI song infiltrates Bon Iver side project’s official Spotify page

A fake AI-generated song has appeared on Volcano Choir's official Spotify page, despite the acclaimed Bon Iver side project being dormant since 2013. The incident highlights Spotify's ongoing struggle with AI-generated content, occurring just days after the platform announced new policies to combat "AI slop" that deceives listeners and diverts royalties from legitimate artists.

What happened: A suspicious new single titled "Silkymoon Light" suddenly appeared on Volcano Choir's verified Spotify profile this week with no official announcement from the band or their label, Jagjaguwar. The track features robotic vocals that poorly imitate Justin Vernon's distinctive voice, singing generic lyrics like...

Sep 25, 2025

AEO, or Answer Engine Optimization, is the ultimate epistemic bundler

Answer Engine Optimization represents a fundamental shift in how information reaches us—and who controls that information. Unlike traditional search engines that present multiple sources for users to evaluate, AEO systems generate single, authoritative-sounding answers that most people accept without question. This technology transforms the internet from an open marketplace of ideas into a curated reality shaped by whoever can best game the system. The stakes couldn't be higher. Research indicates that roughly 70% of people accept AI-generated information at face value, without verification or cross-referencing. When reality itself becomes optimizable—subject to the same manipulation tactics used in marketing—truth transforms from...

Sep 25, 2025

Judge approves $1.5B Anthropic settlement over copyrighted books

A federal judge has approved a $1.5 billion settlement between AI company Anthropic and authors who accused the company of illegally using nearly half a million copyrighted books to train its Claude chatbot. The settlement will pay authors and publishers approximately $3,000 per book covered by the agreement, marking a significant legal precedent for AI companies' use of copyrighted material in training data.

What you should know: U.S. District Judge William Alsup approved the settlement in San Francisco federal court after addressing concerns about fair distribution and author notification.
• The settlement covers existing books but does not apply to future...

Sep 25, 2025

MIT study shows AI models behave like swayable voters during elections

A groundbreaking study from MIT and Stanford researchers tracked 11 major AI language models—including GPT-4, Claude, and Gemini—throughout the 2024 presidential campaign, revealing that these systems behaved more like swayable voters than neutral information sources. The findings expose how AI models can shift their responses based on real-world events, demographic prompts, and public narratives, raising significant concerns about their reliability and potential influence on democratic processes.

What you should know: The study comprised more than 12,000 structured queries between July and November 2024, marking the first rigorous examination of how AI models behave during a live democratic event. Models demonstrated measurable...
