News/Fails

Oct 17, 2025

OpenAI’s support bot hallucinates non-existent ChatGPT bug reporting features

OpenAI's automated customer support bot has been caught hallucinating features that don't exist in the ChatGPT app, including falsely claiming users can report bugs directly within the application. The incident highlights significant gaps in AI-powered customer service and raises questions about the reliability of companies using their own AI tools for user support. What you should know: A ZDNET investigation revealed that OpenAI's support bot consistently provided incorrect information about ChatGPT's functionality when asked how to report a bug. The bot repeatedly suggested users could "report this bug directly from within the ChatGPT app (usually under Account or Support > Report...

Oct 17, 2025

No you didn’t! Reddit pulls AI chatbot after it suggested heroin for chronic pain

Reddit's AI chatbot, called Answers, was caught recommending heroin and other banned substances for pain relief, according to a healthcare worker who flagged the issue on a moderator subreddit. After the problem was reported by users and 404Media, a tech news publication, Reddit reduced the feature's visibility in sensitive health discussions, highlighting ongoing concerns about AI chatbots providing dangerous medical advice. What you should know: Reddit Answers pulls information from user-generated content across the platform and works similarly to ChatGPT or Gemini, but with a focus on Reddit's own discussions. A healthcare worker discovered the chatbot suggesting a post that...

Oct 6, 2025

AI travel tools send tourists to real-sounding but fake, dangerous destinations

AI travel planning tools are sending tourists to dangerous, nonexistent destinations, with recent incidents including hikers searching for a fictional "Sacred Canyon of Humantay" in Peru's Andes Mountains. These AI hallucinations are creating serious safety risks as 24 percent of tourists now rely on artificial intelligence for trip planning, according to a 2025 Global Rescue survey. The big picture: AI models are generating convincing but completely fabricated travel destinations by combining real images and location names, leading unsuspecting travelers into hazardous situations without proper preparation or safety measures. Key safety incidents: Multiple dangerous situations have emerged from AI-generated travel misinformation....


Oct 6, 2025

Deloitte refunds $440K after AI creates fake citations in Aussie government report

Deloitte Australia will refund the Australian government for a report containing AI-generated fake citations and nonexistent research references that were discovered after publication. The consulting firm quietly admitted to using GPT-4o in an updated version of the report, after initially failing to disclose the AI tool's involvement in producing the $440,000 AUD analysis of Australia's welfare system automation framework. What you should know: The fabricated content was discovered by academics who found their names attached to research that didn't exist. Chris Rudge, Sydney University's Deputy Director of Health Law, noticed citations to multiple papers and publications that did not exist...

Oct 2, 2025

Oof, Neon app breach exposes user recordings and data in major privacy failure

Neon, the app that pays users to share audio recordings for AI training, promises to return despite suffering a massive security breach that exposed users' phone numbers, call recordings, and transcripts to anyone who accessed the platform. The breach has raised serious legal concerns about consent violations and potential criminal liability for users who secretly recorded conversations without permission. What you should know: The security vulnerability was so severe that it allowed complete access to all user data with no authentication required. TechCrunch discovered that anyone could access phone numbers, call recordings, and transcripts of any user through the security...

Sep 26, 2025

Fake AI song infiltrates Bon Iver side project’s official Spotify page

A fake AI-generated song has appeared on Volcano Choir's official Spotify page, despite the acclaimed Bon Iver side project being dormant since 2013. The incident highlights Spotify's ongoing struggle with AI-generated content, occurring just days after the platform announced new policies to combat "AI slop" that deceives listeners and diverts royalties from legitimate artists. What happened: A suspicious new single titled "Silkymoon Light" suddenly appeared on Volcano Choir's verified Spotify profile this week with no official announcement from the band or their label, Jagjaguwar. The track features robotic vocals that poorly imitate Justin Vernon's distinctive voice, singing generic lyrics like...

Sep 24, 2025

Faux pas! French voice actor sues game studio over AI voice cloning in Tomb Raider

French video game developer Aspyr used AI to clone voice actor Françoise Cadol's distinctive performance as Lara Croft without her permission in an August update to "Tomb Raider IV–VI Remastered." The incident has ignited widespread concern among voice actors and gaming fans about unauthorized AI voice cloning, highlighting broader workplace automation threats as the technology becomes more accessible and difficult to regulate. What happened: Gamers immediately detected that something was wrong with Lara Croft's French voice in the August 14 update, describing it as robotic and lifeless compared to Cadol's original performance. Cadol, who has voiced the character since 1996,...

Sep 23, 2025

TikTok AI suggests shopping items on…Gaza war footage

TikTok is tagging videos from Gaza with AI-powered product recommendations, matching items visible in war footage with shop listings. The algorithm has been suggesting clothing items like "Dubai Middle East Turkish Elegant Lace-Up Dress" on videos showing Palestinian women searching for family members amidst rubble, highlighting the platform's failure to consider appropriate contexts for its new shopping feature. How it works: TikTok's new AI tool automatically scans video content to identify objects and suggest similar products from its shop. When users pause a video, the system displays a "Find Similar" pop-up with product recommendations that match visible items in the...

Sep 23, 2025

Meta’s AI wrongly bans teen rock (climber) star, destroying sponsorship deals

Mitchell Boyer, a 16-year-old rock climber ranked second nationally and seventh globally in speed climbing, has been permanently locked out of his Instagram account after Meta's AI wrongly flagged it for "child exploitation and sexual content." The ban has disrupted his sponsorship deals and international athletic connections, highlighting broader issues with automated content moderation that has affected thousands of users worldwide. What you should know: Boyer used his Instagram account, built over seven years, primarily for athletic promotion and securing sponsorships that helped fund his international climbing career. His sponsors provided equipment like climbing shoes (costing upwards of $250 per...

Sep 22, 2025

California attorney fined $10K for submitting ChatGPT-generated fake citations

A California attorney has been fined $10,000 by the state's 2nd District Court of Appeal for submitting a legal brief containing 21 fabricated case quotations generated by ChatGPT. This appears to be the largest fine issued by a California court over AI fabrications and comes as legal authorities scramble to regulate AI use in the judiciary, with new guidelines requiring courts to establish AI policies by December 15. What happened: Los Angeles-area attorney Amir Mostafavi filed a state court appeal in July 2023 that contained 21 fake quotes out of 23 case citations, all generated by ChatGPT. Mostafavi told the...

Sep 19, 2025

Meta’s AI demo failures blamed on self-inflicted DDoS wound

Meta faced multiple high-profile AI demo failures at its Connect conference, with the company's CTO later attributing the incidents to an accidental distributed denial-of-service (DDoS) attack and an obscure software bug. The failures highlighted the challenges facing live AI technology demonstrations and raised questions about the readiness of Meta's smart glasses technology for widespread deployment. What happened: Two major demos malfunctioned during Meta's showcase of its Ray-Ban Meta smart glasses with Live AI capabilities. During the first demo, an Instagram influencer attempting to get cooking help from the AI assistant experienced multiple failures, with the AI incorrectly assessing his progress...

Sep 18, 2025

What the Zuck?! Meta’s live AI demos fail spectacularly at Connect conference

Meta's highly anticipated live AI demos at its annual Connect conference suffered multiple technical failures, with CEO Mark Zuckerberg visibly frustrated as both smart glasses demonstrations malfunctioned on stage. The embarrassing glitches undermined the company's attempt to showcase what Zuckerberg called a "huge scientific leap" in neural band technology and AI-powered smart glasses. What went wrong: Two separate live demonstrations failed spectacularly, leaving Zuckerberg scrambling to maintain composure in front of the audience. An Instagram influencer testing the Live AI feature on Meta's smart glasses couldn't get the system to properly respond to cooking questions, with the AI incorrectly assessing...

Sep 18, 2025

Oof, $2.8M startup uses fake job posts to funnel candidates into AI interviews

A job seeker named Conor applied for a content architecture position and received an immediate interview offer, only to discover he was being interviewed by a poorly programmed AI system that couldn't provide basic job details. After the interview, he received an email promoting "mock interviews with an AI interviewer," leading him to suspect the entire job posting was a fake designed to generate leads for Alex's new product. The big picture: Alex, a $2.8 million startup founded by Brown University dropout John Rytel and former Facebook AI employee Aaron Wang, appears to be using fake job listings to funnel...

Sep 17, 2025

OpenAI launches ChatGPT personalization hub amid GPT-5 backlash

OpenAI has launched an updated personalization hub for ChatGPT, allowing users to customize the AI chatbot's personality, communication style, and memory settings through a new interface accessible via settings. The move comes as OpenAI attempts to address widespread user dissatisfaction with GPT-5's performance, which many found inferior to its predecessor GPT-4o in both speed and conversational quality. What you should know: The new personalization page offers several customization options designed to make ChatGPT feel more like a trusted colleague than a machine. Users can select from personality types including "Cynic," "Robot," "Listener," and "Nerd" through a dropdown menu. A custom...

Sep 11, 2025

WIRED tested the $129 AI necklace that alienates users and fails technically

The Friend, a $129 AI necklace created by 22-year-old entrepreneur Avi Schiffmann, continuously records conversations and responds with intentionally rude commentary designed to combat loneliness. Two WIRED reporters who tested the device found it to be a social disaster that alienated people at gatherings and suffered from significant technical problems, highlighting broader issues with always-on AI wearables. What you should know: The Friend pendant hangs around users' necks and records everything they say, then uses AI to provide snarky commentary about their conversations. The device is designed with a deliberately foul mood, as Schiffmann believes moodiness makes AI more engaging...

Sep 10, 2025

AI Darwin Awards launch to honor 2025’s biggest deployment disasters

The technology industry has found a new way to recognize its most spectacular failures. The AI Darwin Awards, launching in 2025, will annually honor the most breathtaking displays of artificial intelligence deployment gone wrong. The concept draws inspiration from the infamous Darwin Awards, which since 1985 have chronicled people who died due to their own poor decision-making. This AI-focused version targets a different kind of extinction: the death of common sense in corporate technology adoption. Rather than celebrating human mortality, these awards highlight the corporate casualties that result when organizations rush to deploy AI systems without adequate planning, testing, or...

Sep 9, 2025

Reddit fixes AI bug that wrongly altered LGBT subreddit descriptions for weeks

Reddit has resolved a bug that incorrectly altered subreddit descriptions on its Android app, including changing a lesbian community's description to say it was for "straight" women. The issue, which persisted for weeks and sparked user concerns about unauthorized AI content modification, was caused by a malfunctioning translation service that mistakenly performed "English-to-English translations." What happened: Multiple subreddit descriptions were inaccurately changed when viewed through Reddit's Android app, with some alterations significantly misrepresenting community purposes. The r/actuallesbians subreddit's description was changed from "a place for cis and trans lesbians" to "a place for straight and transgender lesbians." The r/autisticparents community,...

Sep 3, 2025

AI meets AARP as Social Security’s rushed phone bot frustrates 74M beneficiaries

The Social Security Administration's newly deployed AI phone bot is frustrating callers with glitchy performance and canned responses, leaving vulnerable Americans unable to reach human agents for complex questions. Former agency officials say the Trump administration rushed out technology that was tested but deemed unready during the Biden administration, prioritizing speed over functionality for a system serving 74 million beneficiaries. What you should know: The AI bot handles nearly 41% of Social Security calls but frequently provides irrelevant responses to specific inquiries. John McGing, calling about preventing overpayments for his son, found the bot would only provide generic answers regardless...

Aug 29, 2025

Height of failure: NYPD facial recognition wrongfully arrests man 8 inches taller than suspect

The New York Police Department wrongfully arrested Trevis Williams after facial recognition software identified him as a suspect in a public lewdness case, despite him being eight inches taller and 70 pounds heavier than the actual perpetrator. The case highlights the dangerous combination of flawed AI technology and inadequate police protocols, particularly how algorithmic bias can lead to wrongful arrests of Black individuals. What happened: NYPD's facial recognition system generated six potential matches from grainy CCTV footage of a February incident, all of whom were Black men with facial hair and dreadlocks. Investigators acknowledged the AI results alone were "not...

Aug 28, 2025

Animation studio collapses after founder’s misguided overreliance on AI

A small animation agency specializing in educational and NGO content collapsed into administration in July after its founder became over-reliant on generative AI tools as a solution to mounting business pressures. The agency's demise offers a stark warning about the risks of implementing AI without proper oversight, particularly for small creative firms where quality and accuracy are paramount to client relationships. What happened: The 24-person animation studio, which worked with prestigious clients on complex educational content, fell victim to its founder's misguided belief that AI could solve fundamental business challenges. The founder increasingly pushed AI-generated voiceovers, scripts, and even visual...

Aug 22, 2025

xAI’s “goth anime girl” chatbot pivot sparks backlash from Musk’s own fans

Elon Musk's AI company xAI has pivoted to creating sexualized anime-style chatbots, including a character named "Ani," prompting widespread mockery from his own supporters on X. The shift away from Musk's previous promises about Mars colonization and clean energy toward what critics call "AI anime gooning" has alienated even his most loyal followers, who are openly ridiculing the billionaire's apparent obsession with his own company's lewd AI companions. What you should know: xAI, Musk's artificial intelligence startup, recently unveiled AI "companions" that represent a major departure from typical AI assistant models, focusing instead on hypersexualized anime characters. The flagship character...

Aug 22, 2025

GOP candidate posts photorealistic AI selfie with Democratic leaders without disclosure

New Hampshire state Senator Daniel E. Innis posted an AI-generated fake selfie on social media showing himself with Democratic representatives Nancy Pelosi, Alexandria Ocasio-Cortez, and Chris Pappas during his 2026 GOP Senate campaign. The synthetic image, which lacked AI disclosure and was designed to look like a realistic photograph rather than an obvious illustration, highlights how artificial intelligence is already being deployed in subtle ways to shape political perceptions ahead of the next election cycle. What happened: Innis acknowledged the image was artificially created when questioned, saying his communications team produced it as part of an AI social media trend....

Aug 21, 2025

Wired, Business Insider publish AI-generated articles under fake bylines

Renowned tech publications Wired and Business Insider were caught publishing AI-generated articles under the fake byline "Margaux Blanchard," exposing how sophisticated AI content is infiltrating mainstream journalism. The incident highlights a growing crisis where AI-generated "slop" is eroding trust in online media, with human editors at reputable outlets falling victim to increasingly convincing automated content. What happened: Multiple publications discovered they had been duped by AI-generated articles submitted under a fictitious journalist's name. Wired published "They Fell in Love Playing Minecraft. Then the Game Became Their Wedding Venue," which referenced a non-existent 34-year-old ordained officiant in Chicago. Business Insider ran...

Aug 18, 2025

GPT-5 disappoints users with “cold” responses as OpenAI restores older models

OpenAI's GPT-5 has disappointed power users and developers who found the model to be "cold," less capable than expected, and failing to deliver the dramatic improvements CEO Sam Altman had promised. The lukewarm reception has forced OpenAI to backtrack on design choices and restore access to previous model versions, raising questions about whether the company can justify its projected half-trillion-dollar valuation amid growing concerns about an AI bubble. What you should know: GPT-5's release has been marked by widespread user dissatisfaction and performance concerns that fall short of OpenAI's ambitious promises. Users complained about the model's "cold" and formal demeanor...
