News/Fails
The biggest AI failures of 2024 point to exactly where the technology most needs improvement
High-profile failures and glitches: Major AI platforms experienced significant technical issues and public relations challenges throughout the year, raising questions about their reliability and readiness for widespread deployment.
- ChatGPT suffered a notable malfunction where it began generating nonsensical responses, with users describing the AI as "going insane"
- Air Canada faced legal consequences after its customer service chatbot provided incorrect refund information to customers
- Google's AI Overview feature delivered dangerous misinformation, including the potentially harmful advice that rocks were safe to eat

AI-generated content controversies: The limitations of AI-generated content became evident through several high-profile incidents that exposed the technology's current...
Dec 24, 2024: AI-generated bug reports are overwhelming open source projects
Open source software maintainers are experiencing a surge in AI-generated bug reports that drain resources and volunteer time while providing little value.

Key developments: The Python Software Foundation and Curl project maintainers have raised alarms about an influx of low-quality, AI-generated bug reports that appear legitimate but waste valuable time to investigate and refute.
- Seth Larson, security developer-in-residence at the Python Software Foundation, published a blog post warning against using AI systems for bug hunting
- Daniel Stenberg, who maintains the widely used Curl data transfer tool, reports spending considerable time dealing with "AI slop" bug reports and confronting users who likely...
Dec 22, 2024: Autonomous race car crashes at Abu Dhabi Racing League event
An autonomous race car crashed during a warm-up lap at Japan's Suzuka Circuit, preventing a planned competition between artificial intelligence and former F1 driver Daniil Kvyat from taking place.

The incident details: The Abu Dhabi Autonomous Racing League (A2RL) organized the event to showcase the current capabilities of self-driving race cars.
- The autonomous vehicle, carrying 95 kg of computers and sensors, lost control during preliminary laps
- Cold tires and track conditions were identified as the primary factors in the crash
- The accident occurred before any actual racing could begin against the human competitor

Technical specifications and limitations: A2RL provides competing...
Dec 21, 2024: Reporters Without Borders calls on Apple to remove its AI news summaries
The growing prevalence of AI-generated misinformation is creating serious concerns among journalism organizations and media watchdogs, as demonstrated by recent false news summaries produced by Apple's AI notification system.

Core incident details: Apple's recently launched AI feature called Apple Intelligence has come under fire for generating multiple false news summaries that spread misinformation.
- The system incorrectly claimed that UnitedHealth shooting suspect Luigi Mangione had died by suicide, when he is actually alive and awaiting trial
- Another erroneous summary falsely stated that Israeli Prime Minister Benjamin Netanyahu had been arrested based on New York Times coverage
- The BBC has formally complained...
Dec 20, 2024: Google accused of lowering the standards for Gemini output reviews
The role of human oversight in AI development has come under scrutiny as Google adjusts its approach to evaluating Gemini's performance.

Recent policy shift: Google has modified its guidelines for contractors who review Gemini AI outputs, marking a significant change in how artificial intelligence responses are evaluated.
- GlobalLogic, a contractor working with Google, now instructs reviewers to rate AI responses even when they lack domain expertise
- Previously, reviewers were directed to skip tasks requiring specialized knowledge in areas like coding or mathematics
- The new policy requires reviewers to "rate the parts of the prompt you understand" while noting their knowledge...
Dec 19, 2024: Apple faces backlash over AI-generated false headline
Critical incident: Apple's new AI summarization tool has generated a false headline about a murder suspect, prompting a formal complaint from the BBC and sparking broader concerns about AI reliability in news reporting.
- The AI-powered feature incorrectly suggested that murder suspect Luigi Mangione had shot himself, which was untrue
- The error appeared in a grouped notification that otherwise accurately summarized other news stories about Syria and South Korea
- This incident was not isolated, as the system also incorrectly suggested Israeli Prime Minister Netanyahu had been arrested when summarizing a New York Times article

Technical context: Apple Intelligence is a new...
Dec 18, 2024: Luigi Mangione chatbots on CharacterAI call for more CEO slayings
The proliferation of AI chatbots imitating Luigi Mangione, the alleged murderer of UnitedHealthcare CEO Brian Thompson, highlights growing concerns about content moderation on AI platforms and the romanticization of violent acts against healthcare executives.

Current developments: Character.AI, a popular chatbot platform, has become host to numerous AI personalities based on Mangione, with some encouraging further violence against healthcare executives.
- Over 10,000 conversations were recorded with the three most popular Mangione-based chatbots before their deactivation on December 12
- Some chatbots remain active despite Character.AI's stated policy against promoting violence or dangerous conduct
- Similar Mangione-inspired chatbots have appeared on other platforms, including...
Dec 18, 2024: Nvidia’s new app may be slowing down your PC games
The recent replacement of Nvidia's GeForce Experience with the new Nvidia App has led to unexpected performance issues affecting gaming frame rates, even when the app's new features aren't actively being used.

Key performance impact: Testing reveals that the new Nvidia App's background processes are causing significant frame rate reductions across multiple high-end games.
- Frame rate testing shows drops of 3-6% across various games and settings when running the Nvidia App
- Assassin's Creed Mirage experienced the most severe impact, with a 12% frame rate reduction at 1080p Ultra settings
- Other affected games include Baldur's Gate 3, Black Myth: Wukong, Flight...
Dec 17, 2024: The 8 worst technology failures of 2024 according to MIT
As 2024 draws to a close, several high-profile technological setbacks and controversies — from AI ethics concerns to space exploration challenges — offered sobering lessons about the complex interplay between innovation, responsibility, and sustainable business practices. Here is MIT's 2024 list of the biggest tech failures.

Google Gemini's AI Image Generation Controversy:
- Google's Gemini AI tool generated historically inaccurate diverse images
- The incident sparked widespread criticism over AI bias and historical accuracy
- Google temporarily disabled the feature following public backlash

Boeing Starliner Spacecraft Malfunction:
- Technical issues with Boeing's spacecraft left NASA astronauts stranded on the International Space Station
- The failure raised...
Dec 16, 2024: Skechers stays silent on AI-generated ad controversy
The use of artificial intelligence in advertising has sparked controversy as major brands experiment with AI-generated content, potentially risking consumer backlash and brand reputation.

Recent controversy: Skechers faces criticism over a full-page advertisement in Vogue's December issue that appears to use AI-generated artwork.
- The ad displays common AI generation artifacts, including distorted faces, illegible text, and inconsistent clothing details
- A viral TikTok video by content creator polishlaurapalmer brought attention to the apparent use of AI in the advertisement
- Skechers has remained silent on the matter, declining to respond to media inquiries about the use of AI in its marketing

Technical...
Dec 16, 2024: UnitedHealth AI glitch exposes claims-judging tool to public
The healthcare industry continues to grapple with AI implementation challenges as UnitedHealth Group faces scrutiny over an accidentally exposed claims-processing chatbot.

The security breach: A chatbot used by UnitedHealth's Optum Rx pharmacy benefit manager to process insurance claims and disputes was inadvertently made public and accessible to anyone with its IP address.
- The exposed system, called "SOP Chatbot," was designed to handle standard operating procedure queries for employees
- Employee interaction logs revealed questions about policy renewal dates and claim determinations
- Cybersecurity researcher Mossab Hussein, co-founder of spiderSilk, discovered and reported the privacy breach

Company response: UnitedHealth quickly locked down access...
Dec 16, 2024: Apple AI mistakenly reports suicide of Luigi Mangione
Apple has faced significant criticism after its newly launched AI feature made false claims about high-profile news events, highlighting ongoing concerns about AI's reliability in news dissemination.

Initial incident and context: Apple Intelligence, the company's generative AI feature, recently distributed false notifications to UK users about two major news stories, raising serious concerns about AI's role in news distribution.
- The AI incorrectly claimed that murder suspect Luigi Mangione had shot himself when summarizing BBC coverage
- In a separate incident, the system falsely reported that Israeli Prime Minister Benjamin Netanyahu had been arrested when discussing an International Criminal Court warrant

These...
Dec 15, 2024: YouTube AI tool’s nonsensical responses are frustrating content creators
YouTube's introduction of AI-powered reply suggestions has revealed significant limitations in the technology's ability to generate coherent responses for content creators.

Initial rollout and functionality: YouTube's "editable AI-enhanced reply suggestions" feature, announced in September, has now become visible to users through demonstrations by content creators.
- YouTuber Clint Basinger's experience showcases the tool's current implementation and limitations
- The AI system appears to recognize basic channel themes like gaming and gadget reviews
- The feature is designed to help content creators manage comment responses more efficiently

Current limitations: The AI tool's performance falls notably short of expectations, generating responses that lack contextual relevance...
Dec 15, 2024: Sora AI video goes viral for being creepy — here’s why these anomalies happen
OpenAI's Sora AI video generator produced a surreal and technically flawed video of a gymnast performing impossible movements, including sprouting extra limbs and temporarily losing her head during what was meant to be an Olympic-style floor routine.

Technical breakdown of the issue: The video synthesis errors stem from Sora's fundamental approach of generating content through statistical associations rather than a true understanding of physics or human anatomy.
- Sora creates videos by analyzing training data that pairs video footage with text descriptions
- The system makes continuous next-frame predictions based on the previous frame
- While Sora attempts to maintain coherency by looking ahead...
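The frame-by-frame generation loop described above can be illustrated with a toy sketch. This is not Sora's actual architecture (OpenAI has not fully disclosed it); `predict_next_frame` stands in for a learned model, and the point is only that each frame is a statistically plausible continuation of the previous one, with nothing enforcing physics or anatomy.

```python
import random

def predict_next_frame(prev_frame, prompt, rng):
    # Stand-in for a learned model: each pixel drifts to a plausible
    # nearby value given the previous frame. Prompt conditioning is
    # omitted in this sketch. Because only local statistical
    # plausibility is enforced, nothing rules out an extra limb
    # appearing a few frames later.
    return [min(1.0, max(0.0, p + rng.gauss(0, 0.05))) for p in prev_frame]

def generate_video(first_frame, prompt, num_frames=16, seed=0):
    """Autoregressive rollout: each frame conditions on the last."""
    rng = random.Random(seed)
    frames = [first_frame]
    for _ in range(num_frames - 1):
        frames.append(predict_next_frame(frames[-1], prompt, rng))
    return frames

clip = generate_video([0.5] * 64, "gymnast floor routine")
```

Small per-frame errors compound over the rollout, which is one reason coherence degrades the longer a generated clip runs.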
Dec 14, 2024: Lawsuit claims Photobucket sold user biometric data illegally
Photobucket faces a class action lawsuit over allegations of selling users' biometric data to AI companies without proper consent, potentially affecting up to 100 million users and billions of photos stored on the platform since 2003.

Key allegations and scope: The lawsuit targets Photobucket's recent privacy policy update that revealed plans to sell users' photos, including biometric data like face and iris scans, to AI training companies.
- Two distinct classes are represented: users who uploaded photos between 2003 and May 2024, and non-users whose images appear in uploaded photos
- The company claimed access to approximately 6.5 billion public images eligible...
Dec 11, 2024: Rapper 50 Cent shares fake AI video of Jay-Z and Diddy being arrested
There's new controversy in the hip-hop community as rapper 50 Cent uses artificial intelligence to mock fellow artists facing serious legal challenges.

Latest developments: Rapper 50 Cent (Curtis Jackson) has shared an AI-generated video depicting Jay-Z and Diddy being arrested, amid serious legal allegations against both music moguls.
- The artificial intelligence video shows both men in tuxedos being arrested at a party and transported to jail while holding wine glasses
- Jackson captioned the post with a joke about potential retaliation: "I want to post this but I'm afraid I'm gonna get shot"
- Social media reactions were mixed, with some followers...
Dec 11, 2024: Scammers appropriate website of defunct Oregon paper to publish AI slop
The proliferation of AI-generated fake news websites is threatening local journalism, as demonstrated by scammers who hijacked the defunct Ashland Daily Tidings' digital presence to create a fraudulent news operation.

The takeover scheme: A group of scammers appropriated the website of the Ashland Daily Tidings, a historic Oregon newspaper that closed in 2023 after operating since 1876, to create a deceptive news operation.
- The fraudulent website claimed to employ eight reporters, but investigation revealed these were either fictional personas or stolen identities
- Content was primarily AI-generated, consisting of plagiarized local news stories that were automatically rewritten
- The operation aimed to...
Dec 11, 2024: Itch.io’s temporary shutdown shows importance of humans overseeing automation
The growing pains of AI automation in brand protection led to the temporary shutdown of a major indie gaming platform, highlighting the risks of removing human oversight from automated systems.

The incident overview: Popular indie gaming platform Itch.io experienced a complete domain shutdown due to an automated AI brand protection system's error.
- The shutdown occurred on December 9, 2024, when BrandShield, a brand protection service working for Funko (the company behind Funko Pop figures), flagged a fan page on Itch.io as fraudulent
- The platform's creator, Leafo, confirmed that, despite the platform's compliance with the initial takedown request, the domain registrar's automated system proceeded to deactivate the entire domain
- The site remained offline...
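The failure mode here is an automated flag escalating straight to domain deactivation with no human check. A minimal sketch of the missing safeguard, assuming a simple escalation policy (all names and fields below are hypothetical, not BrandShield's or any registrar's actual system):

```python
from dataclasses import dataclass

@dataclass
class AbuseReport:
    domain: str
    flagged_path: str       # e.g. a single fan page, not the whole site
    resolved_by_host: bool  # has the platform already removed the content?

def decide_action(report: AbuseReport, human_approved: bool = False) -> str:
    """Escalation policy with a human-in-the-loop gate.

    A domain-wide takedown is the most destructive action available,
    so this sketch requires explicit human approval before it -- the
    step the incident suggests was missing.
    """
    if report.resolved_by_host:
        return "close_report"            # complaint already addressed
    if not human_approved:
        return "queue_for_human_review"  # never auto-escalate
    return "deactivate_domain"

# An already-resolved complaint should never take the whole site down.
report = AbuseReport("itch.io", "/fan-page", resolved_by_host=True)
print(decide_action(report))  # close_report
```

The design point is that the destructive branch is unreachable without a human decision, whatever the upstream AI classifier reports.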
Dec 9, 2024: Kids’ robot maker shuts down, leaving AI toys lifeless
The closure of AI company Embodied highlights the risks and emotional impact of cloud-dependent AI products, particularly those designed for vulnerable users like children with autism.

The shutdown situation: Embodied, creator of the AI-powered social robot Moxie, announced its closure following financial difficulties and withdrawn funding.
- The Moxie robot, priced at $799, was specifically designed to interact with autistic children
- The device relied on cloud-based large language models (LLMs) for its core functionality, including conversation and question-answering capabilities
- Within days of the company's closure, all Moxie units will cease functioning, with no refunds offered to customers

Technical implications: The dependence...
Dec 9, 2024: Gaming platform Itch.io recovers from AI anti-phishing mishap
The indie game platform itch.io experienced a brief but significant service disruption due to an automated brand protection system's misidentification of potential trademark infringement.

The incident overview: A domain takedown affected itch.io for several hours on Monday morning, stemming from an AI-powered brand protection system's report about alleged phishing activities.
- The shutdown was triggered by BrandShield, a brand protection service working on behalf of Funko, the company known for Funko Pop collectible figures
- The domain registrar, iwantmyname, disabled itch.io's domain despite the platform having already addressed the initial complaint
- Users could still access the site directly through its IP address...
Dec 7, 2024: Austin city council officials respond to AI-generated racial comment
The rise of virtual city council meetings during the pandemic has created new challenges for local governments grappling with artificial intelligence-generated public comments, as demonstrated by a recent incident in Austin, Texas.

The incident: A racially targeted AI-generated public comment was submitted during an Austin City Council meeting on November 7, prompting officials to address vulnerabilities in their public comment system.
- The comment self-identified as AI-generated at its conclusion
- The City Council allocates 10 three-minute intervals for public comments during each meeting
- Austin Mayor Kirk Watson publicly addressed the incident on the City Council message board, emphasizing the city's commitment...
Dec 4, 2024: AI firm Evolv accused of exaggerating AI weapon detection claims
The rapid adoption of AI-powered security technology in schools and public spaces has come under scrutiny as regulators crack down on companies making unsubstantiated claims about their capabilities.

Critical findings: The Federal Trade Commission has determined that Evolv Technologies made misleading claims about its AI-powered security scanners' ability to detect weapons while ignoring harmless items.
- The company's scanners, deployed since 2021 across venues like schools, sporting events, and public transit, demonstrated a concerning 95% false positive rate during NYC subway trials
- School districts invested millions in the technology based on claims that it could definitively detect both concealed and openly...
Dec 4, 2024: Stanford professor admits ChatGPT added false information to his court filing
The use of AI tools in legal and academic contexts faces new scrutiny after a prominent misinformation researcher acknowledged AI-generated errors in a court filing.

The core incident: Stanford Social Media Lab founder Jeff Hancock admitted to using ChatGPT's GPT-4o model while preparing citations for a legal declaration, resulting in the inclusion of fabricated references.
- The document was filed in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law
- The law is currently being challenged in federal court by conservative YouTuber Christopher Khols and Minnesota state Rep. Mary Franson
- Attorneys for the challengers requested the document...
Dec 2, 2024: ChatGPT is baffling users by refusing to say certain people’s names
ChatGPT has mysteriously stopped responding to prompts containing certain specific names, raising questions about content filtering and transparency in AI systems.

Core issue identification: ChatGPT, OpenAI's popular language model, consistently returns error messages when asked to process or generate responses containing specific full names, including "David Mayer" and several others.
- Users across social media platforms have documented the AI's inability to combine certain first and last names, even though it can say the names individually
- The restriction appears to affect multiple versions of ChatGPT, including GPT-4
- Other AI chatbots like Google Gemini and Grok have no difficulty processing these same...
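The reported behavior, where individual names pass but a specific full-name combination aborts the reply, is consistent with a hard-coded filter applied outside the model itself. The sketch below is a hypothetical reconstruction; OpenAI has not disclosed its actual mechanism, and the blocklist entry is just the name users reported.

```python
# Hypothetical output-side name filter; not OpenAI's actual code.
# "david mayer" is one full name users reported as blocked.
BLOCKED_FULL_NAMES = {"david mayer"}

def filter_response(text: str) -> str:
    # Normalize case and whitespace so trivial variants still match.
    lowered = " ".join(text.lower().split())
    for name in BLOCKED_FULL_NAMES:
        if name in lowered:
            # Mirrors the abrupt error users saw instead of a reply.
            raise RuntimeError("I'm unable to produce a response.")
    return text

filter_response("David went to see Mayer")  # individual names pass
```

A filter like this would explain why the names are fine in isolation, why the block spans multiple model versions (it sits outside the model), and why other chatbots are unaffected.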