News/Fails

Sep 5, 2024

AI Music Scam Nets $10M in Fraudulent Streaming Royalties

AI-generated music scam uncovered: A music producer has been arrested and charged with multiple felonies for allegedly defrauding streaming platforms of over $10 million in royalties using AI-generated songs. Key details of the case: Michael Smith, 52, from Cornelius, North Carolina, is accused of creating thousands of bot accounts on major streaming platforms like Spotify, Amazon Music, and Apple Music. The indictment alleges that Smith used these accounts to automatically stream AI-generated music he had uploaded, generating up to 661,440 streams per day. Prosecutors claim Smith developed this scheme to circumvent the platforms' fraud detection systems. Evolution of the alleged...

Sep 4, 2024

Huawei’s AI Chips Face Performance Hurdles in Nvidia Challenge

AI chip competition intensifies: Huawei's efforts to develop a domestic alternative to Nvidia's AI chips are facing significant challenges, with customers reporting performance issues and difficulties in transitioning from Nvidia products. Huawei's Ascend series has become increasingly popular for running inference in Chinese AI applications, particularly after Washington tightened export controls on high-performance silicon in October. However, industry insiders report that Huawei's chips still lag far behind Nvidia's for initial model training, citing stability issues, slower inter-chip connectivity, and problems with Huawei's software platform, CANN. Nvidia's software platform, CUDA, is widely regarded as the industry standard, known for its ease...

Aug 27, 2024

AI Medical Devices Lack Crucial Patient Data, Study Finds

AI medical devices face scrutiny: A comprehensive study reveals that nearly half of FDA-approved AI medical devices lack reported clinical validation data using real patient information, raising concerns about their effectiveness and safety in healthcare settings. Researchers from UNC School of Medicine, Duke University, and other institutions analyzed over 500 AI medical devices approved by the FDA since 2016. The study, published in Nature Medicine, found that approximately 43% of these devices lacked published clinical validation data. Some devices were validated using computer-generated "phantom images" rather than real patient data, failing to meet proper clinical validation requirements. Rapid growth in...

Aug 27, 2024

Gannett Shuts Down Reviewed Amid AI Content Controversy

Gannett, the newspaper giant, is closing down its product reviews site Reviewed, amidst controversy over the use of AI-generated content and labor disputes with unionized workers. The big picture: Gannett's decision to shutter Reviewed, effective November 1st, comes after months of scrutiny regarding the authenticity of its product reviews and ongoing conflicts with its unionized workforce. Reviewed offered recommendations for various products, from shoes to home appliances, employing journalists to test and review items. The site had been accused of publishing AI-generated content, which Gannett denied, attributing the questionable articles to a third-party marketing company called AdVon Commerce. Unionized workers...

Aug 25, 2024

AI Customer Service Bot Unexpectedly Rickrolls Users

AI assistant surprises with unexpected Rickroll: A startup's AI-powered customer service tool inadvertently linked clients to the infamous "Never Gonna Give You Up" video, sparking discussions about AI behavior and internet culture. The incident unfolds: Flo Crivello, CEO of AI assistant firm Lindy, discovered that one of his company's AI helpers had sent a customer a link to Rick Astley's 1987 hit song instead of a requested video tutorial. The AI, known as a "Lindy," was supposed to help customers learn how to use the platform. When asked for video tutorials, the AI provided a link to the classic Rickroll...

Aug 25, 2024

Copilot Falsely Accuses Journalist Who Is Now Suing Microsoft

AI-generated defamation incident: A German journalist, Martin Bernklau, became the victim of false and defamatory statements generated by Microsoft's Copilot AI, raising concerns about the responsibility of AI companies for the content their systems produce. Bernklau, who has decades of experience reporting on criminal trials, discovered that Copilot AI had falsely accused him of various crimes, including child abuse and exploiting widows as an undertaker. The AI system mistakenly attributed crimes Bernklau had reported on to the journalist himself, conflating the reporter with the subjects of his articles. In addition to the false accusations, Copilot also disclosed personal information of...

Aug 25, 2024

Coppola’s ‘Megalopolis’ Faces Controversy Over Fake AI-Generated Reviews

AI-generated fake reviews spark controversy: The release of a trailer for Francis Ford Coppola's upcoming film "Megalopolis" has ignited a controversy due to the inclusion of fabricated critic quotes generated by artificial intelligence. Marketing mishap unveiled: Lionsgate, the film's distributor, was forced to pull the trailer after it was discovered that the supposedly negative quotes about Coppola's past films, attributed to renowned critics, were entirely AI-generated and not authentic. The trailer featured fake negative reviews purportedly from famous critics, including Pauline Kael, which were later revealed to be AI-generated content. It is suspected that ChatGPT or a similar AI language...

Aug 23, 2024

Google’s AI Image Tool Sparks Controversy Over Inappropriate Content

AI image generation raises ethical concerns: Google's Pixel Studio, an AI image generation tool for the Pixel 9, has come under scrutiny for producing controversial and inappropriate content. Users have been able to generate questionable images using the tool. Digital Trends reported even more concerning results, featuring popular cartoon characters in highly inappropriate scenarios. Examples included cartoon characters wielding firearms, engaging in drunk driving, and wearing Nazi uniforms. Google's response and ongoing challenges: The tech giant has taken steps to address the issue, but concerns about AI-generated content persist. Google has reportedly implemented restrictions on some of the more problematic...

Aug 21, 2024

LAUSD Paid Millions for an AI Chatbot That It Can’t Use

LAUSD's AI chatbot investment falters: The Los Angeles Unified School District's multimillion-dollar venture into AI technology has hit a significant roadblock, with its recently acquired chatbot now out of commission. The initial investment: LAUSD made a substantial financial commitment to integrate AI technology into its educational framework, partnering with AllHere to develop and manage a chatbot system. The district allocated millions of dollars to this AI initiative, showcasing its commitment to leveraging advanced technology in education. AllHere, a technology company specializing in educational AI solutions, was chosen as the primary contractor for this ambitious project. The chatbot was designed to...

Aug 21, 2024

AI Healthcare Firm’s Disposed Device Exposes Massive Data Breach

Major data breach discovered through discarded device: A significant security lapse has been uncovered involving an AI healthcare company's failure to properly erase sensitive data from disposed equipment. The discovery: An individual obtained a small computer (NUC) from electronic waste that was previously used by an AI healthcare company, revealing a trove of unwiped sensitive information. The hard drive contained approximately 11,000 WAV audio files of customer voice commands, potentially exposing private health-related conversations. Videos from cameras installed in customers' homes were also found, raising serious privacy concerns. Log files detailing information about sensors placed in bathrooms and bedrooms were...

Aug 18, 2024

AI Fraud Detection Backfires, Freezing Customer’s £12,800 Transfer

AI-driven fraud detection causes banking headache: The intersection of artificial intelligence and financial security has created unexpected challenges for both banks and their customers, as demonstrated by a recent incident involving Starling Bank and a UK academic. The incident: John MacInnes, an Edinburgh academic, faced significant obstacles when attempting to transfer £12,800 to a long-time friend in Austria, leading to a series of escalating issues with Starling Bank. MacInnes' initial attempt to send €15,000 (roughly £12,800) to assist a friend with cashflow problems was blocked by Starling's fraud detection system. The bank's fraud team made what MacInnes described as "absurd demands" for...

Aug 9, 2024

NVIDIA’s AI Training Practices Continue to Spark Copyright Controversy

NVIDIA faces allegations of improperly using copyrighted video content to train its artificial intelligence models, raising questions about the ethics and legality of AI training practices in the tech industry. The core accusation: NVIDIA allegedly downloaded massive amounts of video content from platforms like YouTube and Netflix without permission to train commercial AI projects. The company is said to have downloaded the equivalent of 80 years' worth of videos daily for AI model training purposes. This content was reportedly used to develop products such as NVIDIA's Omniverse 3D world generator and "digital human" initiatives. The scale of the alleged downloads...

Aug 9, 2024

Meta AI Blunder Exposes Journalist’s Private Number to Strangers

Unexpected AI behavior: Meta's artificial intelligence chatbot has been erroneously distributing a journalist's phone number to strangers, leading to a series of perplexing and unwanted interactions. Rob Price, a Business Insider reporter, discovered his phone number was being shared when he began receiving invitations to random WhatsApp groups. Users were contacting Price under the mistaken belief that they were communicating with Meta AI. The AI chatbot had been instructing users to add it to WhatsApp groups using Price's personal phone number. Potential cause of the mix-up: The incident highlights the complexities and potential pitfalls of training large language models on...

Aug 8, 2024

AI Astrology App Exposes 6 Million Users’ Personal Data

Moonly, an AI-powered astrology app, suffered a significant data breach exposing sensitive information of 6 million users, raising serious privacy concerns and highlighting the vulnerabilities in data security practices of popular mobile applications. The scope of the breach: The data leak affected 6 million users of the Moonly astrology app, compromising a wide range of personal information and potentially exposing users to various security risks. The leaked data included users' GPS coordinates, birth dates, email addresses, and other personal details, potentially revealing home and work addresses. Over 90,000 email addresses were exposed in the breach, further compromising users' online identities...

Aug 8, 2024

AI-Generated Obituary Spam Sites Exploit Grief for Ad Revenue

Obituary spam sites, fueled by AI-generated content, have become a lucrative business for ad tech companies, raising ethical concerns and causing distress to families of the deceased. This practice exploits the deaths of ordinary individuals for ad revenue, highlighting the darker side of online advertising and content generation. The rise of obituary spam: AI-powered websites are churning out inaccurate and often disturbing obituaries, targeting not just celebrities but everyday people, in a bid to generate ad revenue. Watchdog organization Check My Ads has traced the ad exchanges profiting from these spam sites, revealing a complex network of digital advertising players....

Aug 7, 2024

Unhappy Customers Are Returning Humane’s AI Pin in Droves

AI Pin launch stumbles: Humane's ambitious AI Pin wearable device has faced significant challenges since its April launch, with more returns than purchases and widespread negative reviews. Between May and August, the number of AI Pins returned exceeded the number sold, indicating a high level of customer dissatisfaction. Major tech reviewers gave the AI Pin overwhelmingly negative feedback upon its release, contributing to the device's poor market reception. The total value of returned AI Pins has surpassed $1 million, representing a substantial financial setback for Humane. Sales and shipments fall short: The AI Pin's market performance has drastically underperformed Humane's...

Aug 7, 2024

Google Has Been Earning Ad Revenue on Non-Consensual Deepfakes

The big picture: Google has been caught accepting payment to promote AI applications that generate nonconsensual deepfake nudes, contradicting its recently announced policies to combat explicit fake content in search results. Uncovering the issue: 404 Media's investigative reporting revealed that Google's search engine was displaying paid advertisements for NSFW AI image generators and similar tools when users searched for terms like "undress apps" and "best deepfake nudes." The discovery highlights a significant discrepancy between Google's stated policies and its actual practices in managing AI-related content. This revelation comes shortly after Google announced expanded policies aimed at addressing non-consensual explicit fake...

Aug 5, 2024

Apple AI Email Filter Mistakenly Flags Phishing Scams as Priority

Emerging security concern: Apple's new AI-powered email prioritization feature, part of Apple Intelligence, is reportedly marking phishing scam emails as priority messages, raising concerns about user safety and the effectiveness of AI in email security. The issue was initially reported by Android Authority and corroborated by multiple Reddit users, highlighting a potentially widespread problem with the new feature. Apple Intelligence, currently in beta, appears to prioritize email content over traditional phishing indicators like sender addresses, potentially increasing the risk of users falling for scams. This misclassification adds an unwarranted layer of legitimacy to fraudulent emails, which could lead to more people becoming victims...

Aug 5, 2024

Leaked Docs Expose Nvidia’s Massive AI Data Grab From YouTube

The big picture: Recent leaks expose Nvidia's extensive efforts to collect vast amounts of online video content for AI training purposes, raising questions about the scale and ethics of data acquisition in the AI industry. Leaked Slack conversations and emails reveal Nvidia employees discussing plans to scrape videos from popular platforms like YouTube and Netflix for AI training. The scope of the project appears to extend beyond mere research purposes, suggesting a more comprehensive data collection strategy. Project managers outlined plans to utilize Amazon Web Services (AWS) infrastructure to download an astonishing 80 years' worth of video content per day....

Aug 3, 2024

Eminem’s Latest AI Music Video Is Getting Very Bad Reviews

The iconic rapper Eminem has released a promotional video for his upcoming album "The Death of Slim Shady (Coup de Grâce)" featuring an AI-generated version of his younger self, sparking mixed reactions due to the questionable quality of the de-aging technology used. AI-powered nostalgia meets technical limitations: The video showcases Eminem interviewing a digitally de-aged version of himself from his "Slim Shady" era, but the execution falls short of expectations: The AI-generated imagery, created by Metaphysic AI, has been widely criticized for its poor quality, with some comparing it unfavorably to low-quality CGI or puppetry. The uncanny valley effect is...

Aug 3, 2024

Design Flaw Causes Big Setback for New Nvidia Chip Production

Nvidia's next-generation AI chip faces unexpected production delay, potentially impacting the AI industry and market dynamics. Design flaw disrupts Blackwell B200 chip production: Nvidia, the leading AI chip manufacturer, has encountered a setback in the production of its highly anticipated Blackwell B200 AI chips: The company has reportedly informed Microsoft and at least one other cloud provider about a three-month delay in chip production. The delay is attributed to a design flaw discovered unusually late in the production process, according to sources cited by The Information. Nvidia is now conducting fresh test runs with Taiwan Semiconductor Manufacturing Company (TSMC) to...

Aug 3, 2024

Google Pulls Controversial Gemini AI Ad from Olympics Coverage

Advertising Misstep: Google has decided to withdraw its "Dear Sydney" Gemini AI advertisement from the 2024 Paris Olympics broadcast following intense public backlash: The ad featured a father using the Gemini AI assistant to help his daughter write a fan letter to track and field star Sydney McLaughlin-Levrone. Critics argued that the commercial celebrated the worst aspects of AI-powered tools, particularly in the context of personal communication and creativity. Google's attempt to showcase Gemini's capabilities to a skeptical public appears to have backfired, with many viewers finding the ad's premise problematic. (Video: https://www.youtube.com/watch?v=NgtHJKn0Mck) Corporate Response: Google, a major sponsor of the...

Jul 31, 2024

Google’s AI Olympics Ad Sparks Backlash

Key Takeaways: Google's new Olympics commercial showcasing an AI chatbot helping a young girl write a letter to her track star idol has drawn criticism for its portrayal of AI's role in facilitating human connection and creativity: The ad depicts a father using Google's Gemini chatbot to help his daughter write a letter to Olympic gold medalist Sydney McLaughlin-Levrone, with the AI composing the message based on prompts. Critics argue that the ad undermines the authenticity and charm of a child writing a personal letter to her idol by inserting a large language model into the process. (Video: https://www.youtube.com/watch?v=NgtHJKn0Mck) Reactions and...

Jul 27, 2024

X’s Grok Has Been Secretly Training on Your Data

X is using user data to train its Grok AI chatbot, sparking privacy concerns because the feature is enabled by default, requiring users to actively opt out. Key details about X's data usage for Grok AI: The social media platform is using user posts, interactions, inputs, and results with the Grok chatbot to train and fine-tune the AI, which has caused outrage among some users who discovered the opt-out nature of the feature: X's privacy policy has allowed for this data usage since at least September 2023, but it remains unclear exactly when the data collection for Grok began. While...
