News/Fails

Oct 26, 2024

AI weapons scanners fail to detect any guns in NYC subway test

AI-powered subway scanners fall short in New York City pilot: A recent trial of artificial intelligence-driven weapons detection technology in New York City's subway system yielded disappointing results, raising questions about the efficacy and feasibility of such security measures in mass transit. Key findings of the pilot program: The 30-day test of AI-powered scanners across 20 subway stations revealed significant limitations in the technology's ability to accurately detect firearms. The scanners performed 2,749 scans but failed to detect any firearms during the trial period. A concerning 118 false positives were recorded, resulting in a 4.29% false alarm rate. The system...
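
The 4.29% figure is simply the false positives divided by the total number of scans; a minimal arithmetic check, using only the numbers quoted above:

```python
# Quick sanity check of the false-alarm rate quoted in the pilot results.
total_scans = 2749        # scans performed during the 30-day test
false_positives = 118     # alerts that turned out not to be firearms

false_alarm_rate = false_positives / total_scans
print(f"False alarm rate: {false_alarm_rate:.2%}")  # -> roughly 4.29%
```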

Oct 23, 2024

Humane’s wearable AI device is on fire sale after weak sales

AI Pin price drop amid sales struggles: Humane has significantly reduced the price of its AI Pin device in response to weak sales and poor reviews since its launch earlier this year. The base model "Eclipse" AI Pin now starts at $499, down from its initial $699 price point. This lower-priced version comes in matte black anodized aluminum but does not include an extra battery or Charge Case. The Charge Case exclusion aligns with Humane's recent warning to AI Pin owners about potential fire safety risks associated with certain battery cells supplied by one of its vendors. Product specifications and pricing tiers:...

Oct 21, 2024

TikTok parent fires intern over AI sabotage

ByteDance intern terminated for AI project interference: ByteDance, the parent company of TikTok, has dismissed an intern for sabotaging one of its artificial intelligence (AI) training projects, sparking discussions about the incident's implications and the company's AI initiatives. The unnamed intern was accused of "maliciously interfering" with the training of an AI model, leading to their termination in August. ByteDance rejected claims about the extent of the damage caused, stating that reports contained "some exaggerations and inaccuracies." The company clarified that the intern had no experience with the AI Lab and was part of the advertising technology team. Impact on...

Oct 17, 2024

Salesforce CEO blasts Microsoft Copilot as outdated AI assistant

Salesforce CEO challenges Microsoft's AI assistant: Marc Benioff, co-founder and CEO of Salesforce, publicly criticized Microsoft's Copilot AI assistant, comparing it unfavorably to the infamous Clippy assistant from the 1990s. Benioff took to his personal X account to express his disappointment with Copilot, stating that it "doesn't work" and fails to deliver accurate results. He ultimately labeled Copilot as "Clippy 2.0," referencing Microsoft's widely derided Office assistant from 1996. Copilot's evolution and features: Microsoft's AI assistant has undergone significant development since its initial release, expanding its capabilities and reach across various platforms. Copilot was initially designed for Microsoft's Office 365...

Oct 16, 2024

AI-powered PCs struggle to deliver on performance promises

AI PCs fall short of performance expectations: Recent benchmarks reveal that AI-powered PCs are struggling to deliver on their promised computational capabilities, particularly in the realm of neural processing units (NPUs). Qualcomm's NPU technology under scrutiny: Pete Warden, a long-time advocate of Qualcomm's NPU technology, has expressed disappointment with the performance of these chips in Windows tablets, specifically the Microsoft Surface Pro running on Arm. Warden's history with Qualcomm includes collaborating on experimental support for their HVX DSP in TensorFlow back in 2017. The promise of up to 45 trillion operations per second on Windows tablets equipped with Qualcomm's NPUs...
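
For readers unfamiliar with how such claims are sanity-checked, a peak figure like 45 TOPS is typically compared against the throughput a real workload actually achieves. The sketch below is a generic, hypothetical CPU-side measurement (not Warden's methodology and not an NPU benchmark): it times a matrix multiply and converts the result into operations per second for comparison against a claimed peak.

```python
# Minimal sketch: measure achieved ops/sec for a matrix multiply and compare
# against a vendor-claimed TOPS figure. All numbers here are illustrative only.
import time
import numpy as np

N = 1024
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# A dense N x N matmul performs roughly 2 * N^3 floating-point operations.
achieved_ops = 2 * N**3 / elapsed
claimed_ops = 45e12  # "up to 45 trillion operations per second"

print(f"Achieved: {achieved_ops / 1e12:.3f} TOPS")
print(f"Fraction of claimed peak: {achieved_ops / claimed_ops:.1%}")
```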

Oct 15, 2024

Missed the Northern Lights? Meta suggests fabricating images instead

AI-generated image controversy: Meta has sparked outrage on its Threads platform by suggesting users create fake Northern Lights photos using Meta AI, highlighting growing concerns about misinformation and the ethical use of AI image generators. Meta's Threads post, titled "POV: you missed the northern lights IRL, so you made your own with Meta AI," showcased AI-generated images of the Northern Lights over famous landmarks like the Golden Gate Bridge and Las Vegas. The post received significant backlash from users, with some criticizing Meta's apparent disregard for authentic photography and others expressing concern about the potential for spreading misinformation. Ethical implications...

Oct 14, 2024

AI-generated testimony exposed in courtroom drama

AI in the Courtroom: A Cautionary Tale: A New York judge's recent decision highlights the potential pitfalls of using AI-generated content in legal proceedings, raising important questions about the role of technology in expert testimony. The case at hand: Judge Jonathan Schopf encountered a troubling situation during a real estate dispute involving a $485,000 rental property in the Bahamas that was part of a trust. The expert witness, Charles Ranson, admitted to using Microsoft's Copilot chatbot to estimate damages, despite lacking relevant real estate expertise. Ranson was unable to recall the specific prompts he used or the sources of information...

Oct 9, 2024

The Reflection 70B saga continues with release of training data report

The Reflection 70B controversy unfolds: The AI community has been embroiled in a debate surrounding the Reflection 70B language model, with claims of exceptional performance being met with skepticism and accusations of fraud. HyperWrite AI's CEO Matt Shumer announced Reflection 70B on September 5, 2024, touting it as "the world's top open-source model" based on benchmark results. Third-party evaluators struggled to replicate the claimed results, leading to widespread doubt and accusations within the AI community. A post-mortem reveals critical oversights: Sahil Chaudhary, founder of Glaive AI, whose data was used to train Reflection 70B, released a comprehensive report addressing the...

Oct 7, 2024

New report details massive surveillance network being built by streaming services

Streaming TV's surveillance surge: The Center for Digital Democracy (CDD) has released a comprehensive report detailing the extensive tracking and targeting practices employed by the connected TV (CTV) industry, raising significant privacy and consumer protection concerns. The 48-page report, titled "How TV Watches Us: Commercial Surveillance in the Streaming Era," argues that streaming services and hardware companies have developed an unprecedented "surveillance system" that undermines viewer privacy. The CDD claims that the CTV industry's practices pose severe risks to consumer privacy and protection, going beyond traditional data collection methods. The report highlights how streaming platforms and device manufacturers are leveraging...

Oct 7, 2024

Grammarly faces widespread outage: latest updates

Grammarly experiences service disruption: The popular AI writing tool Grammarly has encountered a significant outage, impacting users who rely on its features for various writing tasks. Current status and company response: Grammarly has acknowledged the service disruption on its status page, stating that a fix has been implemented and the results are being monitored. The company's support team is actively working to restore full functionality and has assured users that they are investigating the issue. While the outage affected many users, the service should now be back online for most people. Impact on users: The outage has temporarily affected users...

Oct 4, 2024

Google AI’s dangerous advice sparks SUV safety concerns

AI-powered search raises safety concerns: Google's recent rollout of ads in its AI-powered search overviews has highlighted potential risks associated with AI-generated advice, particularly in the realm of vehicle safety features. The AI search system suggested turning off the forward collision-avoidance feature on the Kia Telluride by disabling electronic stability control, a recommendation that could be dangerous for most drivers. This incorrect advice appears to stem from a misinterpretation of a caution notice in the Kia EV6 manual, demonstrating the AI's inability to accurately contextualize and understand the information it processes. Implications for AI reliability: The incident underscores the ongoing...

Oct 3, 2024

The most famous AI failures that shook the tech world

AI's growing pains: Recent high-profile missteps highlight challenges and risks. As artificial intelligence becomes increasingly integrated into various sectors, a series of notable failures underscores the technology's current limitations and potential pitfalls. McDonald's abandoned its AI-powered drive-thru ordering system in June 2024 following customer complaints about order misunderstandings, illustrating the challenges of implementing AI in customer-facing roles. Elon Musk's Grok AI chatbot made headlines in April 2024 for falsely accusing NBA star Klay Thompson of vandalism, demonstrating the potential for AI to spread misinformation. New York City's MyCity chatbot provided incorrect and illegal advice to business owners in March 2024,...

Sep 26, 2024

AI legal startup hit with $193,000 FTC fine in tech crackdown

AI company faces legal consequences: DoNotPay, a company claiming to offer the "world's first robot lawyer," has agreed to a $193,000 settlement with the Federal Trade Commission (FTC) for misleading consumers about its AI-powered legal services. The settlement is part of Operation AI Comply, a new FTC initiative aimed at cracking down on companies using AI to deceive or defraud customers. DoNotPay claimed its AI could replace human lawyers and generate valid legal documents, but the FTC found these claims were made without proper testing or evidence. The company allegedly told consumers they could use its AI service to sue...

Sep 25, 2024

FTC cracks down on DoNotPay, other companies for deceptive AI practices

FTC launches crackdown on AI-powered companies: The Federal Trade Commission has initiated "Operation AI Comply," targeting five companies accused of using artificial intelligence deceptively or harmfully. The FTC's action underscores its commitment to ensuring AI-marketed products and services provide real value and don't exploit consumers with false promises. The agency is taking legal and regulatory actions against companies found to be engaging in deceptive practices related to AI. Companies in the crosshairs: Five companies have been targeted by the FTC for alleged misuse of AI technology in their products and services. DoNotPay, which claimed to offer AI-powered legal services, has...

Sep 24, 2024

ServiceNow Outage Sparks Reliability Concerns After SSL Certificate Expires

ServiceNow faces widespread disruption: A critical SSL certificate expiration affected over 600 organizations, disrupting key services and causing frustration among customers of the enterprise cloud vendor. The root cause: The expired MID Server Root G2 SSL certificate led to connectivity failures across multiple ServiceNow services, impacting critical operations for many businesses. The issue affected Orchestration, Discovery, and AI-powered functions like Virtual Agent. Instance upgrades, update set retrievals, and instance-to-instance communications were also compromised. ServiceNow confirmed that 616 customers were affected by the outage. Customer impact and reaction: The disruption sparked significant frustration among ServiceNow's user base, with many voicing their...
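
Expired certificates of this kind are usually caught by routine expiry monitoring. As a minimal sketch of what such a check can look like (a generic TLS probe against a placeholder host, not ServiceNow's MID Server validation path):

```python
# Minimal sketch: check how many days remain before a server's TLS certificate expires.
import socket
import ssl
import time

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Connect over TLS and return days until the server certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires_ts = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires_ts - time.time()) / 86400

if __name__ == "__main__":
    # "example.com" is a placeholder host, not a ServiceNow endpoint.
    print(f"Days until expiry: {days_until_expiry('example.com'):.1f}")
```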

Sep 23, 2024

LinkedIn AI Backlash Highlights Need for EU-Like Privacy Protections

LinkedIn's AI training sparks privacy concerns: LinkedIn's decision to use user data for training its AI tools has ignited a debate about data privacy and user consent in the tech industry. The professional networking platform has begun using member data to improve its AI capabilities, a move that has drawn criticism from users concerned about privacy and transparency. This decision follows similar actions by other tech giants like Meta (Facebook, Instagram) and X (Twitter), who have also leveraged user data for AI development. Notably, LinkedIn has excluded users in the European Union, European Economic Area, and Switzerland from this data...

Sep 20, 2024

Bugs in Meta’s AI Ad Tools are Costing Customers a Lot of Money

Meta's ad platform faces widespread issues: Media buyers report significant challenges with Meta's advertising tools, including targeting errors and budget overruns, potentially costing clients substantial amounts of money. Ongoing platform bugs and disruptions: Meta's ad platform has experienced numerous issues since Memorial Day, with some buyers encountering problems as frequently as every other week. A Meta-maintained website tracking Facebook Ads Manager issues has noted at least a dozen disruptions since Memorial Day, including four classified as "major disruptions." Media buyers report various problems, such as improper ad loading and daily budgets being depleted within hours. Some advertisers estimate losses in...

Sep 14, 2024

Shein is the Biggest Polluter in Fashion, and AI is Making the Problem Worse

Fast fashion giant's carbon footprint soars: Shein, the rapidly expanding fast fashion company, has nearly doubled its carbon dioxide emissions in just one year, positioning itself as the fashion industry's largest polluter. In 2023, Shein's total CO2 emissions reached a staggering 16.7 million metric tons, surpassing the annual output of four coal power plants. This significant increase in emissions outpaces the company's revenue growth, raising concerns about the sustainability of its business model. The company's carbon footprint has grown to such an extent that it now exceeds that of its competitors in the fashion industry. AI-driven supply chain management: Shein...
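
As a rough order-of-magnitude check on the coal-plant comparison, assuming a typical plant emits on the order of 3.9 million metric tons of CO2 per year (an assumed figure in line with common equivalency estimates, not one taken from the report):

```python
# Back-of-the-envelope check of the "four coal power plants" comparison.
# The per-plant figure is an assumed typical value, not from the report.
shein_emissions_mt = 16.7     # million metric tons of CO2 in 2023 (reported)
typical_coal_plant_mt = 3.9   # assumed million metric tons of CO2 per plant per year

equivalent_plants = shein_emissions_mt / typical_coal_plant_mt
print(f"Roughly {equivalent_plants:.1f} coal plants' worth of annual CO2")
```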

Sep 12, 2024

Google Pulls Gemini AI Video Amid Accuracy Concerns

Google's Gemini AI demo video scrutinized: Google has voluntarily unlisted a promotional video for its Gemini AI model following an inquiry by an advertising watchdog regarding the accuracy of the demo's depiction of the AI's capabilities. The video, posted in December 2023, showcased Gemini responding to various spoken prompts, including identifying parts of drawings and creating a geography game on the fly. BBB National Programs' National Advertising Division (NAD) questioned whether the video accurately represented Gemini's performance in responding to user voice and video prompts. Google chose to end the inquiry by ceasing promotion of the video, effectively acknowledging potential discrepancies...

Sep 11, 2024

Reflection 70B Developer Breaks Silence on Fraud Accusations

The big picture: Matt Shumer, CEO of OthersideAI, faces accusations of fraud following the release of Reflection 70B, a large language model that failed to replicate its initially claimed performance in independent tests. Shumer introduced Reflection 70B on September 5, 2024, claiming it was "the world's top open-source model" based on impressive benchmark results. Independent evaluators quickly challenged these claims, unable to reproduce the reported performance and raising concerns about the model's authenticity. The controversy has sparked discussions about transparency, validation processes, and ethical considerations in AI model development and release. Timeline of events: The Reflection 70B saga unfolded rapidly,...

Sep 10, 2024

Inside Google’s Now Shuttered Mission to Create an AI-Powered Robot

A decade-long quest for AI-powered robots: Google's Everyday Robots project, led by Hans Peter Brondmo from 2016 to 2023, aimed to create intelligent machines capable of working alongside humans in everyday settings. The project, part of Google X (Alphabet's moonshot division), focused on developing robots that could perform various tasks in unpredictable real-world environments. Despite making significant progress, the project was ultimately shut down in January 2023 due to cost concerns, highlighting the challenges of long-term, complex robotics initiatives. Key challenges in robotics development: The Everyday Robots team faced several obstacles in their pursuit of creating versatile, AI-powered robots for...

Sep 9, 2024

AI Model Sparks Fraud Allegations as Benchmark Claims Unravel

AI model controversy erupts: The release of Reflection 70B, touted as the world's top open-source AI model, has sparked intense debate and accusations of fraud within the AI research community. HyperWrite, a small New York startup, announced Reflection 70B as a variant of Meta's Llama 3.1 large language model (LLM) on September 5, 2024. The model's impressive performance on third-party benchmarks was initially celebrated but quickly called into question. Performance discrepancies emerge: Independent evaluators have failed to reproduce the claimed benchmark results, raising doubts about Reflection 70B's capabilities and origins. Artificial Analysis, an independent AI evaluation organization, reported that their...

Sep 6, 2024

Why This Startup Banned The Use Of AI Chatbots

The AI chatbot dilemma in documentation: Mux, a video technology company, recently tested AI chatbots for their documentation but ultimately decided against implementing them due to concerns about accuracy and potential user confusion. Mux explored AI chatbot solutions to enhance their documentation experience, hoping to provide tailored answers to user queries and bridge gaps in their information architecture. The company's initial tests with AI chatbots trained on their documentation and blog posts yielded disappointing results, with responses that were often inaccurate or misleading. Mux's team was particularly concerned about the chatbots' inability to provide nuanced information about complex topics, such...
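
For context on how documentation chatbots like the ones Mux tested are typically built, the common pattern is retrieval-augmented generation: index the docs, retrieve the passages most relevant to a question, and have a language model answer from them. The sketch below illustrates only the retrieval step, using TF-IDF and invented example passages; Mux has not described its setup in these terms.

```python
# Minimal sketch of the retrieval step behind a typical docs chatbot:
# rank documentation passages by similarity to a user question.
# This is a generic illustration with made-up passages, not Mux's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "To start a live stream, create a live stream object and send RTMP to the ingest URL.",
    "Video assets can be created by uploading a file or passing a URL to the API.",
    "Playback IDs control whether a video is public or signed.",
]

question = "How do I make a video private?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)
query_vec = vectorizer.transform([question])

# Rank passages by cosine similarity; a chatbot would then pass the top
# passages to a language model as context for its answer.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
best = scores.argmax()
print(f"Top passage (score {scores[best]:.2f}): {docs[best]}")
```

The answer quality of such a bot depends heavily on whether the retrieved passages actually contain the nuance the question requires, which is the gap Mux reported running into.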

Sep 5, 2024

Microsoft’s AI-Powered PCs Struggle with Gaming Performance

AI-powered PCs face gaming challenges: Microsoft's new Copilot+ PCs, designed for AI tasks and long battery life, are encountering significant issues with gaming performance due to their Arm-based architecture. The Copilot+ PCs use Qualcomm Snapdragon chips that combine CPU, GPU, and Neural Processing Unit capabilities, but this Arm-based design is incompatible with many popular PC games built for x86 architecture. Microsoft developed Prism, a translation layer similar to Apple's Rosetta 2, to enable x86 apps to run on Arm-based Windows machines, but its effectiveness for gaming has been limited. In a test of 1,300 PC games, only half ran without...
