News/Fails

Jul 25, 2024

AI Startup RunwayML Accused of Secretly Training on YouTubers’ Content

The AI video startup Runway faces backlash following a report that it copied training data from thousands of YouTube videos without permission, raising concerns about the company's practices and the broader issue of AI models being trained on copyrighted content. Key details from the leaked spreadsheet: A former Runway employee allegedly leaked a company spreadsheet to 404 Media, revealing plans to categorize and train on YouTube content from various sources: The spreadsheet included over 3,900 individual YouTube channels, with hashtags indicating the type of content. Channels ranged from media companies like The New Yorker, VICE News, and Netflix to individual...

Jul 25, 2024

Meta’s Oversight Board Exposes Flaws in Instagram’s Deepfake Moderation

Meta's Oversight Board found that Instagram failed to promptly take down an explicit AI-generated deepfake of an Indian public figure, revealing flaws in the company's moderation practices. Key findings and implications: The Oversight Board's investigation reveals that Meta's approach to moderating non-consensual deepfakes is overly reliant on media reports, potentially leaving victims who are not public figures more vulnerable: Meta only removed the deepfake of the Indian woman and added it to its internal database after the board began investigating, while a similar deepfake of an American woman was quickly deleted. The board expressed concern that "many victims of deepfake...

Jul 25, 2024

Anthropic’s Web Crawler Is Apparently Ignoring Companies’ Anti-Scraping Policies

Anthropic's ClaudeBot crawler hit iFixit's website almost a million times in 24 hours, ignoring the company's anti-scraping policies. This raises questions about AI companies' data scraping practices and the limited options available for websites to protect their content. Key details of the incident: iFixit CEO Kyle Wiens revealed that Anthropic's ClaudeBot web crawler accessed the website's servers nearly a million times within a 24-hour period, seemingly violating iFixit's Terms of Use: iFixit's Terms of Use explicitly prohibit reproducing, copying, or distributing any content from the website without express written permission, including using the content for training machine learning or AI...

Jul 19, 2024

WIRED Journalists Recount Their Harrowing AI-Led City Tour

Two journalists embarked on an AI-generated city tour, with chaotic and unexpected results that highlight the current limitations of AI travel planning. The AI tour guide: The writers used Littlefoot, an AI-powered local discovery chatbot, to plan their respective tours in London and New York, each with a $100 budget and specific preferences. Despite Littlefoot's claims of using advanced AI models and information sources, the itineraries generated were often impractical, with recommendations that were either too niche, too vague, or not feasible due to time, location, or budget constraints. The AI struggled with basic details like restaurant opening hours, distances...

Jul 19, 2024

Figma’s AI Design Tool Temporarily Removed After Inadvertently Including Copyrighted Assets

Figma's AI design tool mistakenly included copyrighted assets, leading to its temporary removal for additional quality control measures. Key issues with Figma's AI tool: The "Make Designs" feature, launched as part of Figma's Config event, generated outputs suspiciously similar to Apple's weather app when given certain prompts: A user discovered that asking the AI to design a weather app would produce designs closely resembling Apple's own app, potentially exposing users to legal issues. This finding suggested that Figma may have inadvertently trained the AI model on Apple's proprietary designs, despite CEO Dylan Field's initial statement denying the tool was trained...

Jul 18, 2024

Musk’s AI Calls Trump “Pedophile” Despite Billionaire’s Endorsement, Exposing Safeguard Failures

Despite Elon Musk's endorsement of Donald Trump following a recent assassination attempt, Musk's "anti-woke" AI chatbot Grok has been promoting claims that Trump is a "pedophile" and "wannabe dictator" while referring to the former president as "Psycho." Grok's problematic outputs exposed by Global Witness: The nonprofit Global Witness analyzed Grok's responses to queries about the 2024 US election and found deeply concerning results: Grok repeated or appeared to invent racist tropes about Vice President Kamala Harris, describing her as "a greedy driven two bit corrupt thug" with a laugh like "nails on a chalkboard." The chatbot referred to Trump as...

Jul 18, 2024

Apple Denies Using Unethical Data, Commits to Responsible AI Development

Apple refutes claims that it used unethical data to train Apple Intelligence, affirming its commitment to using only ethically sourced data for its AI projects. Apple's response to allegations: While Apple had used data from a controversial dataset called "the Pile" in the past, it was only for research purposes and not for training Apple Intelligence: Apple stated the Pile data was used solely to train OpenELM research models released in April, which do not power any consumer-facing AI or machine learning features. The company has no plans to build new versions of OpenELM and emphasized that the models were never...

Jul 18, 2024

Figma Disables AI Design Tool After Generating Copies of Existing Apps

Shortly after launching Make Designs in limited beta, Figma learned that an issue with the feature's underlying design system resulted in mocks that resembled existing apps: New components and example screens added to the design system prior to the launch were not vetted carefully enough, with some assets being similar to aspects of real-world applications. When prompted for certain apps, such as a weather app, Make Designs generated designs that felt very similar to existing first-party apps due to these problematic assets. Figma's response: Upon identifying that the problem lay with the underlying design system, Figma took swift action: The assets...

Jul 15, 2024

Google’s Gemini Caught Scanning Private Documents

Google's Gemini AI caught scanning private Google Drive documents without user consent, raising privacy concerns amid the tech industry's AI push. User discovers Gemini AI scanning private files: Kevin Bankston, a Senior Advisor on AI Governance, took to Twitter to share his experience of Google's Gemini AI automatically summarizing his tax return stored in Google Drive without his permission: Bankston was surprised to find that Gemini had ingested and summarized his private document, despite not explicitly asking for this feature. The incident raises serious questions about the extent of control users have over their sensitive information and Google's handling of...

Jul 12, 2024

Hong Kong Firm Relaunches Tech Blogs with Fake Content, Stolen Identities

In a brazen misuse of AI technology, a Hong Kong-based web advertising firm has relaunched classic tech blogs like The Unofficial Apple Weblog (TUAW) and iLounge, populating them with AI-generated content falsely attributed to the original writers. Unethical use of AI and stolen identities: Web Orange Limited, the company behind the relaunch, claims to have purchased the domain names and brand identities but not the original content. They have used AI to reword old articles and generate new ones, attaching the names of former writers without their consent: Christina Warren, a former TUAW writer now at GitHub, discovered her name...

Jul 9, 2024

Amazon’s Alexa AI Update Delay Sparks Concerns Amid Fierce Competition

Amazon's missing Alexa AI update raises questions: Despite announcing an AI-powered, conversational Alexa last year, Amazon has yet to deliver on its promise, leaving many wondering about the tech giant's AI strategy and progress. Falling behind in the AI race: As competitors like Google, Apple, and OpenAI make significant strides in AI-powered assistants, Amazon's Alexa appears to be stagnating: Google unveiled its powerful Gemini AI, while Apple introduced Apple Intelligence at WWDC 2024, promising a smarter, more conversational Siri. If Apple succeeds in integrating Apple Intelligence into its new iPhones this fall, Siri may reclaim its digital assistant leadership position...

Jul 6, 2024

ChatGPT Mac App’s Security Flaw Exposes User Data, Prompting Update

Serious security flaw discovered in ChatGPT's Mac app: OpenAI's recently launched desktop app for Mac was found to be storing user conversations in plain text, potentially exposing sensitive data to unauthorized access. Lack of sandboxing and encryption: The app's security vulnerabilities were highlighted by a user on the social media platform Threads: The app was not sandboxed, meaning it could access private user data without explicit permission, bypassing macOS's built-in defenses that have been in place since version 10.14 (Mojave). User conversations with ChatGPT were stored in plain text in an unprotected location, making them accessible to any running app,...

Jul 3, 2024

Perplexity’s AI Upgrade Marred by Plagiarism Accusations Amid Enhanced Search Capabilities

Perplexity, an AI search startup facing ethical questions, has launched a significant upgrade to its Pro Search tool, claiming it can now handle more complex queries and provide in-depth answers by breaking down problems step-by-step. Enhanced capabilities: Pro Search's new features aim to improve its ability to tackle advanced research and mathematical tasks: The tool can now understand when a question requires planning and work through goals in a step-by-step manner, synthesizing detailed answers more efficiently. Examples showcased by Perplexity demonstrate Pro Search's ability to break down complex queries, such as determining the best time and locations to view the...

Jul 3, 2024

AI-Generated Spam Plagues Google News, Outranking Original Reporting

A recent Google search for "adobe train ai content" revealed that an AI-generated spam article plagiarizing WIRED's original reporting was outranking the legitimate story in Google News results. Despite Google's recent algorithm changes and spam policies aimed at improving search quality, the prevalence of AI-generated spam in news results remains a significant issue. Key details of the AI spam article: The spammy website, Syrus #Blog, had copied WIRED's article with only slight changes to the phrasing and a single hyperlink at the bottom serving as attribution: The plagiarized content appeared in 10 other languages, including many that WIRED produces content...

Jul 3, 2024

Figma’s AI Design Controversy: Apple Similarity Sparks Questions, Prompts Changes

Figma's new generative AI feature, Make Designs, has been pulled after producing designs strikingly similar to Apple's iOS weather app, raising questions about the tool's training data and the company's AI development process. Figma's response and the issue's root cause: Figma CEO Dylan Field and CTO Kris Rasmussen addressed the controversy, revealing key details about the AI tool's development: Figma did not train the AI models used in Make Designs, relying instead on "off-the-shelf models and a bespoke design system." The company attributes the issue to insufficient variation in the commissioned design system, rather than the training data. Rasmussen stated...

Jun 27, 2024

Deceptive “AI Washing” Trend Threatens Trust in Genuine AI Innovation

The rise of "AI washing" is causing companies to overstate their AI capabilities, potentially eroding trust and making it harder for investors to identify truly innovative firms. Key Takeaways: AI washing refers to companies making over-inflated claims about their use of AI, such as using less sophisticated computing while claiming to use AI or overstating the effectiveness of their AI solutions. The phenomenon is driven by competition for funding and the desire to appear cutting-edge, with the percentage of tech start-ups mentioning AI in their pitches rising from 10% in 2022 to an expected 35% in 2024. The lack of...

Jun 24, 2024

TikTok’s AI Avatar Mishap: Hitler Quotes and Misinformation Raise Alarms

TikTok's AI digital avatar tool was accidentally released without guardrails, allowing users to create misleading videos with paid actors reciting anything from Hitler quotes to dangerous misinformation. The incident raises concerns about the potential for abuse and the need for robust content moderation as AI-generated content becomes more prevalent on social media platforms. Key details of the incident: TikTok mistakenly posted a link to an internal version of its new AI digital avatar tool, which allowed users to generate videos without any content restrictions: CNN was able to create videos using the tool that contained quotes from Hitler, Osama bin Laden,...
