Fake News
This new AI tool will tell you if images, texts and videos are AI-generated
The rise of artificial intelligence tools for creating digital content has sparked the development of detection solutions to help users verify the authenticity of online media. The Hive AI Detector is a free Chrome extension designed to identify AI-generated images, videos, and text while browsing the web.

Key Features and Functionality: The Hive AI Detector operates as a background process in Chrome, providing instant analysis of digital content without requiring user registration or personal information.
- The tool can scan images in jpg, png, and webp formats, as well as videos and text content
- Users receive percentage-based likelihood scores indicating...
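To make the "percentage-based likelihood score" concrete, here is a minimal Python sketch of how such a detector could be queried programmatically. The endpoint, authentication scheme, and response field below are illustrative assumptions, not Hive's documented API.

```python
# Minimal sketch of calling an AI-content detection service, assuming a
# hypothetical REST endpoint and response shape. Hive's actual API details
# are not covered in this summary, so all names here are placeholders.
import requests

DETECT_URL = "https://example-detector.invalid/v1/detect"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # assumed bearer-token authentication

def ai_likelihood(image_path: str) -> float:
    """Upload an image and return the reported probability that it is AI-generated."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            DETECT_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response body: {"ai_generated_probability": 0.93}
    return resp.json()["ai_generated_probability"]

if __name__ == "__main__":
    score = ai_likelihood("sample.webp")
    print(f"Likelihood AI-generated: {score:.0%}")
```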
Russian TV duped by hoax about DeepSeek’s Soviet Era Inspiration (Feb 6, 2025)
Russia's state television broadcast a satirical news story claiming China's DeepSeek AI was based on Soviet-era code, highlighting ongoing cultural nostalgia for past technological achievements.

The key development: A fake interview published by Russian satirical website Panorama, falsely attributing DeepSeek's AI technology to 1985 Soviet programming, was broadcast as legitimate news on the state-run Rossiya One television channel.
- The fabricated story featured a fictional interview with DeepSeek founder Liang Wenfeng praising Soviet programmers
- The report claimed the AI code originated from work by Viktor Glushkov, a pioneer who created the first Soviet personal computer
- Glushkov was noted for developing an early...
Apple halts AI news summaries for continuing to spread misinformation (Jan 17, 2025)
Key changes and updates: Apple has temporarily suspended AI-generated notification summaries for news and entertainment applications in the latest iOS 18.3 developer beta while addressing accuracy concerns.
- The suspension specifically targets news and entertainment app categories, with plans to restore functionality in a future update once underlying issues are resolved
- New warning labels now appear in Settings, explicitly stating that "Summaries may contain errors" for apps where the feature remains active
- AI-generated content will be displayed in italics to clearly differentiate it from text written by news outlets or app developers

Enhanced user controls: A new set of features provides...
AI-powered websites are impersonating major sports news outlets (Jan 16, 2025)
Sports media faces a surge in AI-generated content mills impersonating legitimate news outlets, with hundreds of fake sites siphoning both content and advertising revenue from established brands.

The emerging threat: DoubleVerify's recent investigation uncovered over 200 websites populated with AI-generated content and plagiarized material from legitimate news sources, operating under a scheme dubbed "Synthetic Echo."
- These deceptive sites deliberately mimic established media brands like ESPN, NBC, Fox, CBS, and BBC, often using slight variations of their names to appear legitimate
- One notable example, "BBCSportss," systematically copies content from Sports Illustrated while masquerading as a BBC property
- The content typically consists...
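The "slight variations of their names" tactic (e.g. "BBCSportss" standing in for BBC Sport) can be illustrated with a simple string-similarity check. The brand list, threshold, and sample domains below are assumptions for demonstration, not DoubleVerify's actual detection method.

```python
# Illustrative sketch: flag domain names that closely resemble known sports-media
# brands without matching them exactly, the impersonation pattern described in
# the "Synthetic Echo" scheme. Brand list and threshold are assumed values.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["bbcsport", "espn", "nbcsports", "foxsports", "cbssports"]

def lookalike(domain: str):
    """Return (closest brand, similarity 0-1, suspicious flag) for a domain."""
    name = domain.lower().split(".")[0]  # e.g. "bbcsportss.com" -> "bbcsportss"
    best = max(KNOWN_BRANDS, key=lambda b: SequenceMatcher(None, name, b).ratio())
    score = SequenceMatcher(None, name, best).ratio()
    suspicious = score >= 0.8 and name != best  # near-match, but not the real brand name
    return best, score, suspicious

for candidate in ["bbcsportss.com", "bbcsport.com", "weather.org"]:
    brand, score, suspicious = lookalike(candidate)
    print(f"{candidate:16} closest={brand:10} similarity={score:.2f} "
          f"{'SUSPICIOUS' if suspicious else 'ok'}")
```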
How AI deepfakes convinced the world that the Hollywood sign burned down (Jan 14, 2025)
The recent Los Angeles wildfires sparked widespread misinformation when AI-generated images falsely depicted the Hollywood sign burning down, highlighting the growing challenge of distinguishing real from artificial content during crisis situations.

Current situation: Los Angeles County is battling devastating wildfires that have claimed ten lives, destroyed 10,000 structures, and forced over 130,000 residents to evacuate.
- The Pacific Palisades neighborhood has suffered extensive damage
- A suspected arsonist, allegedly armed with a flamethrower, was arrested in connection with the Kenneth fire
- Official fire incident maps from CAL FIRE confirm the Hollywood sign remains unaffected by the blazes

Viral misinformation spread: AI-generated images...
Apple’s new AI summaries inadvertently make scam messages appear legit (Jan 9, 2025)
Apple's new AI-powered notification system is inadvertently lending credibility to scam messages by summarizing and prioritizing them alongside legitimate communications on iPhones and Mac computers.

Key developments: Apple's "Apple Intelligence" update, rolled out to Australian users in late 2024, includes features that summarize notifications and prioritize certain alerts using artificial intelligence.
- The system condenses multiple notifications into single messages and flags what it determines to be urgent communications
- This AI-powered feature is being applied to both legitimate messages and scam attempts without discrimination
- Apple has already faced criticism for incorrectly summarizing BBC headlines, including a notable error regarding a CEO's...
AI models easily absorb medical misinformation, study finds (Jan 9, 2025)
Large language models (LLMs) can be easily compromised with medical misinformation by altering just 0.001% of their training data, according to new research from New York University.

Key findings: Researchers discovered that injecting a tiny fraction of false medical information into LLM training data can significantly impact the accuracy of AI responses.
- Even when misinformation made up just 0.001% of training data, over 7% of the LLM's answers contained incorrect medical information
- The compromised models passed standard medical performance tests, making the poisoning difficult to detect
- For a large model like LLaMA 2, researchers estimated it would cost under $100...
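To put the 0.001% figure in perspective, here is a back-of-the-envelope calculation. The 2-trillion-token corpus size is LLaMA 2's publicly reported training scale and the tokens-per-article figure is an assumption for illustration; neither number is taken from the NYU study itself.

```python
# Back-of-the-envelope sketch: how much text 0.001% of a large training corpus is.
# Corpus size (~2T tokens, LLaMA 2's reported scale) and tokens-per-article are
# illustrative assumptions, not values from the study.
CORPUS_TOKENS = 2_000_000_000_000   # ~2 trillion training tokens
POISON_FRACTION = 0.00001           # 0.001% expressed as a fraction
TOKENS_PER_ARTICLE = 1_000          # assumed length of one fabricated medical article

poison_tokens = CORPUS_TOKENS * POISON_FRACTION
articles_needed = poison_tokens / TOKENS_PER_ARTICLE

print(f"Poisoned tokens needed: {poison_tokens:,.0f}")                 # 20,000,000
print(f"Fabricated articles at ~1k tokens each: {articles_needed:,.0f}")  # 20,000
```

Under these assumptions, the attack amounts to roughly 20 million tokens of fabricated text, which helps explain why the researchers could estimate such a low cost to generate it.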
Apple to revamp notifications after misinformation concerns (Jan 7, 2025)
Apple is implementing changes to its Apple Intelligence notification system following complaints about AI-generated summaries spreading misinformation.

The core issue: Apple Intelligence's smart notification feature has been combining multiple news stories and creating inaccurate summaries, leading to the spread of false information.
- The BBC filed a formal complaint after notifications carrying its logo presented false information about a high-profile criminal case
- One notable error claimed that Luigi Mangione, accused of killing UnitedHealthcare CEO Brian Thompson, had died by suicide when he was actually alive
- The feature is currently limited to devices running iOS 18.2 or macOS 15.2

Technical context: The...
Apple Intelligence caught spreading false headlines again (Jan 6, 2025)
Apple's AI-powered notification summary feature, Apple Intelligence, has produced multiple instances of false headlines, including incorrect claims about a CEO's death, sports outcomes, and personal announcements about public figures.

Recent incidents: Apple Intelligence's BBC app integration has generated several inaccurate news headlines, raising concerns about AI reliability in news distribution.
- The system falsely reported that murder suspect Luigi Mangione had committed suicide
- It incorrectly announced darts player Luke Littler's victory in the PDC World Championship before the final match
- A notification erroneously stated that Spanish tennis player Rafael Nadal had come out as gay, confusing him with Brazilian player Joao Lucas Reis...
US sanctions Iran and Russia for deepfakes, AI-powered election interference (Jan 1, 2025)
Microsoft has identified Iranian-backed hackers manipulating Americans through AI-generated fake news sites, primarily targeting those susceptible to election boycott messaging based on candidates' Israel support.

Key Details of Sanctions: The U.S. Treasury and State Departments have imposed sanctions on specific Iranian and Russian entities for attempting to interfere in the 2024 election through deepfakes and influence campaigns.
- The sanctions target Iran's Cognitive Design Production Center (CDPC), a unit of the Islamic Revolutionary Guard Corps
- The Moscow-based Center for Geopolitical Expertise (CGE), affiliated with Russian intelligence (GRU), and its director Valery Mikhaylovich Korovin also face sanctions
- The measures prohibit U.S. persons from...
AI-made Lil Wayne diss track sparks online frenzy (Dec 20, 2024)
The intersection of artificial intelligence and hip-hop culture has created new challenges for artists and fans alike, as AI-generated content begins to blur the lines between authentic and artificial music.

The latest controversy: A fabricated rap feud between Lil Wayne and Kendrick Lamar has gained traction online, fueled by AI-generated diss tracks and speculation about Super Bowl performance selections.
- Rumors began circulating after the NFL chose Kendrick Lamar over New Orleans native Lil Wayne for the upcoming Super Bowl halftime show
- The situation was complicated by Lil Wayne's loyalty to Drake, who has a history of tension with Lamar
- AI-generated...
AI-generated article allegedly to blame for Steve Harvey death hoax (Dec 19, 2024)
Breaking news confusion: A false report claiming Steve Harvey's death circulated widely on news aggregation platforms, causing unnecessary alarm and confusion among fans.
- The hoax originated on Trend Cast News with the headline "Steve Harvey Passed Away Today: Remembering The Legacy Of A Comedy Legend"
- The story gained significant traction after being shared by Newsbreak, a platform with over 50 million monthly users
- The fake article notably included a future publication date of December 19, 2024

Ongoing platform issues: Newsbreak's involvement in spreading the misinformation reflects a broader pattern of challenges with AI-generated content on its platform.
- The platform previously...
Instagram chief: Creator identity is crucial to content authentication in AI era (Dec 16, 2024)
The rise of AI-generated content is prompting social media platforms to rethink how they help users distinguish between authentic and artificial content.

Key statement from Instagram leadership: Instagram head Adam Mosseri emphasizes that source verification is becoming increasingly critical as AI-generated content becomes more convincing and widespread.
- Mosseri specifically warns users against automatically trusting images they encounter online, noting that AI is now producing highly realistic content
- He stresses that social platforms have a responsibility to label AI-generated content, while acknowledging that some will inevitably slip through detection systems
- The Instagram chief advocates for providing additional context about content creators...
Rapper 50 Cent shares fake AI video of Jay-Z and Diddy being arrested (Dec 11, 2024)
There's new controversy in the hip-hop community as rapper 50 Cent uses artificial intelligence to mock fellow artists facing serious legal challenges.

Latest developments: Rapper 50 Cent (Curtis Jackson) has shared an AI-generated video depicting Jay-Z and Diddy being arrested, amid serious legal allegations against both music moguls.
- The artificial intelligence video shows both men in tuxedos being arrested at a party and transported to jail while holding wine glasses
- Jackson captioned the post with a joke about potential retaliation: "I want to post this but I'm afraid I'm gonna get shot"
- Social media reactions were mixed, with some followers...
Scammers appropriate website of defunct Oregon paper to publish AI slop (Dec 11, 2024)
The proliferation of AI-generated fake news websites is threatening local journalism, as demonstrated by scammers who hijacked the defunct Ashland Daily Tidings' digital presence to create a fraudulent news operation.

The takeover scheme: A group of scammers appropriated the website of the Ashland Daily Tidings, a historic Oregon newspaper that closed in 2023 after operating since 1876, to create a deceptive news operation.
- The fraudulent website claimed to employ eight reporters, but investigation revealed these were either fictional personas or stolen identities
- Content was primarily AI-generated, consisting of plagiarized local news stories that were automatically rewritten
- The operation aimed to...
Stanford professor admits ChatGPT added false information to his court filing (Dec 4, 2024)
The use of AI tools in legal and academic contexts faces new scrutiny after a prominent misinformation researcher acknowledged AI-generated errors in a court filing.

The core incident: Stanford Social Media Lab founder Jeff Hancock admitted to using ChatGPT's GPT-4o model while preparing citations for a legal declaration, resulting in the inclusion of fabricated references.
- The document was filed in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law
- The law is currently being challenged in federal court by conservative YouTuber Christopher Kohls and Minnesota state Rep. Mary Franson
- Attorneys for the challengers requested the document...
BlueSky expands moderation team 4x amid AI-powered disinformation (Dec 1, 2024)
Social media platform BlueSky is expanding its moderation efforts and user base, marking a significant shift in the competitive landscape among Twitter-alternative platforms.

Key developments: BlueSky is strategically positioning itself as a more regulated alternative to X (formerly Twitter) by significantly expanding its content moderation capabilities.
- The platform is quadrupling its moderation team from 25 to 100 contract workers to manage its growing user base
- BlueSky has reached a milestone of 22 million total users, attracting particular interest from scientists and science writers
- The European Federation of Journalists plans to cease publishing content on X starting January 20, 2025, citing...
Stanford professor accused of using fake AI citations in deepfake debate (Nov 24, 2024)
The growing prevalence of artificial intelligence in academic and legal contexts has led to another high-profile case of potentially AI-generated false citations, this time involving a Stanford professor's legal argument about election-related deepfakes.

Core allegations: Stanford professor Jeff Hancock, a prominent misinformation researcher, faces accusations of using AI-hallucinated citations in his legal argument supporting Minnesota's anti-deepfake election law.
- Multiple journalists and legal scholars have been unable to verify key studies cited in Hancock's document, including one titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance"
- The situation has raised concerns about the reliability of Hancock's entire...
EditProAI is a new AI tool that’s going viral — Don’t fall for its scam (Nov 18, 2024)
The rise of AI-generated content has created new opportunities for cybercriminals to distribute malware through fake AI tool websites.

Latest cyber threat alert: A fraudulent AI image and video generator website called EditProAI is being used to distribute malware that targets both Windows and macOS systems.
- The malicious website appears legitimate with professional menus and privacy policies but delivers harmful malware when users click the "Get Now" button
- Windows users receive Lumma Stealer malware through a file named "Edit-ProAI-Setup-newest_release.exe"
- Mac users are targeted with AMOS malware via a file called "EditProAi_v.4.36.dmg"

Distribution tactics: The cybercriminals behind EditProAI are leveraging both...
AI likely impacted the 2024 election, though not in the ways many expected (Nov 16, 2024)
The 2024 U.S. election marked a significant shift in how artificial intelligence influenced political discourse, with AI-generated content serving more as a tool for emotional messaging than direct disinformation.

The evolving landscape of AI in politics: The 2024 election cycle witnessed widespread use of AI-generated imagery and videos, but not in the ways many experts had initially feared.
- Rather than creating convincing deepfakes to deceive voters, AI was primarily used to create obvious satirical content and political propaganda
- Notable examples included images of Donald Trump with Superman-like features and Kamala Harris in symbolic scenarios with communist imagery
- The technology became...
Gaming giant Razer pivots to business market with environmental impact AI tool (Nov 15, 2024)
The gaming hardware giant Razer is making an unexpected move into the enterprise sustainability software market with an AI-powered environmental impact assessment tool.

The big picture: Razer's new Gaiadex platform aims to revolutionize how companies evaluate and report their environmental impact through automated Life Cycle Assessments (LCAs) and Environmental Product Declarations (EPDs).
- GE Healthcare and Malaysian bank Maybank are the first major clients to adopt the Gaiadex platform
- The tool is designed to work across all industries, marking a significant departure from Razer's traditional gaming hardware focus
- Gaiadex promises to compress months of environmental impact analysis work into seconds using...
AI, voter manipulation and the future of social media (Nov 14, 2024)
The rise of artificial intelligence and social media platforms is reshaping political campaigns and information dissemination, creating new challenges for electoral integrity and democratic processes.

Current landscape: The intersection of AI, social media algorithms, and political campaigning has created unprecedented challenges in managing information flow during elections.
- Platform owners like X (formerly Twitter) have significant control over information prioritization and user exposure through their algorithms
- Wealthy individuals and tech leaders can now exert outsized influence on political discourse through platform ownership
- The combination of algorithmic content curation and platform control has created information environments that may favor certain political perspectives...
Grok criticizes creator Musk for spreading misinformation (Nov 12, 2024)
Elon Musk's AI chatbot Grok has publicly acknowledged its creator's role in spreading misinformation, highlighting the complex relationship between AI systems and their creators' public statements.

Key development: The AI chatbot Grok, developed by Musk's xAI company, has explicitly confirmed Musk's role in spreading misinformation when directly questioned about the topic.
- Grok specifically cited Musk's posts about elections containing misleading or false claims that have reached billions of viewers
- The AI identified instances of Musk sharing manipulated videos and debunked claims about voting processes
- A recent example involved Musk mischaracterizing a video of thieves stealing air conditioners as political violence...
Why AI failed to significantly impact the 2024 election (Nov 11, 2024)
The 2024 U.S. presidential election proved more resilient to artificial intelligence disruption than many experts initially predicted, with major AI platforms implementing safeguards and potential threats being quickly identified and addressed.

Initial concerns and reality: Early warnings about AI's potential to disrupt the 2024 election through misinformation and voter manipulation have largely failed to materialize.
- A fraudulent AI-generated robocall impersonating President Biden in New Hampshire was swiftly addressed and penalized
- High-profile instances of AI misinformation, including a fake Taylor Swift endorsement of Donald Trump, were quickly identified and debunked
- Political campaigns showed reluctance to embrace AI tools, limiting their potential...