News/Fake News

Oct 17, 2025

“Adapt to AI or lose”: GOP uses deepfake video of Chuck Schumer in new attack ad

The National Republican Senatorial Committee has released an attack ad featuring a deepfake video of Senate Minority Leader Chuck Schumer, marking a new escalation in the use of AI-generated content for political campaigns. The synthetic video shows an artificial version of Schumer robotically repeating "every day gets better for us" in reference to the ongoing government shutdown; the quote comes from a real interview in which Schumer was discussing Democratic strategy, but the footage itself is AI-generated. What you should know: The deepfake represents the GOP's latest venture into AI-generated political content, following similar moves by Donald Trump. The video was posted to the Senate Republicans'...

Oct 10, 2025

a16z calls India office expansion reports “entirely fake news”

Andreessen Horowitz (a16z) has firmly denied reports claiming the venture capital firm plans to open an office in India, with general partner Anish Acharya calling the claims "entirely fake news" on X. The denial comes as multiple Indian media outlets reported Thursday that a16z was preparing to establish a Bengaluru office and hire local partners, highlighting the firm's limited presence in one of the world's largest startup ecosystems. What you should know: The fake news reports suggested a16z was actively planning a physical expansion into India's tech hub. Several Indian media outlets cited unnamed sources claiming the firm was setting...

Oct 9, 2025

Personal injury lawyers use AI to create fake but convincing news ads targeting victims

Personal injury lawyers are using artificial intelligence to create fake newscasts and testimonials in advertisements, blurring the line between legitimate journalism and marketing. The trend has accelerated with the recent launch of powerful AI video tools from Meta and OpenAI, making it easier and cheaper for companies to generate convincing synthetic content that can mislead consumers about legal services and potential payouts. The big picture: AI-generated legal ads are becoming increasingly sophisticated, featuring fake news anchors, fabricated victims holding oversized checks, and synthetic influencers promoting legal services as if they were genuine news stories. Key details: Companies like Case Connect...

Oct 6, 2025

Deloitte refunds $440K after AI creates fake citations in Aussie government report

Deloitte Australia will refund the Australian government for a report containing AI-generated fake citations and nonexistent research references that were discovered after publication. The consulting firm quietly admitted to using GPT-4o in an updated version of the report, after initially failing to disclose the AI tool's involvement in producing the $440,000 AUD analysis of Australia's welfare system automation framework. What you should know: The fabricated content was discovered by academics who found their names attached to research that didn't exist. Chris Rudge, Sydney University's Deputy Director of Health Law, noticed citations to multiple papers and publications that did not exist...

Sep 25, 2025

Shatner dismisses hospital rumors at 94, warns fans about AI misinformation

William Shatner took to Instagram on Thursday to dispel reports of a medical emergency, posting a photo with the caption "Rumors of my demise have been greatly exaggerated!" The 94-year-old "Star Trek" icon's public reassurance comes after TMZ reported he was hospitalized Wednesday due to blood sugar issues, highlighting ongoing concerns about misinformation in entertainment reporting. What happened: Shatner was reportedly rushed to a Los Angeles hospital Wednesday afternoon after experiencing blood sugar problems at his home. His agent Harry Gold confirmed to TMZ that the actor was transported as a precautionary measure and discharged the same day after monitoring....

Sep 24, 2025

Federal judge in Puerto Rico fines lawyers $24K for AI-generated fake citations

A federal judge in Puerto Rico has sanctioned two plaintiffs' lawyers for filing court documents containing at least 55 defective case citations in a FIFA lawsuit, ordering them to pay $24,400 in legal fees to opposing counsel. Chief U.S. District Judge Raúl Arias-Marxuach suggested the attorneys likely used AI to prepare their filings despite their denials, highlighting the growing judicial scrutiny of artificial intelligence misuse in legal practice. What you should know: The sanctioned attorneys, José Olmo-Rodríguez and Ibrahim Reyes, represent the Puerto Rico Soccer League in a lawsuit claiming FIFA, soccer's world governing body, conspired to restrict sanctioned tournaments...

Sep 17, 2025

AI tool identifies 1,000+ predatory journals threatening scientific integrity

Scientists at the University of Colorado Boulder have developed an AI tool that can identify predatory scientific journals—fake publications that charge researchers fees but skip the peer review process. The tool successfully identified over 1,000 illegitimate journals out of nearly 15,200 analyzed, addressing a growing threat to scientific integrity that can spread misinformation for decades. The big picture: Predatory journals represent a significant threat to scientific credibility, as demonstrated by the infamous 1998 vaccine-autism study by British doctor Andrew Wakefield, which spread harmful misinformation despite appearing in a reputable journal. How it works: The AI system replicates human analysis...

Sep 12, 2025

Canada education report addressing AI safety ironically includes 15+ fake AI citations

A major education reform report for Newfoundland and Labrador, a Canadian province, contains at least 15 fabricated citations that experts suspect were generated by artificial intelligence, despite the document explicitly calling for ethical AI use in schools. The irony is particularly striking given that the 418-page report, which took 18 months to complete and serves as a 10-year roadmap for modernizing the province's education system, includes recommendations for teaching students about AI ethics and responsible technology use. What you should know: The fake citations include references to non-existent sources that bear hallmarks of AI-generated content. One citation references a 2008...

Sep 11, 2025

AI upscaling tools create fake details in FBI Kirk shooting investigation photos

Internet users are turning to AI tools to upscale and "enhance" blurry FBI surveillance photos of a person of interest in the Charlie Kirk shooting, but these AI-generated images are creating fictional details rather than revealing hidden information. The practice demonstrates how AI upscaling tools can mislead criminal investigations by inferring nonexistent features from low-resolution images. Why this matters: AI upscaling has a documented history of creating false details, including past incidents where it transformed Obama into a white man and added nonexistent features to Trump's appearance, making these "enhanced" images potentially harmful to legitimate investigations. What happened: The FBI posted...

Sep 5, 2025

Medical misinformation via TikTok, Facebook costs US healthcare up to $300M daily, survey finds

Medical misinformation on social media platforms has evolved into a significant economic burden for American healthcare, with AI technology accelerating the spread of false health claims. A 2025 survey of over 1,000 U.S. physicians found that 61% reported patients being influenced by misinformation at least moderately in the past year, with 57% saying it significantly undermines their ability to deliver quality care. The big picture: False health information spreads far faster than accurate content on social platforms (MIT research found falsehoods were roughly 70% more likely to be shared than the truth), because people naturally share novel and emotional content over factual information. The World Health Organization has termed this phenomenon...

Aug 22, 2025

GOP candidate posts photorealistic AI selfie with Democratic leaders without disclosure

New Hampshire state Senator Daniel E. Innis posted an AI-generated fake selfie on social media showing himself with Democratic representatives Nancy Pelosi, Alexandria Ocasio-Cortez, and Chris Pappas during his 2026 GOP Senate campaign. The synthetic image, which lacked AI disclosure and was designed to look like a realistic photograph rather than an obvious illustration, highlights how artificial intelligence is already being deployed in subtle ways to shape political perceptions ahead of the next election cycle. What happened: Innis acknowledged the image was artificially created when questioned, saying his communications team produced it as part of an AI social media trend....

Aug 21, 2025

Wired, Business Insider publish AI-generated articles under fake bylines

Renowned tech publications Wired and Business Insider were caught publishing AI-generated articles under the fake byline "Margaux Blanchard," exposing how sophisticated AI content is infiltrating mainstream journalism. The incident highlights a growing crisis where AI-generated "slop" is eroding trust in online media, with human editors at reputable outlets falling victim to increasingly convincing automated content. What happened: Multiple publications discovered they had been duped by AI-generated articles submitted under a fictitious journalist's name. Wired published "They Fell in Love Playing Minecraft. Then the Game Became Their Wedding Venue," which referenced a non-existent 34-year-old ordained officiant in Chicago. Business Insider ran...

Aug 15, 2025

WIRED investigation finds 100+ YouTube channels using AI for fake celebrity videos

WIRED's investigation has uncovered over 100 YouTube channels using AI to create fake celebrity talk show videos that are fooling viewers despite their obvious artificial nature. These "cheapfake" videos use basic AI voiceovers and still images to generate millions of views, exploiting psychological triggers and YouTube's algorithm to monetize outrage-driven content. What you should know: These AI-generated videos follow predictable patterns designed to trigger emotional responses rather than fool viewers with sophisticated technology. The videos typically feature beloved male celebrities like Mark Wahlberg, Clint Eastwood, or Denzel Washington defending themselves against hostile left-leaning talk show hosts. Despite using only still...

Aug 8, 2025

Perplexity denies Airtel users get downgraded AI search service

Perplexity, an AI-powered search engine company, has denied allegations that Airtel customers receive a downgraded version of its AI search tool, following viral social media claims suggesting the telecom partnership delivers inferior service. The company's clarification comes as misinformation spreads on Reddit about the collaboration, which provides Indian users with free access to Perplexity Pro features worth Rs 17,000 annually. What they're saying: Jesse Dwyer from Perplexity.ai directly addressed the controversy, emphasizing the partnership's authenticity. "Having worked personally with the Airtel team throughout this partnership, and also being familiar with the specific terms of this deal, I can assure you...

Jul 21, 2025

Journalists and Big Fact Check struggle to remain relevant in the age of AI

AI lacks the capability to fully replace journalism despite advances in large language models, as demonstrated by recent analysis showing critical gaps in context understanding and fact verification. This limitation becomes particularly concerning as traditional newsrooms continue to shrink and AI tools increasingly handle content that once required human expertise and investigation. The big picture: Traditional journalism has faced a perfect storm of declining readership, shrinking newsrooms, and reduced editorial courage, leaving fewer human journalists to perform essential watchdog functions. Newsrooms have experienced massive staff cuts over the past decade, while journalists have become "less able to speak truth to...

Jul 2, 2025

False, flagged: Maine police caught using AI to fake drug bust photo on Facebook

The Westbrook Maine Police Department posted an AI-generated image of a supposed drug bust on Facebook, then doubled down and falsely claimed it was real when called out by residents. The incident highlights growing concerns about law enforcement's understanding of AI technology and the potential for digital evidence manipulation. What happened: Police shared an obviously fake photo over the weekend featuring telltale AI artifacts like gibberish text on drug packaging and scales. When AI-savvy locals immediately identified the image as artificial, the department posted a defensive follow-up insisting "this is NOT an AI-generated photo." Officers claimed the "weird"...

May 24, 2025

Chicago Sun-Times and Philadelphia Inquirer both publish AI-generated fictional reading list

The Chicago Sun-Times and Philadelphia Inquirer recently published a summer reading list featuring entirely fictitious books attributed to real authors, marking another prominent case of AI hallucinations infiltrating mainstream journalism. This incident highlights the persistent challenge with generative AI systems, which can produce convincingly realistic content that appears authoritative while being completely fabricated – a particularly concerning development as these tools become more integrated into media production workflows. The big picture: A special section in two major newspapers recommended nonexistent books supposedly written by prominent authors including Isabel Allende, Min Jin Lee, and Pulitzer Prize winner Percival Everett, all generated...

Mar 10, 2025

Oopsie prevention: AI tools now scan scientific papers to catch critical research errors

AI tools are rapidly changing how scientific research is validated, creating a new front in the battle against errors in academic publications. Two pioneering projects have emerged to automatically detect mathematical mistakes, methodological flaws, and reference errors before they propagate through the scientific community. This movement represents a significant shift in how research quality is maintained, potentially reducing the spread of misinformation while strengthening scientific integrity through technological oversight. The big picture: A mathematical error in research about the cancer risks of black plastic cooking utensils has sparked the development of AI tools specifically designed to catch mistakes in scientific papers. The...

Mar 10, 2025

Real news, real time: Indian event draws 5k+ participants to build AI fact-checking for live broadcasts

The TruthTell Hackathon represents a significant milestone in India's effort to combat misinformation through AI-powered fact-checking during live broadcasts. This collaboration between government ministries and the technology sector demonstrates India's commitment to becoming a global leader in AI-powered media solutions, addressing a critical need in an era of widespread digital misinformation. With over 5,650 participants including international competitors, the initiative showcases how public-private partnerships can drive technological innovation in ethical journalism. The big picture: The TruthTell Hackathon, part of the Create in India Challenge – Season 1, aims to revolutionize real-time fact-checking by developing AI-powered tools for use during live...

Feb 23, 2025

Meta and X approved hate speech ads before German elections, study finds

In 2025, social media platforms are struggling to balance content moderation with rapid ad approval processes, particularly around elections. A recent investigation by nonprofit Eko tested Meta and X's ad review systems by submitting inflammatory content ahead of German elections, revealing concerning gaps in hate speech detection. Key findings: The investigation revealed major failures in both Meta and X's advertising review processes when dealing with hateful content targeting religious and ethnic groups. Meta approved 50% of test ads containing explicit hate speech and AI-generated inflammatory imagery within 12 hours of submission; X (formerly Twitter) scheduled all submitted test ads for...

Feb 12, 2025

Uncertainty Training: How AI experts are fighting back against the AI hallucination problem

Virtual assistants and AI language models struggle to acknowledge uncertainty and admit when they don't have accurate information. This problem of AI "hallucination" - where models generate false information rather than admitting ignorance - has become a critical focus for researchers working to improve AI reliability. The core challenge: AI models demonstrate a concerning tendency to fabricate answers when faced with questions outside their training data, rather than acknowledging their limitations. When asked about personal details that aren't readily available online, AI models consistently generate false but confident responses. In a test by WSJ writer Ben Fritz,...

Feb 11, 2025

The myth of the AI energy crisis

The increasing development of artificial intelligence has sparked debate about its energy requirements, with some leaders claiming massive increases in power consumption are needed. Former President Donald Trump has linked AI advancement to expanded fossil fuel usage, while tech executives have focused more on renewable and nuclear solutions. Key claims and counter-arguments: Trump and tech industry leaders have warned of an impending energy crisis to support AI development, with predictions of requiring double current energy production levels. Trump has advocated for increased coal, oil, and gas production to meet projected AI energy demands. Tech executives like Sam Altman and Satya...

Feb 11, 2025

AI chatbots distort news stories, BBC investigation reveals

Artificial intelligence chatbots from major tech companies are struggling with accuracy when summarizing news articles, according to a comprehensive study by the BBC. The research evaluated the performance of ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity across 100 BBC news articles to assess their ability to provide accurate news summaries. Key findings: The BBC's investigation revealed that more than half of all AI-generated summaries contained significant accuracy issues, with particular concerns about factual errors and quote manipulation. 51% of all AI responses contained major accuracy issues; 19% of responses included incorrect statements, numbers, and dates; 13% of quoted material was...

Feb 11, 2025

AI-generated fake security reports frustrate, overwhelm open-source projects

The rise of artificial intelligence has created new challenges for open-source software development, with project maintainers increasingly struggling against a flood of AI-generated security reports and code contributions. A Google survey reveals that while 75% of programmers use AI, nearly 40% have little to no trust in these tools, highlighting growing concerns in the developer community. Current landscape: AI-powered attacks are undermining open-source projects through fake security reports, non-functional patches, and spam contributions. Linux kernel maintainer Greg Kroah-Hartman notes that Common Vulnerabilities and Exposures (CVEs) are being abused by security developers padding their resumes. The National Vulnerability Database (NVD), which...
