Gaming YouTuber claims AI voice cloning has taken the words right out of his mouth

YouTube gaming commentator Mark Brown is facing an increasingly common problem in the AI era: someone has stolen his voice. A channel called Game Offline Lore published videos using an AI-generated version of Brown’s distinctive British voice, creating content he never narrated or authorized. This incident highlights how AI voice cloning is enabling a new form of identity theft that goes beyond traditional content plagiarism to appropriate someone’s actual vocal identity.

The big picture: A YouTube channel is using an AI clone of gaming commentator Mark Brown’s voice without permission, representing a disturbing evolution of digital identity theft.

  • The unauthorized videos feature narration that sounds exactly like Brown but covers content he never created, including explanations of games like “Doom: The Dark Ages.”
  • Brown, whose Game Maker’s Toolkit channel has 1.65 million subscribers, describes the experience as “weird and invasive” and “like plagiarism, but more personal.”

Why this matters: Voice cloning represents a particularly intimate form of digital impersonation that threatens content creators’ control over their own identities.

  • Unlike traditional content theft that copies work, voice cloning appropriates a “distinct part of who I am,” as Brown describes it.
  • This case demonstrates how AI fraud is expanding beyond deepfake videos to include audio impersonation that can damage creators’ reputations and mislead their audiences.

Behind the numbers: Brown's reputation rests on a deep catalog of original analysis, which the cloned channel is trading on while actively covering its tracks.

  • Brown’s channel features 220 videos offering in-depth breakdowns of game design elements, such as the puzzle mechanics in Blue Prince and UI problems in The Legend of Zelda.
  • The impersonator has been actively managing the deception by removing comments that point out the voice theft.

The response: YouTube has systems in place for addressing voice theft, but enforcement appears inconsistent.

  • Brown filed a privacy complaint to YouTube, which typically gives offenders 48 hours to remove content before the platform intervenes.
  • Despite this policy, Brown reported that more than 48 hours had passed without action, with both infringing videos remaining live.

What they’re saying: YouTube acknowledges the problem but hasn’t yet addressed this specific case.

  • YouTube spokesperson Jack Malon told WIRED that the platform expanded its privacy request policy last year “to allow users to request the removal of AI-generated or other synthetic or altered content that simulates their face or voice.”
  • Malon stated the company is “reviewing the content to determine if a violation has been made” and “will take action if the content violates our policies.”
Source: WIRED, “A Gaming YouTuber Says an AI-Generated Clone of His Voice Is Being Used to Narrate 'Doom' Videos”
