YouTube gaming commentator Mark Brown is facing an increasingly common problem in the AI era: someone has stolen his voice. A channel called Game Offline Lore published videos using an AI-generated version of Brown’s distinctive British voice, creating content he never narrated or authorized. This incident highlights how AI voice cloning is enabling a new form of identity theft that goes beyond traditional content plagiarism to appropriate someone’s actual vocal identity.
The big picture: A YouTube channel is using an AI clone of gaming commentator Mark Brown’s voice without permission, representing a disturbing evolution of digital identity theft.
- The unauthorized videos feature narration that sounds exactly like Brown but covers content he never created, including explanations of games like “Doom: The Dark Ages.”
- Brown, whose Game Maker’s Toolkit channel has 1.65 million subscribers, describes the experience as “weird and invasive” and “like plagiarism, but more personal.”
Why this matters: Voice cloning represents a particularly intimate form of digital impersonation that threatens content creators’ control over their own identities.
- Unlike traditional content theft that copies work, voice cloning appropriates a “distinct part of who I am,” as Brown describes it.
- This case demonstrates how AI fraud is expanding beyond deepfake videos to include audio impersonation that can damage creators’ reputations and mislead their audiences.
Behind the numbers: Brown’s distinctive voice anchors a large, recognizable body of work, which is exactly what makes the clone convincing and the deception hard to unwind.
- Brown’s channel features 220 videos with in-depth explanations of game design elements like puzzle mechanics in “Blue Prince” or UI problems in “The Legend of Zelda.”
- The impersonator has been actively managing the deception by removing comments that point out the voice theft.
The response: YouTube has systems in place for addressing voice theft, but enforcement appears inconsistent.
- Brown filed a privacy complaint with YouTube, which typically gives the offending uploader 48 hours to remove the content before the platform intervenes.
- Despite this policy, Brown reported that more than 48 hours had passed without action, and both infringing videos remained live.
What they’re saying: YouTube acknowledges the problem but hasn’t yet addressed this specific case.
- YouTube spokesperson Jack Malon told WIRED that the platform expanded its privacy request policy last year “to allow users to request the removal of AI-generated or other synthetic or altered content that simulates their face or voice.”
- Malon stated the company is “reviewing the content to determine if a violation has been made” and “will take action if the content violates our policies.”