YouTube gaming commentator Mark Brown is facing an increasingly common problem in the AI era: someone has stolen his voice. A channel called Game Offline Lore published videos using an AI-generated version of Brown’s distinctive British voice, creating content he never narrated or authorized. This incident highlights how AI voice cloning is enabling a new form of identity theft that goes beyond traditional content plagiarism to appropriate someone’s actual vocal identity.
The big picture: A YouTube channel is using an AI clone of gaming commentator Mark Brown’s voice without permission, representing a disturbing evolution of digital identity theft.
- The unauthorized videos feature narration that sounds exactly like Brown but covers content he never created, including explanations of games like “Doom: The Dark Ages.”
- Brown, whose Game Maker’s Toolkit channel has 1.65 million subscribers, describes the experience as “weird and invasive” and “like plagiarism, but more personal.”
Why this matters: Voice cloning represents a particularly intimate form of digital impersonation that threatens content creators’ control over their own identities.
- Unlike traditional content theft that copies work, voice cloning appropriates a “distinct part of who I am,” as Brown describes it.
- This case demonstrates how AI fraud is expanding beyond deepfake videos to include audio impersonation that can damage creators’ reputations and mislead their audiences.
Behind the numbers: AI-driven fraud has become sophisticated enough to happen in real time, making detection and prevention increasingly difficult.
- Brown’s channel features 220 videos with in-depth explanations of game design elements like puzzle mechanics in “Blue Prince” or UI problems in “The Legend of Zelda.”
- The impersonator has been actively managing the deception by removing comments that point out the voice theft.
The response: YouTube has systems in place for addressing voice theft, but enforcement appears inconsistent.
- Brown filed a privacy complaint to YouTube, which typically gives offenders 48 hours to remove content before the platform intervenes.
- Despite this policy, Brown reported that more than 48 hours had passed without action, with both infringing videos remaining live.
What they’re saying: YouTube acknowledges the problem but hasn’t yet addressed this specific case.
- YouTube spokesperson Jack Malon told WIRED that the platform expanded its privacy request policy last year “to allow users to request the removal of AI-generated or other synthetic or altered content that simulates their face or voice.”
- Malon stated the company is “reviewing the content to determine if a violation has been made” and “will take action if the content violates our policies.”