Discord users in the UK are bypassing new age verification requirements by using video game characters instead of their real faces, exploiting weaknesses in facial recognition systems. The workaround highlights significant vulnerabilities in age verification technology that could become even more problematic as AI-generated content grows more sophisticated.
What’s happening: The UK’s new child safety laws require platforms like Discord to verify users are over 18 through government ID or face scans to access age-restricted content.
- Users discovered they could pass facial verification using screenshots from video games like Death Stranding, God of War, and Cyberpunk 2077.
- One user successfully verified using Norman Reedus's character from Death Stranding, while others used characters from games ranging from Baldur's Gate 3 to Garry's Mod.
- The bypass method quickly went viral on social media, inspiring others to try similar approaches.
Why this matters: These vulnerabilities expose fundamental flaws in age verification systems just as they’re being implemented globally, with potential implications for child safety and data privacy.
- Companies like Google are rolling out AI-driven age estimation for Search and YouTube.
- Gaming platforms like Roblox are making age checks central to safety measures.
- The ease of bypassing current systems raises questions about their effectiveness in protecting children.
The technical challenge: Experts warn that current verification methods have too many exploitable loopholes, a problem that will only worsen as AI technology advances.
- “The industry is trying to find solutions to the issue of AI deepfakes and live AIs,” says David Maimon, head of fraud insights for SentiLink, a fraud prevention company.
- Bad actors typically stay "7 to 12 months ahead" of security technologies in finding vulnerabilities to bypass them.
- Even photo IDs can be convincingly faked with modern printing techniques and materials.
What the users are saying: People bypassing the systems cite privacy concerns and philosophical opposition to mandatory verification.
- “Requiring people to give up facial information to access all the features of websites and apps like Discord and Bluesky is a massive overreach,” one user told WIRED.
- Users worry about data breaches, pointing to incidents like the Tea app breach that exposed thousands of women’s verification photos.
- “I don’t trust the third party services that are being used with my data, especially with how damaging data leaks can be,” another user explained.
The bigger picture: The situation illustrates a broader tension between child safety goals and practical implementation challenges.
- Critics argue these systems push young people toward less regulated corners of the internet rather than protecting them.
- Alternative approaches might need to rely more on historical data like phone numbers and addresses rather than biometric verification.
- The proliferation of real-time AI video generation technology could make current verification methods obsolete.