The rise of AI-powered ‘nudify’ bots on Telegram: A disturbing trend has emerged on the messaging platform Telegram, where millions of users are accessing bots that claim to create explicit deepfake photos or videos of individuals without their consent.
- A WIRED investigation uncovered at least 50 Telegram bots advertising the ability to generate nude or sexually explicit images of people using AI technology.
- These bots collectively have more than 4 million monthly users, according to Telegram’s own statistics, with two bots claiming over 400,000 monthly users each and 14 others exceeding 100,000.
- At least 25 associated Telegram channels were identified, with a combined membership of over 3 million users.
Functionality and access: The bots offer a range of services, primarily targeting women and girls, with varying levels of explicitness and claimed capabilities.
- Many bots advertise the ability to “remove clothes” from existing images or create sexual content featuring specific individuals.
- Users typically need to purchase “tokens” to generate images, creating a financial incentive for bot operators.
- Some bots claim to offer the ability to “train” AI models on images of specific individuals, potentially allowing for more personalized and realistic deepfakes.
Telegram’s response and platform policies: When confronted with the findings of the investigation, Telegram took action to remove the identified bots and channels.
- After being contacted by WIRED, Telegram deleted the 75 bots and channels highlighted in the report.
- However, Telegram’s terms of service are less explicit than those of other major social media platforms about prohibiting this type of content.
- Telegram has faced criticism in the past for hosting harmful content, raising questions about its content moderation practices.
The human impact: Experts warn that these AI-powered tools are causing significant harm and creating a “nightmarish scenario” for victims, particularly women and girls.
- The non-consensual creation and distribution of explicit deepfakes can have devastating personal and professional consequences for those targeted.
- The ease of access to these tools on a popular messaging platform like Telegram amplifies the potential for abuse and harassment.
- The psychological impact on victims can be severe, leading to anxiety, depression, and a loss of trust in digital spaces.
Unique vulnerabilities of Telegram: The platform’s features make it particularly susceptible to hosting and spreading deepfake abuse content.
- Telegram’s robust search functionality makes it easy for users to find these bots and channels.
- The platform’s bot hosting capabilities allow creators to easily deploy and manage these tools.
- Telegram’s sharing features facilitate the rapid spread of generated content among users.
Legal and ethical implications: The proliferation of these AI ‘nudify’ bots raises serious questions about consent, privacy, and the regulation of AI-generated content.
- Many jurisdictions lack specific laws addressing the creation and distribution of non-consensual deepfakes, creating legal gray areas.
- The rapid advancement of AI technology outpaces current regulatory frameworks, making it challenging for lawmakers to address these issues effectively.
- The ethical use of AI in image and video manipulation becomes increasingly important as these tools become more sophisticated and accessible.
Broader context of deepfake technology: While this investigation focuses on Telegram, the issue of non-consensual deepfakes extends beyond any single platform.
- Similar tools and communities exist across various online spaces, including dedicated websites and other social media platforms.
- The technology behind these bots is becoming increasingly sophisticated, making detection and prevention more challenging.
- The potential for misuse extends beyond personal harassment to include political disinformation and corporate sabotage.
The road ahead: Addressing the challenges posed by AI ‘nudify’ bots will require a multifaceted approach involving technology companies, legislators, and society at large.
- Platforms like Telegram may need to implement more robust content moderation policies and proactive detection measures.
- Lawmakers and regulators must work to create comprehensive legal frameworks that address the unique challenges posed by AI-generated content.
- Education and awareness campaigns can help users understand the risks and ethical implications of using or spreading deepfake content.
Analyzing deeper: The prevalence of these AI ‘nudify’ bots on Telegram highlights the complex intersection of technological advancement, online privacy, and societal norms. As AI tools become more capable and more accessible, the potential for both beneficial and harmful applications grows with them. This situation is a stark reminder of the urgent need for ethical guidelines, robust legal frameworks, and responsible platform governance in the rapidly changing landscape of artificial intelligence and digital communication.