AI ‘nudify’ bots are abusing millions on Telegram

The rise of AI-powered ‘nudify’ bots on Telegram: A disturbing trend has emerged on the messaging platform Telegram, where millions of users are accessing bots that claim to create explicit deepfake photos or videos of individuals without their consent.

  • A WIRED investigation uncovered at least 50 Telegram bots advertising the ability to generate nude or sexually explicit images of people using AI technology.
  • According to Telegram’s own statistics, these bots collectively have more than 4 million monthly users: two claim over 400,000 monthly users each, and 14 others exceed 100,000.
  • At least 25 associated Telegram channels were identified, with a combined membership of over 3 million users.

Functionality and access: The bots offer a range of services, primarily targeting women and girls, with varying levels of explicitness and claimed capabilities.

  • Many bots advertise the ability to “remove clothes” from existing images or create sexual content featuring specific individuals.
  • Users typically need to purchase “tokens” to generate images, creating a financial incentive for bot operators.
  • Some bots claim to offer the ability to “train” AI models on images of specific individuals, potentially allowing for more personalized and realistic deepfakes.

Telegram’s response and platform policies: When confronted with the findings of the investigation, Telegram took action to remove the identified bots and channels.

  • After being contacted by WIRED, Telegram deleted the 75 bots and channels highlighted in the report.
  • However, Telegram’s terms of service are less detailed than those of other major social media platforms when it comes to prohibiting this type of content.
  • Telegram has faced criticism in the past for hosting harmful content, raising questions about its content moderation practices.

The human impact: Experts warn that these AI-powered tools are causing significant harm and creating a “nightmarish scenario” for victims, particularly women and girls.

  • The non-consensual creation and distribution of explicit deepfakes can have devastating personal and professional consequences for those targeted.
  • The ease of access to these tools on a popular messaging platform like Telegram amplifies the potential for abuse and harassment.
  • The psychological impact on victims can be severe, leading to anxiety, depression, and a loss of trust in digital spaces.

Unique vulnerabilities of Telegram: The platform’s features make it particularly susceptible to hosting and spreading deepfake abuse content.

  • Telegram’s built-in search makes these bots and channels easy for users to discover.
  • The platform’s bot hosting capabilities allow creators to easily deploy and manage these tools.
  • Telegram’s sharing features facilitate the rapid spread of generated content among users.

Legal and ethical implications: The proliferation of these AI ‘nudify’ bots raises serious questions about consent, privacy, and the regulation of AI-generated content.

  • Many jurisdictions lack specific laws addressing the creation and distribution of non-consensual deepfakes, creating legal gray areas.
  • The rapid advancement of AI technology outpaces current regulatory frameworks, making it challenging for lawmakers to address these issues effectively.
  • The ethical use of AI in image and video manipulation becomes increasingly important as these tools become more sophisticated and accessible.

Broader context of deepfake technology: While this investigation focuses on Telegram, the issue of non-consensual deepfakes extends beyond any single platform.

  • Similar tools and communities exist across various online spaces, including dedicated websites and other social media platforms.
  • The technology behind these bots is becoming increasingly sophisticated, making detection and prevention more challenging.
  • The potential for misuse extends beyond personal harassment to include political disinformation and corporate sabotage.

The road ahead: Addressing the challenges posed by AI ‘nudify’ bots will require a multifaceted approach involving technology companies, legislators, and society at large.

  • Platforms like Telegram may need to implement more robust content moderation policies and proactive detection measures.
  • Lawmakers and regulators must work to create comprehensive legal frameworks that address the unique challenges posed by AI-generated content.
  • Education and awareness campaigns can help users understand the risks and ethical implications of using or spreading deepfake content.

Analyzing deeper: The prevalence of these AI ‘nudify’ bots on Telegram highlights the complex intersection of technological advancement, online privacy, and societal norms. As AI continues to evolve, the potential for both beneficial and harmful applications grows exponentially. This situation serves as a stark reminder of the urgent need for ethical guidelines, robust legal frameworks, and responsible platform governance in the rapidly changing landscape of artificial intelligence and digital communication.

Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram
