How cybercriminals are using sex bots to exploit their victims

AI-powered sex chat services run on hijacked cloud accounts: Cybercriminals are increasingly using stolen cloud credentials to operate and resell AI-powered sex chat services, often bypassing built-in content filters to enable disturbing role-playing scenarios.

  • Researchers at Permiso Security have observed a significant increase in attacks against generative AI infrastructure, particularly Amazon Web Services’ (AWS) Bedrock, over the past six months.
  • These attacks often stem from accidentally exposed cloud credentials or keys, such as those left in public code repositories like GitHub.
  • Investigations revealed that many AWS users had not enabled logging, limiting visibility into the attackers’ activities (a sketch for enabling invocation logging follows this list).
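
Model invocation logging is disabled by default in Bedrock, which is why so many victims had no record of what the attackers generated. The sketch below shows one way to turn it on with boto3; the region, bucket name, and key prefix are illustrative placeholders, not details from the Permiso research.

```python
import boto3

# Bedrock model invocation logging is off by default; this enables delivery
# of prompt/response text to an S3 bucket you control (the bucket must already
# exist and allow Bedrock to write to it).
bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "my-bedrock-invocation-logs",  # placeholder
            "keyPrefix": "bedrock/",
        },
        "textDataDeliveryEnabled": True,   # capture prompt and completion text
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)

# Confirm the configuration took effect.
print(bedrock.get_model_invocation_logging_configuration()["loggingConfig"])
```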

Honeypot experiment reveals alarming trends: Permiso researchers conducted a controlled experiment to understand the scope and nature of these attacks.

  • The team deliberately leaked an AWS key on GitHub while enabling logging to track attacker behavior.
  • Within minutes, the bait key was used to power an AI-powered sex chat service.
  • Over two days, researchers observed more than 75,000 successful model invocations, predominantly of a sexual nature (a sketch for tallying such calls follows this list).
  • Some content veered into darker topics, including child sexual abuse scenarios.
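
Even without full prompt logging, the raw volume of calls made with a leaked key is visible in CloudTrail's event history. A rough counting sketch is below; it assumes InvokeModel calls appear in the region's 90-day event history, and the two-day window simply mirrors the experiment described above.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Count recent Bedrock InvokeModel calls recorded in CloudTrail event history.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
start = datetime.now(timezone.utc) - timedelta(days=2)

count = 0
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    StartTime=start,
):
    count += len(page["Events"])

print(f"InvokeModel calls in the last two days: {count}")
```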

Jailbreaking techniques and ethical concerns: Attackers employ various methods to bypass content restrictions and ethical guardrails built into large language models (LLMs).

  • AWS Bedrock hosts LLMs from Anthropic, which incorporate ethical restrictions on content generation.
  • Attackers use “jailbreak” techniques to evade these restrictions, often by posing elaborate hypothetical scenarios to the AI.
  • These methods can lead to the generation of content involving non-consensual acts, child exploitation, and other illegal activities.

Financial implications and business model: The abuse of cloud credentials for AI-powered sex chats presents a lucrative opportunity for cybercriminals.

  • Attackers host chat services and charge subscribers while using stolen cloud infrastructure to avoid paying for the computing resources.
  • In one instance, security experts at Sysdig documented an attack that could result in over $46,000 of LLM consumption costs per day for the victim.
  • Permiso’s two-day experiment generated a $3,500 bill from AWS, highlighting the potential financial impact on compromised organizations (a back-of-the-envelope calculation follows this list).
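
Putting the reported figures side by side shows why this is attractive to attackers and painful for victims. The arithmetic below uses only the numbers quoted above; the per-day extrapolation simply halves the two-day bill.

```python
# Back-of-the-envelope using the figures reported above.
invocations = 75_000        # invocations Permiso observed over two days
bill_usd = 3_500            # Permiso's two-day AWS bill
sysdig_daily_usd = 46_000   # worst-case daily cost documented by Sysdig

cost_per_invocation = bill_usd / invocations   # ~$0.047 per call
daily_run_rate = bill_usd / 2                  # ~$1,750 per day in the honeypot

print(f"~${cost_per_invocation:.3f} per invocation, ~${daily_run_rate:,.0f} per day")
print(f"Sysdig's worst case is roughly {sysdig_daily_usd / daily_run_rate:.0f}x higher")
```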

Chub.ai and the uncensored AI economy: Researchers suspect that much of this activity may be linked to a platform called Chub.ai.

  • Chub.ai offers a wide selection of pre-made AI characters for users to interact with, including a now-removed “NSFL” (Not Safe for Life) category.
  • The platform charges subscription fees starting at $5 per month and has reportedly generated over $1 million in annualized revenue.
  • Chub.ai’s homepage suggests it resells access to existing cloud accounts, offering “unmetered access to uncensored alternatives.”

Security measures and industry response: Cloud providers and AI companies are taking steps to address these vulnerabilities and abuses.

  • AWS has added Bedrock to the list of services that are quarantined when an account’s credentials are found to be compromised or exposed online.
  • The company recommends customers follow security best practices, such as protecting access keys and avoiding long-term key use (a short-lived-credentials sketch follows this list).
  • Anthropic, the company behind the LLMs used in Bedrock, is working on techniques to make its models more resistant to jailbreaks and collaborating with child safety experts to enhance protections.
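
One practical way to follow the "avoid long-term keys" advice is to hand applications short-lived STS credentials instead of static access keys, so a leaked credential expires on its own. A minimal sketch follows; the role ARN, session name, and one-hour duration are placeholders.

```python
import boto3

# Exchange the caller's identity for temporary credentials by assuming a
# narrowly scoped role; no long-lived access key ends up in code or config.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/bedrock-invoke-only",  # placeholder
    RoleSessionName="app-session",
    DurationSeconds=3600,  # credentials expire after one hour
)["Credentials"]

# Use the temporary credentials for the actual work; if they leak,
# the exposure window is bounded by the session duration.
bedrock_runtime = boto3.client(
    "bedrock-runtime",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```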

Challenges in detection and prevention: The nature of these attacks presents unique challenges for organizations and security professionals.

  • Enabling logging, while necessary for detection, can be expensive and may deter some organizations from implementing it (a lower-cost metric-alarm sketch follows this list).
  • Some attackers have begun including programmatic checks in their code to avoid using AWS keys with prompt logging enabled.
  • The balance between security, cost, and usability remains a significant challenge in addressing these emerging threats.
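
Because full prompt logging adds storage and processing costs, a cheaper first line of defense is to alarm on unusual invocation volume using the metrics Bedrock already emits to CloudWatch. The sketch below assumes the "AWS/Bedrock" namespace and "Invocations" metric are available in the region; the threshold, period, and SNS topic are arbitrary placeholders to be tuned against a real baseline.

```python
import boto3

# Alarm when Bedrock invocation volume spikes far above normal usage.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="bedrock-invocation-spike",
    Namespace="AWS/Bedrock",           # assumed namespace for Bedrock runtime metrics
    MetricName="Invocations",
    Statistic="Sum",
    Period=300,                        # 5-minute buckets
    EvaluationPeriods=1,
    Threshold=1000,                    # placeholder; tune to your own baseline
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # placeholder
)
```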

Broader implications and future concerns: The rise of AI-powered sex chat services exploiting cloud vulnerabilities raises important questions about the future of AI regulation and ethics.

  • This trend highlights the potential for AI technologies to be misused for illegal and morally reprehensible purposes.
  • It underscores the need for stronger security measures, ethical guidelines, and possibly regulatory frameworks in the rapidly evolving field of generative AI.
  • As AI capabilities continue to advance, the challenge of balancing innovation with responsible use and protection against abuse will likely become increasingly complex and urgent.
Source: A Single Cloud Compromise Can Feed an Army of AI Sex Bots (KrebsOnSecurity)
