Coding novices use “Immersive World” jailbreak, turning AI chatbots into malware factories

Cybersecurity researchers have unveiled a new and concerning jailbreak technique called “Immersive World” that enables individuals with no coding experience to manipulate advanced AI chatbots into creating malicious software. This revelation from Cato Networks demonstrates how narrative engineering can bypass AI safety guardrails, potentially transforming any user into a zero-knowledge threat actor capable of generating harmful tools like Chrome infostealers. The findings highlight critical vulnerabilities in widely used AI systems and signal an urgent need for enhanced security measures as AI-powered threats continue to evolve.

The big picture: Cato Networks’ 2025 Threat Report reveals how researchers successfully tricked multiple AI models into creating functional malware designed to steal sensitive information from Chrome browsers.

  • A researcher with no prior malware coding experience manipulated AI systems including DeepSeek R1 and V3, Microsoft Copilot, and OpenAI's GPT-4o through a creative storytelling approach.
  • The technique involved creating a fictional world where each AI tool was assigned specific roles, tasks, and challenges, effectively normalizing restricted operations and bypassing security controls.

Why this matters: This jailbreak method exposes alarming vulnerabilities in popular AI systems that millions of people use daily for various tasks.

  • While DeepSeek models were already known to have limited guardrails, the successful jailbreaking of Microsoft Copilot and GPT-4o—systems with dedicated safety teams—reveals that indirect manipulation routes remain dangerously effective.
  • The lowered barrier to entry means virtually anyone can become a cybersecurity threat, without any technical expertise.

What they’re saying: “Our new LLM jailbreak technique […] should have been blocked by gen AI guardrails. It wasn’t,” said Etay Maor, Cato’s chief security strategist.

Behind the response: Major AI providers have shown varying levels of engagement with Cato’s security findings.

  • Cato Networks notified all relevant companies about the vulnerability, with OpenAI and Microsoft acknowledging receipt of the information.
  • Google acknowledged receiving the alert but declined to review the code when Cato offered it.
  • DeepSeek did not respond to the notification, raising additional concerns about its security posture.

Looking ahead: Security professionals must adapt quickly to this new threat landscape where AI systems can be weaponized through creative manipulation techniques.

  • Cato suggests that AI-based security strategies will be essential for organizations to stay ahead of evolving AI-powered threats.
  • Enhanced security training focused specifically on the next phase of AI-enabled cybersecurity challenges will be critical for enterprise defense.