Cybercriminals are exploiting artificial intelligence tools by embedding hidden prompts in web pages that trick AI assistants into revealing private information, downloading malicious code, or connecting to unsafe sites. These attacks use invisible white text on white backgrounds that users can’t see but AI systems can read and interpret as commands, affecting popular browsers like Chrome and Edge as well as AI tools like Perplexity.
How the attack works: Criminals hide malicious instructions in web content using white text on white backgrounds, making them invisible to human users while remaining readable to AI systems.
• When users search for innocent queries like “best gifts for a 9 year old,” AI assistants may browse pages containing these hidden commands and interpret them as legitimate instructions rather than webpage content.
• The hidden prompts can manipulate AI tools into sending sensitive data to hackers without the user’s knowledge.
Real-world impact: Security researchers have documented actual cases of these attacks successfully fooling AI systems in major browsers and AI-powered tools.
• The attacks are currently rare but are happening in real-world scenarios, not just theoretical demonstrations.
• Browser developers are working to patch these vulnerabilities, but the rapid pace of AI technology development makes it challenging to stay ahead of new attack methods.
How to protect yourself: Users can disable AI features in their browsers if they prefer not to use them.
• In Chrome, click the beaker icon in the upper right corner, open settings, and toggle off AI Mode and AI in Search.
• Keep browsers and operating systems updated since they often include patches for newly discovered vulnerabilities.
Why this matters: The attacks highlight a new category of cybersecurity threats that specifically target AI systems, demonstrating how invisible content can create significant security risks as AI tools become more integrated into everyday browsing experiences.