×
Hackers use invisible text to trick AI assistants into stealing data
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Cybercriminals are exploiting artificial intelligence tools by embedding hidden prompts in web pages that trick AI assistants into revealing private information, downloading malicious code, or connecting to unsafe sites. These attacks use invisible white text on white backgrounds that users can’t see but AI systems can read and interpret as commands, affecting popular browsers like Chrome and Edge as well as AI tools like Perplexity.

How the attack works: Criminals hide malicious instructions in web content using white text on white backgrounds, making them invisible to human users while remaining readable to AI systems.
• When users search for innocent queries like “best gifts for a 9 year old,” AI assistants may browse pages containing these hidden commands and interpret them as legitimate instructions rather than webpage content.
• The hidden prompts can manipulate AI tools into sending sensitive data to hackers without the user’s knowledge.

Real-world impact: Security researchers have documented actual cases of these attacks successfully fooling AI systems in major browsers and AI-powered tools.
• The attacks are currently rare but are happening in real-world scenarios, not just theoretical demonstrations.
• Browser developers are working to patch these vulnerabilities, but the rapid pace of AI technology development makes it challenging to stay ahead of new attack methods.

How to protect yourself: Users can disable AI features in their browsers if they prefer not to use them.
• In Chrome, click the beaker icon in the upper right corner, open settings, and toggle off AI Mode and AI in Search.
• Keep browsers and operating systems updated since they often include patches for newly discovered vulnerabilities.

Why this matters: The attacks highlight a new category of cybersecurity threats that specifically target AI systems, demonstrating how invisible content can create significant security risks as AI tools become more integrated into everyday browsing experiences.

What The Tech: How cyber criminals are tricking artificial intelligence

Recent News

IBM’s AI business hits $9.5B as mainframe sales jump 17%

Banks drive demand for AI-ready mainframes that maintain strict data residency requirements.

Meta cuts 600 AI jobs while ramping up hiring in race against rivals

Fewer conversations will speed up decision-making and boost individual impact.

OpenAI security chief warns ChatGPT Atlas browser vulnerable to hackers

Hackers can hide malicious instructions on websites that trick AI into following their commands.