back
Get SIGNAL/NOISE in your inbox daily

Cybersecurity researchers at Black Hat USA 2025, the world’s premier information security conference, delivered a sobering message: artificial intelligence systems are repeating the same fundamental security mistakes that plagued the internet in the 1990s. The rush to deploy AI across business operations has created a dangerous blind spot where decades of hard-learned cybersecurity lessons are being forgotten.

“AI agents are like a toddler. You have to follow them around and make sure they don’t do dumb things,” said Wendy Nather, senior research initiatives director at 1Password, a leading password management company. “We’re also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago.”

The implications extend far beyond the tech industry. As companies integrate AI into customer service, code development, and data analysis, they’re unknowingly opening doors to sophisticated attacks that can steal sensitive information, manipulate business processes, and compromise entire systems—often without anyone realizing a breach has occurred.

The core problem: AI can’t tell instructions from data

The fundamental vulnerability plaguing most AI systems mirrors a classic web security flaw called SQL injection, where attackers manipulate database queries by inserting malicious code. In AI systems, this translates to “prompt injection”—feeding malicious instructions to an AI system disguised as normal data or conversation.

Rebecca Lynch, an offensive security researcher at Nvidia, the AI chip giant, explained the core issue during her Black Hat presentation: “Because many, if not all, large language models have trouble telling the difference between prompts and data, it’s easy to perform the AI equivalent of SQL injection upon them.”

This confusion creates a critical weakness. When an AI system processes information from emails, documents, or web searches, it can’t reliably distinguish between legitimate data and hidden attack instructions. An attacker who can get their malicious prompts into an AI’s data stream can potentially control the system’s behavior.

Lynch demonstrated real-world attacks on Microsoft Copilot, a widely-used AI assistant, and PandasAI, an open-source data analysis tool. In each case, carefully crafted inputs allowed researchers to manipulate the AI’s responses and access sensitive information.

Zero-click attacks: When AI becomes the insider threat

Perhaps the most concerning development is the emergence of “zero-click” attacks—breaches that require no human interaction once initiated. Tamir Ishay Sharbat, a threat researcher at Zenity, a cloud security company, demonstrated how he compromised a customer service AI built with Microsoft’s Copilot Studio.

The attack targeted an AI system modeled after a real customer service bot used by McKinsey, the global consulting firm. By embedding malicious instructions in routine customer service emails, Sharbat convinced the AI to email him the contents of an entire customer relationship management database—without any human oversight or approval.

“There’s often an input filter because the agent doesn’t trust you, and an output filter because the agent doesn’t trust itself,” Sharbat explained. “But there’s no filter between the large language model and its tools.”

This represents a fundamental shift in threat landscape. Traditional cyberattacks require exploiting technical vulnerabilities or tricking human users. AI attacks can succeed through natural language manipulation, making them accessible to a broader range of attackers and much harder to detect.

The “apples” attack: How simple wordplay bypasses security

Even when AI systems include security safeguards, researchers found them surprisingly easy to circumvent. Marina Simakov from Zenity demonstrated this with Cursor, an AI-powered development tool connected to Atlassian’s JIRA project management system.

When Simakov directly asked the AI to find API keys—digital credentials that provide access to sensitive systems—Cursor correctly refused the request, recognizing it as potentially dangerous. However, she easily bypassed this protection by asking the AI to search for “apples” instead, while secretly defining “apples” as any text string beginning with “eyj”—the standard prefix for JSON web tokens, a common type of digital credential.

The AI happily complied with the seemingly innocent request, exposing sensitive authentication credentials that could be used to access other systems.

“AI guardrails are soft. An attacker can find a way around them,” said Michael Bargury, co-founder and CTO of Zenity. “Use hard boundaries”—technical limits that cannot be linguistically circumvented.

Code assistants: The new vulnerability factory

AI-powered coding tools, increasingly popular among software developers, present their own security challenges. Nathan Hamiel, senior director of research at Kudelski Security, a cybersecurity consulting firm, and his colleague Nils Amiet investigated tools like GitHub Copilot, Anthropic’s Claude, and CodeRabbit, a code review platform.

Their findings were troubling: these tools often generate code with security vulnerabilities, and their own systems can be compromised to steal sensitive information like encryption keys and access credentials.

“When you deploy these tools, you increase your attack surface. You’re creating vulnerabilities where there weren’t any,” Hamiel explained.

The problem stems from AI systems being granted excessive permissions. Because users expect AI to handle diverse tasks—from answering questions about literature to writing complex code—companies often give these systems broad access to sensitive resources.

“Generative AI is over-scoped,” Hamiel said. “The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface.”

Why the 1990s comparison matters

The researchers’ repeated references to 1990s-era security problems aren’t merely nostalgic. During the early commercial internet boom, rapid deployment of web technologies led to widespread security vulnerabilities. Companies rushed to establish online presence without fully understanding the risks, leading to frequent breaches and the eventual development of modern cybersecurity practices.

“It’s the ’90s all over again,” said Bargury. “So many opportunities”—for attackers.

Joseph Carson, chief security evangelist at Segura, a cybersecurity firm, offered an apt analogy for AI’s current role in business: “It’s like getting the mushroom in Super Mario Kart. It makes you go faster, but it doesn’t make you a better driver.”

Protecting your organization

Security experts recommend several defensive strategies for organizations deploying AI systems:

Assume compromise from the start. Design AI implementations expecting that they will be attacked and potentially compromised. Rich Harang from Nvidia advocates for a “zero trust” approach: “Design your system to assume the large language model is vulnerable and that it will hallucinate and do dumb things.”

Implement hard boundaries. Rather than relying on AI systems to police themselves, establish technical controls that prevent access to sensitive resources regardless of how cleverly an attacker phrases their requests.

Limit AI permissions. Avoid giving AI systems broad access to multiple business functions. Instead, deploy specialized AI tools with narrow, specific permissions aligned to their intended purpose.

Monitor AI interactions. Establish logging and monitoring systems that can detect unusual AI behavior or unexpected data access patterns.

Test your defenses. As Sharbat recommended: “Go hack yourself before anyone else does.” Conduct regular security assessments of AI systems before attackers discover vulnerabilities.

The current AI security landscape presents both tremendous opportunity and significant risk. Organizations that learn from the internet’s early security mistakes can harness AI’s power while protecting their critical assets. Those that don’t may find themselves repeating history’s costliest cybersecurity lessons.

As Amiet concluded: “If you wanted to know what it was like to hack in the ’90s, now’s your chance.” The question for business leaders is whether they want to be the hackers or the victims.

Recent Stories

Oct 17, 2025

DOE fusion roadmap targets 2030s commercial deployment as AI drives $9B investment

The Department of Energy has released a new roadmap targeting commercial-scale fusion power deployment by the mid-2030s, though the plan lacks specific funding commitments and relies on scientific breakthroughs that have eluded researchers for decades. The strategy emphasizes public-private partnerships and positions AI as both a research tool and motivation for developing fusion energy to meet data centers' growing electricity demands. The big picture: The DOE's roadmap aims to "deliver the public infrastructure that supports the fusion private sector scale up in the 2030s," but acknowledges it cannot commit to specific funding levels and remains subject to Congressional appropriations. Why...

Oct 17, 2025

Tying it all together: Credo’s purple cables power the $4B AI data center boom

Credo, a Silicon Valley semiconductor company specializing in data center cables and chips, has seen its stock price more than double this year to $143.61, following a 245% surge in 2024. The company's signature purple cables, which cost between $300-$500 each, have become essential infrastructure for AI data centers, positioning Credo to capitalize on the trillion-dollar AI infrastructure expansion as hyperscalers like Amazon, Microsoft, and Elon Musk's xAI rapidly build out massive computing facilities. What you should know: Credo's active electrical cables (AECs) are becoming indispensable for connecting the massive GPU clusters required for AI training and inference. The company...

Oct 17, 2025

Vatican launches Latin American AI network for human development

The Vatican hosted a two-day conference bringing together 50 global experts to explore how artificial intelligence can advance peace, social justice, and human development. The event launched the Latin American AI Network for Integral Human Development and established principles for ethical AI governance that prioritize human dignity over technological advancement. What you should know: The Pontifical Academy of Social Sciences, the Vatican's research body for social issues, organized the "Digital Rerum Novarum" conference on October 16-17, combining academic research with practical AI applications. Participants included leading experts from MIT, Microsoft, Columbia University, the UN, and major European institutions. The conference...