Auto-animosity: Former Facebook CISO warns AI will turn cybersecurity into machine-vs-machine combat

Former Facebook CISO Alex Stamos warns that AI will fundamentally transform the cybersecurity landscape, with machines soon fighting automated battles while humans supervise. Speaking at the HumanX conference in Las Vegas, he emphasized that 95% of AI system vulnerabilities haven't even been discovered yet, pointing to a future where financially motivated attackers increasingly leverage AI to create sophisticated and previously impossible threats.

The big picture: Security operations are shifting toward AI-automated monitoring and analysis, with human decisions potentially being removed from the defensive loop entirely as attackers adopt similar automation.

  • Stamos identifies three distinct AI security issues often mistakenly conflated: traditional security breaches, safety failures causing human harm, and alignment failures where systems independently malfunction.
  • North Korea's Lazarus Group, which recently stole $1.4 billion in cryptocurrency, exemplifies the financially motivated attackers likely to adopt AI most aggressively.
  • Unlike traditional state-sponsored hackers who must avoid detection, ransomware groups can target thousands of systems with less concern about stealth, making AI automation particularly attractive.

What they’re saying: “When it comes to AI and cyber, we’re not even close to 1% done here,” Stamos warned the conference audience.

  • “The future for cyber is human beings supervising machine-to-machine combat,” he predicted, describing how AI is already transforming both defensive and offensive security operations.
  • On AI-enhanced ransomware operations: “They like to use AI for negotiations because it turns out a strung-out Russian 19-year-old neither has great language skills or good negotiation skills, right?”

Why this matters: Commercially available AI tools are already being used to build new malware, undermining the security industry’s assumption that most attackers rely on known malware from black markets.

  • Stamos demonstrated how Microsoft's Copilot could generate malware components: “You can’t ask it, ‘write me a Windows worm,’ but what you can do is you can ask it for all the parts you need for a Windows worm.”
  • This capability allows anyone with basic programming knowledge to create novel threats that evade traditional detection systems.

Between the lines: Despite predicting AI-automated defenses, Stamos paradoxically advises against trusting AI with security-sensitive decisions.

  • He specifically warns against configurations where AI systems bridge high-privilege operations and low-privilege users—creating potential security vulnerabilities.
  • This seemingly contradictory guidance reflects the cybersecurity industry's struggle to balance AI's defensive potential against its inherent risks.

The bottom line: Stamos describes information security as “the only part of computer science that gets worse every year,” advising those concerned about job displacement to consider cybersecurity careers—a field he believes will face increasing challenges rather than AI-driven solutions.

