The intersection of artificial intelligence and cybersecurity reaches a new milestone as Google leverages AI to uncover long-hidden software vulnerabilities.

Major breakthrough: Google has used an AI system to discover 26 software vulnerabilities, including one bug that had remained hidden in OpenSSL for roughly 20 years.

  • The company used a ChatGPT-like AI tool to enhance its fuzz testing, a technique that feeds random data into software to trigger crashes that expose bugs (a minimal fuzz harness is sketched after this list)
  • The AI-powered approach has been applied across 272 software projects and has proved an efficient way to surface vulnerabilities
  • The 20-year-old bug, designated as CVE-2024-9143, was found in OpenSSL, a crucial component for internet encryption and server authentication
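
For context, a fuzz target in practice is a small harness that routes the engine's randomly mutated bytes into one library entry point. The sketch below follows the common libFuzzer convention for C code; parse_record is a hypothetical stand-in for whatever function is under test, not a real API.

  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical function under test; stands in for any library entry
   * point that parses untrusted input. */
  int parse_record(const uint8_t *buf, size_t len);

  /* libFuzzer-style entry point: the fuzzing engine calls this in a loop
   * with mutated inputs and reports crashes, hangs, and sanitizer errors. */
  int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
      parse_record(data, size);
      return 0;  /* non-zero return values are reserved by the engine */
  }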

Technical implementation: Google’s approach combines traditional fuzz testing with large language models to automate and strengthen vulnerability detection.

  • The AI system effectively mimics a developer’s workflow, including writing, testing, and iterating on fuzz targets
  • Large language models (LLMs) generate the fuzz-target code itself, replacing the manual harness writing previously done by human developers (see the sketch after this list)
  • The methodology proved particularly effective at discovering vulnerabilities in code that was previously considered thoroughly tested
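
Fuzz targets are ordinary code, which is what makes them amenable to LLM generation: the model has to pick an API, set up any required state, and route the fuzzer's bytes through it. The sketch below shows the kind of harness an LLM might draft for a hypothetical decoder interface; codec_new, codec_decode, and codec_free are assumed names, not a real library.

  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical API surface the model is asked to exercise. */
  typedef struct codec codec_t;
  codec_t *codec_new(void);
  int codec_decode(codec_t *c, const uint8_t *buf, size_t len);
  void codec_free(codec_t *c);

  /* A generated harness typically creates the object, drives the parsing
   * path with fuzzer-supplied bytes, then cleans up so leaks and
   * use-after-free bugs are also visible to the sanitizers. */
  int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
      codec_t *c = codec_new();
      if (c == NULL)
          return 0;
      codec_decode(c, data, size);
      codec_free(c);
      return 0;
  }

The write-compile-run-refine loop around harnesses like this is the developer workflow the AI system now iterates through on its own.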

Security implications: The discovered OpenSSL vulnerability, while classified as low severity, highlights the potential for AI to uncover hidden security issues in widely-used software.

  • The bug can trigger an “out-of-bounds memory access,” potentially causing program crashes (an illustrative toy example follows this list)
  • Despite its long presence in the code, the vulnerability posed little risk of being exploited to run malicious code
  • The discovery demonstrates that even well-vetted code can harbor unknown vulnerabilities that traditional testing methods might miss
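
To make “out-of-bounds memory access” concrete, the toy function below trusts a length field taken from its own input, so a crafted value makes the final loop read past the end of a fixed-size buffer. It is purely illustrative and is not the actual OpenSSL code behind CVE-2024-9143.

  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative only; not the CVE-2024-9143 code path. */
  static uint8_t xor_fold(const uint8_t *input, size_t input_len) {
      uint8_t window[16] = {0};
      if (input_len == 0)
          return 0;

      /* Copy at most 16 bytes of the input into the local buffer. */
      size_t n = input_len < sizeof(window) ? input_len : sizeof(window);
      for (size_t i = 0; i < n; i++)
          window[i] = input[i];

      /* Bug: the element count comes from the input and is never checked
       * against sizeof(window), so any value above 16 walks off the end. */
      size_t claimed = input[0];
      uint8_t sum = 0;
      for (size_t i = 0; i < claimed; i++)
          sum ^= window[i];  /* out-of-bounds read when claimed > 16 */
      return sum;
  }

Run under a fuzzer with AddressSanitizer, a stray read like this is reported as a crash even though, as with the OpenSSL bug, it is unlikely to be exploitable for code execution on its own.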

Future developments: Google’s Open Source Security Team is advancing its AI-powered security initiatives with ambitious goals for automation and efficiency.

  • The team is developing capabilities for LLMs to automatically suggest patches for discovered bugs (a sketch of what such a fix might look like follows this list)
  • Researchers aim to eliminate the need for human review in the vulnerability detection process
  • A parallel project called “Big Sleep” uses LLMs to simulate human security researcher workflows, recently identifying a previously unknown bug in SQLite
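
To make the patch-suggestion goal concrete: for a defect like the illustrative out-of-bounds read above, the fix is usually a small, local change such as a bound check. The corrected version below is a hypothetical example of the kind of patch an LLM might propose, not output from Google's system.

  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical suggested fix: clamp the attacker-controlled count to
   * the buffer size before it is used as a loop bound. */
  static uint8_t xor_fold_fixed(const uint8_t *input, size_t input_len) {
      uint8_t window[16] = {0};
      if (input_len == 0)
          return 0;

      size_t n = input_len < sizeof(window) ? input_len : sizeof(window);
      for (size_t i = 0; i < n; i++)
          window[i] = input[i];

      size_t claimed = input[0];
      if (claimed > sizeof(window))  /* added bound check */
          claimed = sizeof(window);
      uint8_t sum = 0;
      for (size_t i = 0; i < claimed; i++)
          sum ^= window[i];  /* now stays within the buffer */
      return sum;
  }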

Looking ahead: While these developments mark significant progress in automated security testing, they also raise important questions about the future role of human oversight in cybersecurity and the potential for AI to reshape traditional security testing paradigms.
