Google AI uncovers 20-year-old software bug

The intersection of artificial intelligence and cybersecurity reaches a new milestone as Google leverages AI to uncover long-hidden software vulnerabilities.

Major breakthrough: Google has successfully employed an AI system to discover 26 software vulnerabilities, including a notable bug that remained hidden in OpenSSL for approximately 20 years.

  • The company used a large language model to enhance its fuzz testing, a method that feeds random or malformed data into software to expose crashes and other unexpected behavior
  • The AI-powered approach has been applied across 272 software projects, substantially speeding up vulnerability detection
  • The 20-year-old bug, designated as CVE-2024-9143, was found in OpenSSL, a crucial component for internet encryption and server authentication
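In spirit, fuzz testing can be captured by a minimal sketch: generate random byte strings, feed them to the code under test, and record any input that triggers a crash. The toy length-prefixed parser and harness below are purely illustrative, not Google's actual tooling.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: the first byte declares a length, the rest is payload."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        raise IndexError("declared length exceeds available bytes")
    return payload

def fuzz(iterations: int = 10_000, seed: int = 0) -> list:
    """Feed random byte strings to the parser; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            parse_length_prefixed(data)
        except (ValueError, IndexError):
            crashes.append(data)  # a crash-triggering input worth investigating
    return crashes
```

Even this naive loop quickly finds inputs the parser mishandles; production fuzzers add coverage feedback, input mutation, and crash minimization on top of the same basic idea.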

Technical implementation: Google’s innovative approach combines traditional fuzz testing methodologies with advanced language models to automate and enhance the vulnerability detection process.

  • The AI system effectively mimics a developer’s workflow, including writing, testing, and iterating on fuzz targets
  • Large language models (LLMs) generate the fuzz testing code, replacing what was previously a manual task for human developers
  • The methodology proved particularly effective at discovering vulnerabilities in code that was previously considered thoroughly tested

Security implications: The discovered OpenSSL vulnerability, while classified as low severity, highlights the potential for AI to uncover hidden security issues in widely-used software.

  • The bug can trigger an “out-of-bounds memory access,” potentially causing program crashes
  • Despite its long presence in the code, the vulnerability posed minimal risk of being exploited to execute malicious code
  • The discovery demonstrates that even well-vetted code can harbor unknown vulnerabilities that traditional testing methods might miss
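The bug class can be illustrated in miniature. In C, indexing past the end of a buffer silently reads (or writes) adjacent memory, which is what an out-of-bounds access means; Python is memory-safe, so the sketch below makes the missing bounds check explicit. This is an illustration of the bug pattern only, not the actual OpenSSL defect.

```python
def read_record(buf: bytes, offset: int, length: int) -> bytes:
    """Return length bytes starting at offset, with the bounds check that
    an out-of-bounds bug omits. In C, omitting this check means
    buf[offset..offset+length] can read past the allocation and crash
    the program or leak adjacent memory."""
    if offset < 0 or length < 0 or offset + length > len(buf):
        raise ValueError(f"out-of-bounds read: {offset}+{length} > {len(buf)}")
    return buf[offset:offset + length]
```

A fuzzer finds this class of bug by generating inputs whose declared offsets or lengths exceed the actual buffer, exactly the kind of malformed input a human test-writer rarely thinks to try.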

Future developments: Google’s Open Source Security Team is advancing its AI-powered security initiatives with ambitious goals for automation and efficiency.

  • The team is developing capabilities for LLMs to automatically suggest patches for discovered bugs
  • Researchers aim to eliminate the need for human review in the vulnerability detection process
  • A parallel project called “Big Sleep” uses LLMs to simulate human security researcher workflows, recently identifying a previously unknown bug in SQLite

Looking ahead: While these developments mark significant progress in automated security testing, they also raise important questions about the future role of human oversight in cybersecurity and the potential for AI to reshape traditional security testing paradigms.

