Biosecurity concerns mount as AI outperforms virus experts

AI models now outperform PhD-level virologists in wet lab problem-solving, according to a new study shared exclusively with TIME. The development is a double-edged sword for science and security: these systems could accelerate medical breakthroughs and pandemic preparedness, but they could also lower the barrier to bioweapon creation by providing expert-level guidance to individuals with malicious intent, regardless of their scientific background.

The big picture: AI models significantly outperformed human experts in a rigorous virology problem-solving test designed to measure practical lab troubleshooting abilities.

  • OpenAI's o3 model achieved 43.8% accuracy and Google's Gemini 2.5 Pro scored 37.6% on the test, compared to human PhD-level virologists, who averaged just 22.1% in their declared areas of expertise.
  • This marks a concerning milestone as non-experts now have unprecedented access to AI systems that can provide step-by-step guidance for complex virology procedures.

Why this matters: For the first time in history, virtually anyone has access to non-judgmental AI virology expertise that could guide them through creating bioweapons.

  • The technology could accelerate legitimate medical and vaccine development while simultaneously increasing bioterrorism risks.

The researchers’ approach: The study was conducted by a multidisciplinary team from the Center for AI Safety, MIT’s Media Lab, Brazilian university UFABC, and pandemic prevention nonprofit SecureBio.

  • The researchers consulted virologists to create an extremely difficult practical assessment that measured the ability to troubleshoot complex laboratory protocols.
  • The test focused on real-world virology knowledge rather than theoretical understanding.

Voices of concern: Seth Donoughe, a research scientist at SecureBio and study co-author, expressed alarm about the dual-use implications of these AI capabilities.

  • Experts including Dan Hendrycks, director of the Center for AI Safety, and Tom Inglesby, director of the Johns Hopkins Center for Health Security, are urging AI companies to implement robust safeguards before such capabilities spread further.

Proposed safeguards: Security experts recommend multiple measures to mitigate potential misuse while preserving beneficial applications.

  • Suggested protections include gated access to advanced models, input and output filtering systems, and more rigorous testing before new models are released.
  • The challenge lies in balancing scientific advancement with responsible AI development in sensitive domains.
Source: TIME, "Exclusive: AI Bests Virus Experts, Raising Biohazard Fears"
