Biosecurity concerns mount as AI outperforms virus experts

AI models now outperform PhD-level virologists at wet-lab problem-solving, according to a new study shared exclusively with TIME. The finding is a double-edged sword for science and security: these systems could accelerate medical breakthroughs and pandemic preparedness, but they could also lower the barrier to bioweapon creation by offering expert-level guidance to individuals with malicious intent, regardless of their scientific background.

The big picture: AI models significantly outperformed human experts in a rigorous virology problem-solving test designed to measure practical lab troubleshooting abilities.

  • OpenAI's o3 model achieved 43.8% accuracy and Google's Gemini 2.5 Pro scored 37.6% on the test, compared to human PhD-level virologists, who averaged just 22.1% in their declared areas of expertise.
  • This marks a concerning milestone as non-experts now have unprecedented access to AI systems that can provide step-by-step guidance for complex virology procedures.

Why this matters: For the first time in history, virtually anyone has access to non-judgmental AI virology expertise that could guide them through creating bioweapons.

  • The technology could accelerate legitimate medical and vaccine development while simultaneously increasing bioterrorism risks.

The researchers’ approach: The study was conducted by a multidisciplinary team from the Center for AI Safety, MIT’s Media Lab, Brazilian university UFABC, and pandemic prevention nonprofit SecureBio.

  • The researchers consulted virologists to create an extremely difficult practical assessment that measured the ability to troubleshoot complex laboratory protocols.
  • The test focused on real-world virology knowledge rather than theoretical understanding.

Voices of concern: Seth Donoughe, a research scientist at SecureBio and study co-author, expressed alarm about the dual-use implications of these AI capabilities.

  • Experts like Dan Hendrycks and Tom Inglesby are urging AI companies to implement robust safeguards before these models become widely available.

Proposed safeguards: Security experts recommend multiple measures to mitigate potential misuse while preserving beneficial applications.

  • Suggested protections include gated access to advanced models, input and output filtering systems (sketched below), and more rigorous testing before new models are released.
  • The challenge lies in balancing scientific advancement with responsible AI development in sensitive domains.
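
For readers unfamiliar with how input and output filtering works in practice, the idea is to screen both the user's prompt and the model's reply before either crosses the API boundary. The sketch below is a minimal illustration of that pattern only; the `call_model` function is a hypothetical placeholder, the keyword patterns are toy examples, and real deployments rely on trained classifiers rather than regular expressions. Nothing here reflects any vendor's actual safeguard implementation.

```python
import re

# Illustrative screening patterns for dual-use virology content.
# A production system would use trained classifiers, not a keyword list;
# these two regexes exist only to make the control flow concrete.
BLOCKED_PATTERNS = [
    re.compile(r"\b(enhance|increase)\b.*\btransmissibilit", re.IGNORECASE),
    re.compile(r"\bgain[- ]of[- ]function\b.*\bprotocol", re.IGNORECASE),
]


def is_flagged(text: str) -> bool:
    """Return True if the text matches any dual-use screening pattern."""
    return any(pattern.search(text) for pattern in BLOCKED_PATTERNS)


def guarded_completion(prompt: str, call_model) -> str:
    """Run the same screen on the way in and on the way out.

    `call_model` is a placeholder for whatever function sends the prompt
    to the underlying model and returns its reply as a string.
    """
    if is_flagged(prompt):  # input filter: refuse obviously dangerous requests
        return "This request cannot be answered."
    reply = call_model(prompt)
    if is_flagged(reply):  # output filter: withhold harmful generated content
        return "The generated response was withheld by a safety filter."
    return reply
```

The design point is that the screen runs twice: once on the incoming prompt, to refuse clearly dangerous requests up front, and once on the model's output, to catch harmful content that a benign-looking prompt still managed to elicit.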
Source: TIME, "Exclusive: AI Bests Virus Experts, Raising Biohazard Fears"
