Biosecurity concerns mount as AI outperforms virus experts

AI models now outperform PhD-level virologists at wet-lab problem-solving, according to a groundbreaking new study shared exclusively with TIME. The development is a double-edged sword for science and security: these systems could accelerate medical breakthroughs and pandemic preparedness, but they could also put expert-level guidance on complex virology within reach of individuals with malicious intent, regardless of their scientific background.

The big picture: AI models significantly outperformed human experts in a rigorous virology problem-solving test designed to measure practical lab troubleshooting abilities.

  • OpenAI’s o3 model achieved 43.8% accuracy while Google’s Gemini 2.5 Pro scored 37.6% on the test, compared to human PhD-level virologists, who averaged just 22.1% in their declared areas of expertise.
  • This marks a concerning milestone as non-experts now have unprecedented access to AI systems that can provide step-by-step guidance for complex virology procedures.

Why this matters: For the first time in history, virtually anyone has access to non-judgmental AI virology expertise that could potentially guide them through creating bioweapons.

  • The technology could accelerate legitimate medical and vaccine development while simultaneously increasing bioterrorism risks.

The researchers’ approach: The study was conducted by a multidisciplinary team from the Center for AI Safety, MIT’s Media Lab, Brazilian university UFABC, and pandemic prevention nonprofit SecureBio.

  • The researchers consulted virologists to create an extremely difficult practical assessment that measured the ability to troubleshoot complex laboratory protocols.
  • The test focused on real-world virology knowledge rather than theoretical understanding.

Voices of concern: Seth Donoughe, a research scientist at SecureBio and study co-author, expressed alarm about the dual-use implications of these AI capabilities.

  • Experts including Dan Hendrycks, director of the Center for AI Safety, and Tom Inglesby, director of the Johns Hopkins Center for Health Security, are urging AI companies to implement robust safeguards before these models become widely available.

Proposed safeguards: Security experts recommend multiple measures to mitigate potential misuse while preserving beneficial applications.

  • Suggested protections include gated access to advanced models, input and output filtering systems, and more rigorous testing before new models are released.
  • The challenge lies in balancing scientific advancement with responsible AI development in sensitive domains.

Source: Exclusive: AI Bests Virus Experts, Raising Biohazard Fears (TIME)
