AI models now outperform PhD-level virologists at wet lab problem-solving, according to a new study shared exclusively with TIME. The finding is a double-edged sword for science and security: the same systems that could accelerate medical breakthroughs and pandemic preparedness could also democratize bioweapon creation by providing expert-level guidance to individuals with malicious intent, regardless of their scientific background.
The big picture: AI models significantly outperformed human experts on a rigorous test of practical virology lab troubleshooting.
Why this matters: For the first time, virtually anyone has access to a non-judgmental AI virology expert that could walk them through the complex lab procedures involved in creating bioweapons.
Who’s behind it: The study was conducted by a multidisciplinary team from the Center for AI Safety, MIT’s Media Lab, the Brazilian university UFABC, and the pandemic-prevention nonprofit SecureBio.
Voices of concern: Seth Donoughe, a research scientist at SecureBio and study co-author, expressed alarm about the dual-use implications of these AI capabilities.
Proposed safeguards: Security experts recommend measures such as gating models’ advanced virology capabilities so that only vetted researchers can access them, aiming to curb misuse while preserving beneficial applications.