
Microsoft researchers have discovered that artificial intelligence can design toxins that evade biosecurity screening systems used to prevent the misuse of DNA sequences. The team, led by Microsoft’s chief scientist Eric Horvitz, successfully used generative AI to bypass protections designed to stop people from purchasing genetic sequences that could create deadly toxins or pathogens, revealing what they call a “zero day” vulnerability in current biosafety measures.

What you should know: Microsoft conducted a “red-teaming” exercise to test whether AI could help bioterrorists manufacture harmful proteins by circumventing existing safeguards.

  • The researchers used several generative protein models, including Microsoft’s own EvoDiff, to redesign toxins in ways that slip past screening software while maintaining their deadly function.
  • Commercial DNA vendors use screening software to compare incoming orders with known toxins or pathogens, but the AI-designed molecules could evade detection.
  • The exercise was entirely digital: no toxic proteins were actually produced, a precaution taken to avoid any perception of bioweapons development.
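Screening of the kind described above is typically a similarity search: an incoming order is compared against a database of sequences of concern. As a rough, illustrative sketch only (not the actual vendor software; the denylist, function names, and threshold are all made up for this example), a simple k-mer overlap check might look like this, which also shows why a redesigned sequence that preserves function while sharing few exact substrings can slip under the threshold:

```python
# Illustrative sketch of similarity-based DNA order screening.
# Real screening tools are far more sophisticated (translated protein
# search, curated databases of sequences of concern); everything here
# is a hypothetical toy example.

def kmers(seq: str, k: int = 12) -> set[str]:
    """All overlapping k-length substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, denylist: list[str],
                 k: int = 12, threshold: float = 0.3) -> bool:
    """Flag (return True) if the order shares enough k-mers with any
    denylisted sequence; a rewritten variant with few exact matches
    falls below the threshold and passes unflagged."""
    order_kmers = kmers(order_seq, k)
    if not order_kmers:
        return False
    for bad in denylist:
        bad_kmers = kmers(bad, k)
        if bad_kmers:
            overlap = len(order_kmers & bad_kmers) / len(bad_kmers)
            if overlap >= threshold:
                return True
    return False

# Toy denylist with one sequence of concern
denylist = ["ATGGCTAGCTAGGCTTACGATCGATCGTAGCTAG"]

print(screen_order("ATGGCTAGCTAGGCTTACGATCGATCGTAGCTAG", denylist))  # True: exact match is flagged
print(screen_order("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", denylist))  # False: no shared k-mers
```

The weakness the researchers exploited is visible even in this toy: the check keys on literal sequence similarity, so an AI model that proposes a functionally equivalent but textually divergent design defeats it.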

The security implications: Current biosecurity screening systems have significant vulnerabilities that AI can exploit, creating an ongoing arms race between attackers and defenders.

  • Microsoft alerted the US government and software makers before publishing, leading to patches that remain incomplete.
  • “The patch is incomplete, and the state of the art is changing. But this isn’t a one-and-done thing. It’s the start of even more testing,” says Adam Clore, director of technology R&D at Integrated DNA Technologies, a large DNA manufacturer.
  • Some AI-designed molecules can still escape detection even after the patches.

Why this matters: The research highlights urgent gaps in biosecurity as AI becomes more sophisticated and accessible.

  • Generative AI algorithms that propose new protein shapes are already fueling drug discovery at well-funded startups like Generate Biomedicines and Isomorphic Labs, a Google spinout.
  • These same systems are “dual use”—capable of generating both beneficial molecules and harmful ones from the same training data.
  • “This finding, combined with rapid advances in AI-enabled biological modeling, demonstrates the clear and urgent need for enhanced nucleic acid synthesis screening procedures,” says Dean Ball from the Foundation for American Innovation, a San Francisco think tank.

What experts are debating: Researchers disagree on whether DNA synthesis screening is the most effective defense against bad actors.

  • Michael Cohen from UC Berkeley believes there will always be ways to disguise sequences and argues for building biosecurity directly into AI systems.
  • Clore maintains that monitoring gene synthesis remains practical since DNA manufacture in the US is dominated by a few companies working closely with the government.
  • “If you have the resources to try to trick us into making a DNA sequence, you can probably train a large language model,” Clore notes, pointing to how widely accessible AI technology has become.

Government response: President Trump called for an overhaul of DNA screening systems in a May executive order on biological research safety, though new recommendations haven’t been released yet.
