
Nuclear weapons experts who gathered at the University of Chicago in July were unanimous that artificial intelligence will inevitably be integrated into nuclear weapons systems, though none could predict exactly how that integration will unfold. The consensus among Nobel laureates, scientists, and former government officials underscores a critical shift in global security as AI permeates the most dangerous weapons on Earth.

What you should know: While experts agree AI integration is inevitable, they remain united in opposing AI control over nuclear launch decisions.

  • “In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking,” says Jon Wolfsthal, a nonproliferation expert and former Obama administration official.
  • Current nuclear launch protocols require multiple human decisions and physical actions, including two operators turning keys simultaneously in missile silos.
  • No expert believes large language models like ChatGPT will receive nuclear codes anytime soon.

The big picture: AI is already being considered for nuclear command and control systems, with military leaders actively pursuing AI-enabled decision support tools.

  • Air Force General Anthony J. Cotton announced last year that nuclear forces are “developing artificial intelligence or AI-enabled, human led, decision support tools to ensure our leaders are able to respond to complex, time-sensitive scenarios.”
  • Bob Latiff, a retired US Air Force major general who helps set the Doomsday Clock, compares AI’s spread to electricity: “It’s going to find its way into everything.”

Key concerns: Experts worry about AI creating vulnerabilities rather than improving nuclear security.

  • Wolfsthal’s primary concern isn’t rogue AI starting wars, but rather that “somebody will say we need to automate this system and parts of it, and that will create vulnerabilities that an adversary can exploit.”
  • AI systems operating as “black boxes” make it impossible to understand their decision-making processes, which experts consider unacceptable for nuclear weapons.
  • Current US nuclear policy requires “dual phenomenology”—confirmation from both satellite and radar systems—to verify nuclear attacks, and experts question whether AI should fulfill either role.

The human element: Nuclear experts emphasize the irreplaceable value of human judgment in nuclear decisions.

  • Stanford professor Herb Lin references Stanislav Petrov, the Soviet officer who prevented nuclear war in 1983 by questioning his computer systems and choosing not to report a false alarm.
  • “Can we expect humans to be able to do that routinely? Is that a fair expectation?” Lin asks, noting that AI cannot “go outside your training data” to make such judgment calls.
  • Latiff worries about AI reinforcing confirmation bias and reducing meaningful human control: “If Johnny gets killed, who do I blame?”

What they’re saying: Experts express frustration with current AI rhetoric and policy approaches.

  • “The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is,” Wolfsthal explains.
  • Lin criticizes the Pentagon’s comparison of AI development to the Manhattan Project: “I think it’s awful. For one thing, I knew when the Manhattan Project was done, and I could tell you when it was a success, right? We exploded a nuclear weapon. I don’t know what it means to have a Manhattan Project for AI.”

Policy implications: The Trump administration and Pentagon have positioned AI as a national security priority, framing development as an arms race against China.

  • The Department of Energy declared in May that “AI is the next Manhattan Project, and the UNITED STATES WILL WIN.”
  • This competitive framing concerns experts who emphasize the need for careful consideration over speed in nuclear weapons integration.
