Turing Award winners have issued a stark warning about AI development practices, highlighting a growing rift between responsible engineering and commercial incentives in the fast-moving artificial intelligence industry. Their recognition comes at a critical moment, as a growing number of industry leaders and researchers, including previous Turing recipients, voice concerns about AI safety and call for more rigorous testing and safeguards before powerful AI systems are released to millions of users.

The big picture: Reinforcement learning pioneers Andrew Barto and Richard Sutton received the prestigious $1 million Turing Award while using their platform to criticize inadequate safety practices in commercial AI development.

  • Both scientists condemned the current industry approach of releasing AI systems without thorough testing, with Barto comparing it to “building a bridge and testing it by having people use it.”
  • The technique they developed trains AI systems to make better decisions through trial and error, and it has become what Google’s Jeff Dean calls “a lynchpin of progress in AI,” underpinning breakthrough systems like ChatGPT and AlphaGo (a minimal illustration of the approach follows this list).
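
For readers unfamiliar with the technique, the sketch below shows tabular Q-learning, one of the best-known reinforcement learning algorithms associated with Barto and Sutton’s work. The toy “chain” environment, its reward scheme, and all parameter values are assumptions chosen only to keep the example self-contained and runnable; they are not drawn from the award citation or any production system.

import random

# A minimal, illustrative sketch of tabular Q-learning, the trial-and-error
# learning approach Barto and Sutton helped pioneer. The toy "chain" world,
# the reward scheme, and all parameter values below are assumptions made to
# keep the example self-contained; they describe no real product or system.

N_STATES = 5        # states 0..4; state 4 is the goal and ends an episode
ACTIONS = [0, 1]    # 0 = step left, 1 = step right
ALPHA = 0.1         # learning rate: how far each estimate moves per update
GAMMA = 0.9         # discount factor: how much future reward is worth now
EPSILON = 0.1       # exploration rate: how often a random action is tried

# Q-table of estimated long-run value for every (state, action) pair.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; reward 1.0 is given only on reaching the goal state."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reached_goal = next_state == N_STATES - 1
    return next_state, (1.0 if reached_goal else 0.0), reached_goal

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy choice: usually exploit the best-known action,
        # occasionally explore; ties are broken at random.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            best_value = max(Q[state])
            action = random.choice([a for a in ACTIONS if Q[state][a] == best_value])
        next_state, reward, done = step(state, action)
        # Core Q-learning update: nudge the estimate toward the observed
        # reward plus the discounted value of the best next action.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Learned Q-values per state:", Q)

After training on this assumed toy task, the learned values favor the “step right” action in every non-terminal state, which captures the trial-and-error learning the award recognizes.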

Why this matters: The criticism from these respected scientists adds significant weight to growing concerns about AI safety coming from within the technical community itself.

  • Their warnings align with similar concerns expressed by other AI pioneers and Turing Award winners Yoshua Bengio and Geoffrey Hinton, creating a pattern of the field’s most decorated researchers speaking out about development practices.
  • These warnings come as companies like OpenAI are shifting toward more commercial models despite previously acknowledging extinction-level risks from advanced AI.

What they’re saying: “Releasing software to millions of people without safeguards is not good engineering practice,” Barto told the Financial Times.

  • Barto further argued that AI companies are not following the engineering practices that “evolved to try to mitigate the negative consequences of technology.”
  • He specifically called out AI companies for being “motivated by business incentives” rather than prioritizing research advancement and safety.

Behind the numbers: The $1 million Turing Award, often described as computing’s equivalent to the Nobel Prize, represents the highest honor in computer science, giving substantial credibility to the recipients’ warnings.

The broader context: The scientists’ warnings follow an industry pattern of prioritizing rapid deployment over safety assurances.

  • In 2023, a group of leading AI researchers, engineers, and executives including OpenAI CEO Sam Altman signed a statement warning that “mitigating the risk of extinction from AI should be a global priority.”
  • Despite such warnings, OpenAI announced plans in December to restructure as a for-profit company, after its board had briefly removed Altman partly over “over commercializing advances before understanding the consequences.”
