Turing Award winners have issued a stark warning about AI development practices, highlighting a growing rift between responsible engineering and commercial incentives in the fast-moving artificial intelligence industry. The recognition of these reinforcement learning pioneers comes at a moment when a growing number of industry leaders and researchers, including previous Turing recipients, are calling for more rigorous testing and safeguards before powerful AI systems are released to millions of users.
The big picture: Reinforcement learning pioneers Andrew Barto and Richard Sutton received the prestigious $1 million Turing Award while using their platform to criticize inadequate safety practices in commercial AI development.
- Both scientists condemned the current industry approach of releasing AI systems without thorough testing, with Barto comparing it to “building a bridge and testing it by having people use it.”
- The technique they developed, which trains AI systems to optimize decisions through trial and error, has become what Google’s Jeff Dean calls “a lynchpin of progress in AI,” underpinning breakthroughs such as ChatGPT and AlphaGo.
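The trial-and-error idea at the heart of Barto and Sutton's framework can be illustrated with a minimal tabular Q-learning loop. This is an illustrative sketch only: the tiny 5-state "corridor" environment and all parameter values below are invented for demonstration, not drawn from the award citation or any real system.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only).
# A toy 5-state corridor: the agent starts at state 0 and earns a
# reward of 1.0 by reaching state 4. All parameters are invented.

N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition; reward only at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(200):                      # episodes of trial and error
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:     # explore: try a random action
            action = random.choice(ACTIONS)
        else:                             # exploit: best known action
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the value estimate toward reward + discounted future value
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - q[(state, action)])
        state = nxt

# The greedy policy learned from the table: +1 means "move right"
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After a couple hundred episodes the value table typically favors moving right in every state, even though the agent was never told the goal directly; it discovered the rewarding behavior purely through feedback, which is what makes the approach scale to systems like AlphaGo.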
Why this matters: The criticism from these respected scientists adds significant weight to growing concerns about AI safety coming from within the technical community itself.
- Their warnings align with similar concerns expressed by other AI pioneers and Turing Award winners Yoshua Bengio and Geoffrey Hinton, creating a pattern of the field’s most decorated researchers speaking out about development practices.
- These warnings come as companies like OpenAI are shifting toward more commercial models despite previously acknowledging extinction-level risks from advanced AI.
What they’re saying: “Releasing software to millions of people without safeguards is not good engineering practice,” Barto told The Financial Times.
- Barto added that AI companies are not following the sound engineering practices that “evolved to try to mitigate the negative consequences of technology.”
- He specifically called out AI companies for being “motivated by business incentives” rather than prioritizing research advancement and safety.
Behind the numbers: The $1 million Turing Award, often described as computing’s equivalent to the Nobel Prize, represents the highest honor in computer science, giving substantial credibility to the recipients’ warnings.
The broader context: The scientists’ warnings follow an industry pattern of prioritizing rapid deployment over safety assurances.
- In 2023, a group of leading AI researchers, engineers, and executives including OpenAI CEO Sam Altman signed a statement warning that “mitigating the risk of extinction from AI should be a global priority.”
- Despite such warnings, OpenAI announced plans in December to transform into a for-profit company, after briefly removing Altman in 2023 partly for “over commercializing advances before understanding the consequences.”