Straining to keep up? AI safety teams lag behind rapid tech advancements

Major AI companies such as OpenAI and Google have significantly scaled back their safety testing protocols even as they develop increasingly powerful models, raising serious concerns about the industry’s commitment to safety. The retreat from rigorous safety evaluation comes as competitive pressure in the AI industry intensifies, with companies apparently prioritizing market advantage over comprehensive risk assessment just as these systems become more capable and more consequential.

The big picture: OpenAI has dramatically shortened its safety testing window from months to days before releasing new models, while also dropping assessments of mass manipulation and disinformation risks.

  • The Financial Times reports that testers of OpenAI’s o3 model were given only days to evaluate systems that previously would have undergone months of safety testing.
  • One tester told the Financial Times: “We had more thorough safety testing when [the technology] was less important.”

Industry pattern: OpenAI’s safety shortcuts appear to be part of a broader industry trend, with other major AI developers following similar paths.

  • Neither Google’s new Gemini 2.5 Pro nor Meta’s new Llama 4 models were released with comprehensive safety details in their technical reports and evaluations.
  • These developments represent a significant regression in safety protocols despite the increasing capabilities of AI systems.

Why it’s happening: Fortune journalist Jeremy Kahn attributes this industry-wide shift to intense market competition, with companies viewing thorough safety testing as a competitive disadvantage.

  • “The reason… is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market,” Kahn wrote.

What else they’re covering: The Future of Life Institute newsletter also highlights several other initiatives, including a “Worldbuilding Hopeful Futures with AI” course, an open call for applications to its Digital Media Accelerator program, and various new AI publications.

Future of Life Institute Newsletter: Where are the safety teams?
