Major AI companies like OpenAI and Google have significantly scaled back their safety testing protocols even as they develop increasingly powerful models, raising serious concerns about the industry’s commitment to safety. This retreat from rigorous safety evaluation comes as competitive pressures in the AI industry intensify, with companies seemingly prioritizing market advantage over comprehensive risk assessment, a worrying development as these systems become more capable and potentially consequential.
The big picture: OpenAI has dramatically shortened its safety testing timeframe from months to days before releasing new models, while simultaneously dropping assessments for mass manipulation and disinformation risks.
Industry pattern: OpenAI’s safety shortcuts are not an isolated case; other major AI developers, including Google, appear to be cutting back their evaluations as well.
Why it’s happening: Fortune journalist Jeremy Kahn attributes this industry-wide shift to intense market competition, with companies viewing thorough safety testing as a competitive disadvantage.
What else the newsletter covers: several other initiatives, including a “Worldbuilding Hopeful Futures with AI” course, a Digital Media Accelerator program now accepting applications, and various new AI publications.