
Sixty U.K. lawmakers have accused Google DeepMind of violating international AI safety pledges in an open letter organized by activist group PauseAI U.K. The cross-party coalition claims that Google’s March release of Gemini 2.5 Pro without published safety testing details “sets a dangerous precedent” and undermines its commitments to responsible AI development.

What you should know: Google DeepMind did not give the U.K. AI Safety Institute pre-deployment access to Gemini 2.5 Pro, breaking established safety protocols.

  • TIME confirmed for the first time that Google DeepMind did not share the model with the U.K. AI Safety Institute before its March 25 release.
  • The company only provided access to the safety institute after the model was already publicly available.
  • This contradicts international agreements requiring AI companies to allow government safety testing before deployment.

Who’s involved: The letter includes prominent figures from across the political spectrum calling for accountability.

  • Signatories include digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne.
  • The initiative was coordinated by PauseAI U.K., an activist organization focused on AI safety, with members directly contacting their MPs to request signatures.
  • The letter was addressed to Demis Hassabis, CEO of Google DeepMind.

Google’s response: The company claims it did share the model with safety experts, but only after public release.

  • A Google DeepMind spokesperson told TIME that the company shared Gemini 2.5 Pro with the U.K. AI Safety Institute and “a diverse group of external experts.”
  • These external experts included Apollo Research, Dreadnode, and Vaultis.
  • However, Google confirmed this sharing occurred only after the March 25 public release date.

Why this matters: The controversy highlights growing tensions between AI companies and regulators over safety commitments.

  • The lawmakers argue that releasing models without prior safety testing undermines international efforts to ensure responsible AI development.
  • This case could set a precedent for how strictly governments enforce AI safety pledges from major tech companies.
  • The incident raises questions about whether voluntary commitments from AI companies are sufficient to ensure public safety.

The big picture: This dispute reflects broader challenges in AI governance as companies race to deploy increasingly powerful models while governments struggle to establish effective oversight mechanisms.
