Elizabeth Kelly, director of the US AI Safety Institute and a key figure in US artificial intelligence policy, is departing her role amid potential shifts in the organization’s direction under the Trump administration.

Leadership transition details: Elizabeth Kelly’s imminent departure from the US AI Safety Institute marks a significant change in leadership for one of the government’s primary artificial intelligence oversight bodies.

  • Kelly, who has served as the institute’s director and represented US AI policy internationally, will step down by the end of the week
  • The timing of her exit coincides with broader changes in the organization under the current administration
  • Sources familiar with the matter, speaking anonymously, confirmed the leadership change

Institutional implications: The leadership vacancy creates uncertainty for an organization that plays a crucial role in shaping and monitoring artificial intelligence development in the United States.

  • The US AI Safety Institute serves as a primary federal body focused on artificial intelligence safety and oversight
  • Kelly’s departure may signal shifts in how the institute approaches AI governance and safety measures
  • The change in leadership comes at a critical time when AI safety and regulation are increasingly important national priorities

Administrative context: The transition occurs as the Trump administration potentially recalibrates its approach to artificial intelligence oversight and safety measures.

  • Kelly’s role as an international representative of US AI policy adds weight to the significance of this leadership change
  • The departure raises questions about potential new directions in US AI policy and international engagement
  • The timing suggests possible broader changes in how the administration plans to approach AI governance

Looking ahead: The selection of new leadership for the US AI Safety Institute will likely signal the administration’s priorities and approach to AI oversight, with implications for both domestic policy and international cooperation in AI governance.
