
A critical perspective on the US-China AI race: The recent resharing of Leopold Aschenbrenner’s essay by Ivanka Trump has reignited discussions about artificial general intelligence (AGI) development and its geopolitical implications, particularly focusing on the potential race between the United States and China.

The argument for an AI arms race: Aschenbrenner’s essay suggests that AGI will be developed soon and advocates for the U.S. to accelerate its efforts to outpace China in this domain.

  • The essay argues that AGI could be a game-changing technology, potentially offering a decisive military advantage comparable to nuclear weapons.
  • Aschenbrenner frames the stakes in stark terms, suggesting that “the torch of liberty will not survive Xi getting AGI first.”

Challenging the arms race narrative: However, there are compelling reasons to question the wisdom of pursuing an aggressive AI arms race strategy.

  • Leading AI researchers and corporate leaders, including Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), and Dario Amodei (Anthropic), have expressed concerns about the existential risks posed by AGI to humanity as a whole.
  • Prominent AI experts like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell have voiced skepticism about our ability to reliably control AGI systems.

The case for global cooperation: A cooperative approach to AI development between the U.S. and China may better serve both national and global interests.

  • If there’s a significant probability that rushing AGI development could pose an existential threat to humanity, including Americans, pursuing global cooperation and establishing limits on AI development might be a more prudent strategy.
  • An AI arms race could destabilize the international system, with rival powers potentially resorting to preemptive military action to prevent perceived technological domination.

China’s perspective on AI safety: Contrary to some assumptions, there are indications that safety-minded voices in China are gaining traction in the AI development discourse.

  • The recent launch of a Chinese AI Safety Network, supported by major universities in Beijing and Shanghai, signals growing attention to AI safety concerns.
  • Prominent Chinese figures, including Turing Award winner Andrew Yao and Xue Lan, president of the state’s expert committee on AI governance, have warned about the potential threats of unchecked AI development.
  • Chinese President Xi Jinping has shown support for AI safety initiatives, as evidenced by his letter to Andrew Yao and the emphasis placed on AI risk at a recent party Central Committee meeting.

Recent progress in US-China AI cooperation: Despite ongoing geopolitical tensions, there have been promising developments in bilateral AI cooperation.

  • The Bletchley Park AI Safety Summit in November 2023 saw representatives from both countries sharing a stage.
  • Presidents Biden and Xi agreed to establish a bilateral channel on AI issues during their San Francisco summit.
  • Both nations participated in the AI Safety conference in South Korea in May 2024.

Opportunities for continued engagement: The coming months offer critical opportunities to maintain and strengthen US-China cooperation on AI safety.

  • The November San Francisco meeting between AI Safety Institutes and the Paris AI Action Summit in February 2025 present platforms for continued dialogue and collaboration.
  • These summits will address safety benchmarks, evaluations, and obligations for AI companies, some of which transcend geopolitical divisions.

Balancing competition and cooperation in AI development: The trajectory of AI development and its global impact will likely be shaped by the decisions made in the coming months.

  • While geopolitical tensions persist in areas such as Taiwan, industrial policy, and export controls, issues like AI safety demand a coordinated global response.
  • Engaging China in these discussions and empowering safety-minded voices within Beijing could be crucial in steering the global AI trajectory towards shared risk management rather than a potentially dangerous arms race.
