Panda vs Eagle: existential risk and the need for US-China AI cooperation

A critical perspective on the US-China AI race: The recent resharing of Leopold Aschenbrenner’s essay by Ivanka Trump has reignited discussions about artificial general intelligence (AGI) development and its geopolitical implications, particularly focusing on the potential race between the United States and China.

The argument for an AI arms race: Aschenbrenner’s essay suggests that AGI will be developed soon and advocates for the U.S. to accelerate its efforts to outpace China in this domain.

  • The essay argues that AGI could be a game-changing technology, potentially offering a decisive military advantage comparable to nuclear weapons.
  • Aschenbrenner frames the stakes in stark terms, suggesting that “the torch of liberty will not survive Xi getting AGI first.”

Challenging the arms race narrative: However, there are compelling reasons to question the wisdom of pursuing an aggressive AI arms race strategy.

  • Leading AI researchers and corporate leaders, including Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), and Dario Amodei (Anthropic), have expressed concerns about the existential risks posed by AGI to humanity as a whole.
  • Prominent AI experts like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell have voiced skepticism about our ability to reliably control AGI systems.

The case for global cooperation: A cooperative approach to AI development between the U.S. and China may better serve both national and global interests.

  • If there’s a significant probability that rushing AGI development could pose an existential threat to humanity, including Americans, pursuing global cooperation and establishing limits on AI development might be a more prudent strategy.
  • An AI arms race could destabilize the international system, with rival powers potentially resorting to preemptive military action to prevent perceived technological domination.

China’s perspective on AI safety: Contrary to some assumptions, there are indications that safety-minded voices in China are gaining traction in the AI development discourse.

  • The recent launch of a Chinese AI Safety Network, supported by major universities in Beijing and Shanghai, signals growing attention to AI safety concerns.
  • Prominent Chinese figures, including Turing Award winner Andrew Yao and Xue Lan, president of the state’s expert committee on AI governance, have warned about the potential threats of unchecked AI development.
  • Chinese President Xi Jinping has shown support for AI safety initiatives, as evidenced by his letter to Andrew Yao and the emphasis placed on AI risk at a recent party Central Committee meeting.

Recent progress in US-China AI cooperation: Despite ongoing geopolitical tensions, there have been promising developments in bilateral AI cooperation.

  • The Bletchley Park AI Safety Summit in November 2023 saw representatives from both countries sharing a stage.
  • Presidents Biden and Xi agreed to establish a bilateral channel on AI issues during their San Francisco summit.
  • Both nations participated in the AI Seoul Summit in South Korea in May 2024.

Opportunities for continued engagement: The coming months offer critical opportunities to maintain and strengthen US-China cooperation on AI safety.

  • The November meeting of AI Safety Institutes in San Francisco and the Paris AI Action Summit in February 2025 present platforms for continued dialogue and collaboration.
  • These summits will address safety benchmarks, evaluations, and company obligations, some of which transcend geopolitical divisions.

Balancing competition and cooperation in AI development: The trajectory of AI development and its global impact will likely be shaped by the decisions made in the coming months.

  • While geopolitical tensions persist in areas such as Taiwan, industrial policy, and export controls, issues like AI safety demand a coordinated global response.
  • Engaging China in these discussions and empowering safety-minded voices within Beijing could be crucial in steering the global AI trajectory towards shared risk management rather than a potentially dangerous arms race.

Source: Panda vs. Eagle (Future of Life Institute)
