Why we need a new ‘Statement on AI Risk’ and what it should accomplish

Government investment in AI safety measures has not kept pace with acknowledged artificial intelligence risks, pointing to a concerning disconnect in policy priorities.

The central proposal: A new “Statement on AI Inconsistency” aims to highlight the disparity between U.S. military spending and AI safety investment, given that expert risk assessments place the two threats at comparable levels.

  • The proposed statement points out that while the U.S. spends $800 billion annually on military defense, it allocates less than $0.1 billion (about $100 million) to AI alignment and safety research; the resulting ratio is worked out after this list
  • This spending disparity exists despite artificial superintelligence (ASI) being considered as significant a threat as traditional military concerns
  • The statement is intended to succeed an earlier “Statement on AI Risk” with more specific policy implications
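For scale, here is the back-of-the-envelope ratio implied by the two figures the statement cites. The $800 billion and $0.1 billion numbers come from the summary above; the calculation itself is only illustrative:

```latex
% Ratio of annual U.S. military spending to annual AI alignment/safety
% spending, using the figures cited in the proposed statement.
\[
\frac{\text{military spending}}{\text{AI safety spending}}
  \approx \frac{\$800\ \text{billion}}{\$0.1\ \text{billion}}
  = 8000 : 1
\]
```

In other words, for every dollar directed toward AI alignment and safety research, roughly eight thousand dollars go toward conventional defense.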

Risk assessment consensus: Multiple expert groups and the public share similar views about AI risks to humanity.

  • Superforecasters estimate a 2.1% chance of an AI catastrophe severe enough to kill 10% of humanity
  • AI experts project a 5-12% probability of such an event
  • Other expert groups and the general public consistently estimate around a 5% risk level
  • These assessments suggest AI poses a comparable threat level to traditional military concerns
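To make the “comparable threat level” claim concrete, here is one illustrative expected-harm calculation. The roughly 5% probability and the 10%-of-humanity threshold come from the estimates above; the expected-value framing and the ~8 billion world-population figure are assumptions added here for illustration, not figures from the proposal itself:

```latex
% Illustrative expected fatalities from an AI catastrophe, assuming a ~5%
% chance of an event that kills 10% of a world population of ~8 billion.
\[
\mathbb{E}[\text{deaths}]
  \approx 0.05 \times \left(0.10 \times 8\ \text{billion}\right)
  = 0.05 \times 800\ \text{million}
  = 40\ \text{million}
\]
```

Even at the superforecasters' lower 2.1% estimate, the same arithmetic yields an expected toll of roughly 17 million, which is the sense in which the risk is argued to be comparable to traditional military threats.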

Strategic rationale: The authors argue this new statement would be harder for governments to dismiss without meaningful action.

  • Unlike the earlier Statement on AI Risk, this call to action could not be satisfied with token government initiatives
  • The explicit comparison to military spending creates a clear benchmark for adequate investment
  • The statement builds on previously established concerns about AI risks, making it an incremental rather than radical position

Implementation challenges: The authors face significant hurdles in gaining institutional support.

  • They seek backing from established organizations like the Future of Life Institute or the Center for AI Safety
  • They acknowledge the need for organizational infrastructure to gather expert signatures

Analyzing the implications: The massive disparity between military and AI safety spending suggests institutional inertia may be preventing appropriate resource allocation to emerging threats. That inertia could leave society vulnerable to new forms of risk that fall outside traditional defense frameworks.

A better “Statement on AI Risk?”
