In a recent blog post, Arvind Narayanan and Sayash Kapoor argue that forecasts of existential risk from AI are based on speculation and pseudo-quantification rather than sound evidence or methodology.
Key issues with AI existential risk forecasting: The article identifies several reasons why current AI existential risk probability estimates are unreliable and unsuitable for guiding policy:
- Inductive probability estimation is unreliable because there is no suitable reference class: an AI-driven human extinction event would be unprecedented and dissimilar to any past event.
- Deductive probability estimation is unreliable due to the lack of a well-established theory or model for predicting the likelihood of developing superintelligent AI or losing control over such AI.
- Subjective probability estimates vary widely among experts and are essentially guesses dressed up as numbers, lacking a rigorous inductive or deductive basis.
Challenges in assessing forecast skill: The article argues that it is virtually impossible to assess the skill of forecasters when it comes to AI existential risk, due to several factors:
- The lack of a reference class makes it difficult to determine whether a forecaster’s skill in other domains would translate to AI existential risk.
- The low base rate of existential risks and the long time horizons involved make it challenging to evaluate the accuracy of forecasts.
- Scoring rules used to assess forecast skill are insensitive to the overestimation of tail risks, allowing inflated estimates of low-probability events to go essentially unpenalized (a brief numerical illustration follows this list).
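To make the scoring-rule point concrete, here is a minimal sketch using the Brier score on a single binary question that resolves "no," as nearly all tail-risk questions do over short horizons. The forecasters and numbers are hypothetical illustrations, not figures from the article:

```python
# Illustrative sketch (hypothetical figures): how little a standard Brier
# score penalizes overestimating a rare event that does not occur.

def brier(forecast: float, outcome: int) -> float:
    """Brier score for one binary question (lower is better)."""
    return (forecast - outcome) ** 2

# Two hypothetical forecasters on a tail-risk question that resolves "no":
calibrated = brier(0.001, 0)   # forecasts 0.1% -> score 0.000001
inflated   = brier(0.10, 0)    # forecasts 10%  -> score 0.01

print(f"calibrated forecaster: {calibrated:.6f}")
print(f"inflated forecaster:   {inflated:.6f}")
print(f"penalty gap:           {inflated - calibrated:.6f}")
# The gap (~0.01) is tiny relative to the 0-1 range of per-question scores,
# so inflated tail estimates barely dent an aggregate track record.
```

Because the gap between the two scores is so small, an overestimator loses almost nothing on the many questions that resolve "no," while standing to look prescient on the rare one that resolves "yes," which is the asymmetry the authors highlight.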
Potential biases in risk estimates: The authors suggest that there may be systematic biases leading to the overestimation of AI existential risk, including:
- Selection bias among AI researchers and forecasting experts, who may be more inclined to believe in the transformative potential and risks of AI.
- Echo chamber effects within the AI safety community, where high estimates of the probability of AI doom have become a way to signal commitment to the cause.
- Incentives to err on the side of higher estimates when faced with uncertainty, due to the asymmetric penalties of scoring rules.
Pitfalls of utility maximization: The article cautions against using cost-benefit analysis based on existential risk probabilities to guide policy, as it can lead to Pascal’s Wager-like conclusions that justify extreme measures based on highly speculative estimates of low-probability but high-consequence events.
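A quick back-of-the-envelope calculation shows how this failure mode arises; the probability and harm figures below are purely illustrative assumptions, not estimates from the article:

```python
# Illustrative sketch (hypothetical figures): naive expected-value reasoning
# over a speculative tail probability can justify almost any intervention.

p_doom = 1e-6                  # an essentially unfalsifiable guess
harm   = 8e9 * 1e6             # e.g. 8 billion lives times an arbitrary
                               # per-life weight
expected_loss = p_doom * harm  # = 8e9 "units" of expected harm

# Any policy costing less than this passes the cost-benefit test, however
# extreme, because the harm term can be made arbitrarily large while the
# probability remains impossible to refute.
print(f"expected loss: {expected_loss:,.0f}")
```

This is the Pascal's Wager structure the authors warn about: the conclusion is driven by the choice of inputs rather than by evidence.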
Recommendations for policymakers: Rather than relying on unreliable existential risk forecasts, the authors suggest that policymakers should:
- Adopt policies that are compatible with a range of possible estimates of AI risk and are beneficial even if the risk is negligible.
- Focus on forecasting AI milestones and benchmarks that are more clearly defined and measurable, while recognizing the limitations of current benchmarks in predicting real-world impacts.
- Demand clear explanations of the evidence and methodology behind any risk estimates used to inform policy, rather than accepting subjective probabilities at face value.
Concluding thoughts: The article emphasizes the need for an evidence-based approach to AI safety that stays grounded in reality while acknowledging the possibility of unknown risks. The authors argue that policies aimed at restricting AI development are unnecessary and potentially counterproductive, and call for a more nuanced and adaptable approach to managing the societal impacts of AI.