Philosopher: AI represents an existential risk, just not the kind you think

Artificial intelligence is increasingly being positioned as a potential moral arbiter and decision-maker, raising profound questions about human agency and ethical reasoning that philosopher Shannon Vallor addresses through the lens of existentialist philosophy and practical wisdom.

Core argument: Vallor contends that AI’s existential threat stems not from the technology itself, but from humanity’s misperception of AI as possessing genuine intelligence and moral authority.

  • Rather than being an independent thinking entity, AI functions more as a sophisticated mirror reflecting human inputs and biases
  • The widespread characterization of AI as capable of superior moral judgment represents a dangerous abdication of human responsibility and agency
  • This narrative effectively “gaslights” humans into surrendering their inherent capacity for moral reasoning and ethical decision-making

Philosophical framework: Drawing on existentialist philosophy, particularly José Ortega y Gasset’s concept of “autofabrication,” Vallor emphasizes humanity’s fundamental need to create meaning and shape identity through conscious choice.

  • Humans must continuously engage in the process of self-creation and meaning-making
  • The concept of “practical wisdom” (phronesis) requires active participation and cannot be outsourced to artificial systems
  • Over-reliance on AI for moral guidance can erode humans’ capacity for ethical reasoning and judgment

Critique of AI ethics: Vallor expresses deep skepticism toward efforts to create “moral machines” or ethical AI advisors.

  • Morality should remain a contested domain open to ongoing debate and challenge
  • The concept of universal morality applicable to both humans and machines fundamentally misunderstands the nature of ethical reasoning
  • Human morality is intrinsically tied to our existence as social, vulnerable, and interdependent beings

Transhumanism concerns: The philosophical movement seeking to transcend human limitations through technology faces pointed criticism in Vallor’s analysis.

  • Transhumanist ideology focuses on “freedom from” human constraints without articulating a meaningful vision of “freedom for” specific purposes
  • This approach risks diminishing rather than enhancing human agency and potential
  • The movement fails to recognize the essential role of human limitations in shaping moral understanding

Looking ahead: The growing integration of AI into decision-making processes across society demands careful consideration of how to preserve human agency and moral reasoning capabilities while avoiding the trap of technological deference.

