AI’s utilitarian focus obscures the unique value of human continuity, says critic

The concept of AI successionism presents a profound ethical dilemma that extends beyond technical AI alignment into fundamental questions about human continuity and species preference. Nina Panickssery challenges the utilitarian framework that dominates much of AI ethics discourse, articulating a counterargument grounded not in abstract moral calculus but in an inherent preference for our own kind, a perspective that reframes debates about humanity’s relationship with increasingly capable artificial intelligence.

The big picture: Panickssery rejects “successionism,” the view that humanity should willingly cede its future to more advanced AI beings, arguing instead for the legitimacy of preferring human continuity despite potential utilitarian arguments.

  • Her position doesn’t dispute that artificial beings might someday exceed humans in intelligence or capability, but rather questions whether such superiority creates a moral imperative for humanity’s replacement.
  • This stance directly challenges influential thinking in segments of the AI alignment community that suggest a “sufficiently aligned” superintelligence could justifiably supersede humanity.

Why this matters: This perspective significantly reframes AI alignment discussions by shifting focus from purely utilitarian calculations to questions of species identity, continuity, and biological preference.

  • The debate touches on existential questions about what we value in humanity’s future beyond maximizing abstract “utility.”
  • It highlights a potential disconnect between certain philosophical frameworks in AI safety and deeply held human intuitions about species continuity.

The core argument: Panickssery’s opposition to AI successionism stems from an inherent preference for her own kind rather than from arbitrary moral definitions that privilege biological humans.

  • She draws a parallel to familial preference, noting that most people wouldn’t voluntarily replace their family members with “smarter, kinder, happier people” despite potential utilitarian benefits.
  • This biologically grounded preference represents a fundamental value that successionist philosophies often overlook or dismiss.

Important distinctions: Panickssery differentiates between gradual human evolution and abrupt replacement scenarios.

  • She accepts gradual improvement, in which each successive generation endorses the one that follows, as a natural evolutionary process.
  • This stands in contrast to abrupt replacement, which she illustrates with a hypothetical: “chimps raising llamas for meat, the llamas eventually became really smart and morally good, peacefully sterilized the chimps, and took over the planet.”

Reading between the lines: While accepting that advanced AI might pose relatively low existential risk, Panickssery expresses concern about how alignment discourse has normalized the concept of human replacement.

  • Her position suggests that preserving humanity’s future should be a core value in AI alignment work rather than treating it as negotiable.
  • This represents a meaningful divergence from purely consequentialist frameworks that dominate much of AI safety thinking.
Why I am not a successionist
