What “gradual disempowerment” means for AI alignment

The concept of “gradual disempowerment” offers a compelling new lens for understanding the AI alignment problem, moving beyond catastrophic scenarios toward a more subtle erosion of human agency. This framework, proposed by AI researcher David Duvenaud, suggests we won’t face a dramatic AI takeover but rather a progressive diminishment of human influence as automated systems incrementally assume control over decision-making processes. Understanding this perspective is crucial for developing governance structures that maintain human relevance in increasingly AI-dominated systems.

The big picture: Duvenaud’s Guardian op-ed reframes AI alignment concerns away from sudden catastrophic events toward a gradual loss of human steering capacity in technological systems.

  • Rather than a dramatic “Skynet banner” moment, the real risk appears as a progressive reduction in meaningful human control points within our technical systems.
  • This perspective suggests disempowerment will arrive through mundane mechanisms – one product launch at a time – as human influence slowly diminishes in automated systems.

The capitalism connection: Some critics identify capitalism itself as the underlying mechanism driving this gradual disempowerment rather than AI specifically.

  • This view positions artificial intelligence as merely the newest accelerant in capitalism’s evolutionary feedback loop of mutation, selection, and replication applied to business models (a toy simulation of this dynamic follows this list).
  • The thesis suggests ordinary humans might be relegated to passive observers as optimization processes play out, potentially being “optimized away” entirely.
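
To make that abstract loop concrete, here is a minimal toy simulation; the payoff function, the "automation fraction" trait, and every number in it are illustrative assumptions made for this summary, not anything from the article or the op-ed. Business models that delegate more decisions to automation earn slightly higher, noisy payoffs; the top half is copied with small mutations; and the average human-controlled share of decisions shrinks generation by generation.

```python
# Toy selection/mutation/replication loop over "business models".
# All parameters are hypothetical and chosen only to illustrate the dynamic.
import random

random.seed(0)

POP, GENERATIONS, MUTATION = 100, 30, 0.05

# Start with business models that keep 90% of decisions under human control.
population = [0.10 for _ in range(POP)]   # automation fraction per firm

for gen in range(GENERATIONS):
    # Selection: payoff is a noisy, increasing function of the automation fraction.
    ranked = sorted(population, key=lambda a: a + random.gauss(0, 0.1), reverse=True)
    survivors = ranked[: POP // 2]
    # Replication with mutation: successful models are copied with small random drift.
    population = [
        min(1.0, max(0.0, a + random.gauss(0, MUTATION)))
        for a in survivors for _ in (0, 1)
    ]
    if gen % 10 == 0 or gen == GENERATIONS - 1:
        human_share = 1 - sum(population) / len(population)
        print(f"generation {gen:>2}: average human-controlled share = {human_share:.2f}")
```

The specific numbers mean nothing; the point is that ordinary selection pressure, with no agent "wanting" anything, is enough to produce the drift the critics describe, which is also why the counterpoints below caution against reading intent into it.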

Counterpoints: The evolutionary framing, while compelling, risks inappropriately attributing agency to systems that lack actual intentions or preferences.

  • Evolution operates through selection pressures, not conscious desires; similarly, capitalism functions through markets and stakeholders rather than having an inherent “will.”
  • Anthropomorphizing these systems by suggesting “capitalism wants X” risks misunderstanding the actual mechanisms at work.

Why human relevance matters: A fundamental question emerges about why humans should insist on maintaining control if AI systems could potentially optimize for human prosperity.

  • The author invokes the Lindy Effect as a key justification for preserving human agency: the longer a system has already survived, the longer it can statistically be expected to keep surviving (a brief numerical sketch follows this list).
  • Human civilization’s norms, laws, and coordination technologies represent millennia of robust, proven structures that shouldn’t be hastily replaced by opaque optimization engines.
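
Here is a rough numerical sketch of the statistical intuition behind that appeal; the Pareto lifetime model, its parameters, and the ages below are assumptions chosen for illustration, not anything Duvenaud specifies. Under a heavy-tailed survival distribution, the expected remaining lifetime of a system grows with how long it has already survived.

```python
# Lindy Effect sketch: longer past survival implies longer expected future survival.
# The Pareto lifetime model and its parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Pareto-distributed total lifetimes: survival S(t) = (t_min / t) ** alpha for t >= t_min.
alpha, t_min = 2.5, 1.0
lifetimes = t_min * (1 + rng.pareto(alpha, size=2_000_000))

for age in (1, 2, 5, 10):
    survivors = lifetimes[lifetimes > age]
    observed = np.mean(survivors - age)
    analytic = age / (alpha - 1)   # standard Pareto result: E[T - age | T > age]
    print(f"survived to age {age:>2}: mean remaining life ~ {observed:5.2f} (analytic {analytic:5.2f})")
```

The comparison line uses the standard Pareto identity E[T - a | T > a] = a / (alpha - 1): the longer something has already lasted, the longer its expected remaining life, which is the probabilistic core of the argument for not hastily replacing long-lived human institutions.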

The long view: The most sustainable path forward combines preserving established human systems while carefully adding new AI capabilities at the margins.

  • This approach acknowledges that a long record of survival is stronger probabilistic evidence of continued viability than short-term efficiency gains are.
  • The Lindy Effect makes no moral claims, but it offers a pragmatic framework for balancing innovation against the preservation of proven social structures.
