AI as the next step in our evolution, or a challenge for humanity to resist?

The ethical debate over superintelligent AI has intensified as leading entrepreneurs race to develop AGI capabilities on ever-shorter timelines. Zoltan Istvan, once an AGI proponent, now questions whether humans should continue pursuing machine intelligence that could surpass our own cognitive abilities. His shift in perspective highlights the growing tension between technological progress and existential risk as AGI development accelerates beyond initial expectations.

The big picture: A public debate between transhumanist Zoltan Istvan and AGI pioneer Ben Goertzel revealed fundamental ethical questions about humanity’s relationship with artificial superintelligence.

  • Istvan challenged Goertzel with a provocative question: “Do you think humans have a moral obligation to try to bring AI superintelligence into the world because of evolution?”
  • The debate captures the growing divide between AI optimists who view superintelligence as humanity’s evolutionary destiny and those who now fear we’re creating an existential threat.

Why this matters: Industry leaders like Sam Altman and Elon Musk are actively pursuing superintelligent AI systems that could reach god-like capabilities within years, not decades.

  • Goertzel told Newsweek he believes AI could surpass human intelligence within just 24-36 months.
  • This accelerated timeline leaves little room for society to address profound safety and control questions that remain unresolved.

Key concerns: Istvan argues that superintelligent AI fundamentally differs from other dangerous technologies like nuclear weapons in crucial ways.

  • Unlike nuclear weapons, superintelligent systems may be impossible to control once they surpass human intelligence.
  • Inviting entities smarter than ourselves into our world carries risks similar to inviting an advanced alien species to Earth: we cannot predict their intentions toward humanity.

The counterargument: Some AI developers and futurists view superintelligence as an evolutionary imperative rather than an existential threat.

  • They position AI as an extension of human evolution and therefore a natural progression.
  • Some even suggest that future superintelligent systems might punish those who impeded their development.

The author’s position: Istvan, a transhumanist who has advocated for using technology to overcome biological death, now believes halting superintelligence development should be humanity’s top priority.

  • His Oxford education in ethics has led him to prioritize human survival over technological progress.
  • He calls for governments worldwide to intervene in the development of increasingly powerful AI systems.

Reading between the lines: The article reflects a growing schism in the transhumanist and futurist communities as AI development accelerates.

  • Many who once enthusiastically supported AI advancement now question whether the risks outweigh potential benefits.
  • The debate highlights how quickly generative AI has transformed theoretical concerns into immediate practical considerations.

Source: Do We Have a Moral Obligation To AI Because of Evolution? (Newsweek)
