AI as next step in our evolution, or challenge for humanity to resist?

The ethical debate over superintelligent AI has intensified as leading entrepreneurs race to develop AGI on ever-shorter timelines. Zoltan Istvan, once an AGI proponent, now questions whether humans should keep pursuing machine intelligence that could surpass our own cognitive abilities. His shift in perspective highlights the growing tension between technological progress and existential risk as AGI development accelerates beyond initial expectations.

The big picture: A public debate between transhumanist Zoltan Istvan and AGI pioneer Ben Goertzel revealed fundamental ethical questions about humanity’s relationship with artificial superintelligence.

  • Istvan challenged Goertzel with a provocative question: “Do you think humans have a moral obligation to try to bring AI superintelligence into the world because of evolution?”
  • The debate captures the growing divide between AI optimists who view superintelligence as humanity’s evolutionary destiny and those who now fear we’re creating an existential threat.

Why this matters: Industry leaders like Sam Altman and Elon Musk are actively pursuing superintelligent AI systems that could potentially reach god-like capabilities within years, not decades.

  • Goertzel told Newsweek he believes AI could surpass human intelligence within just 24-36 months.
  • This accelerated timeline leaves little room for society to address profound safety and control questions that remain unresolved.

Key concerns: Istvan argues that superintelligent AI fundamentally differs from other dangerous technologies like nuclear weapons in crucial ways.

  • Unlike nuclear weapons, superintelligent systems may be impossible to control once they surpass human intelligence.
  • Inviting entities smarter than ourselves into our world carries risks similar to inviting an advanced alien species to Earth: we cannot predict their intentions toward humanity.

The counterargument: Some AI developers and futurists view superintelligence as an evolutionary imperative rather than an existential threat.

  • They position AI as an extension of human evolution and therefore a natural progression.
  • Some even suggest that future superintelligent systems might punish those who impeded their development.

The author’s position: Istvan, a transhumanist who has advocated for using technology to overcome biological death, now believes halting superintelligence development should be humanity’s top priority.

  • His Oxford education in ethics has led him to prioritize human survival over technological progress.
  • He calls for governments worldwide to intervene in the development of increasingly powerful AI systems.

Reading between the lines: The article reflects a growing schism in the transhumanist and futurist communities as AI development accelerates.

  • Many who once enthusiastically supported AI advancement now question whether the risks outweigh potential benefits.
  • The debate highlights how quickly generative AI has transformed theoretical concerns into immediate practical considerations.
