Wozniak and 1,000+ experts call for AI superintelligence ban

Apple co-founder Steve Wozniak has joined more than 1,000 public figures in signing a statement calling for an interim ban on the development of AI superintelligence. The coalition includes Nobel laureates, AI pioneers, and other tech luminaries who argue that rushing toward superintelligence poses risks ranging from human economic obsolescence to potential extinction.

What they’re saying: The statement warns that while AI tools may bring benefits, the race to build superintelligence raises serious concerns about human welfare and safety.
• “Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks,” the statement reads.
• The signatories “call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

Who else is involved: The statement brings together an influential coalition of AI researchers, Nobel laureates, and former government officials.
• AI pioneer Geoffrey Hinton, known as “the godfather of deep learning,” signed the statement alongside foundational AI researcher Yoshua Bengio.
• UC Berkeley computer science professor and AI safety expert Stuart Russell also added his name to the list.
• Nobel laureate physicists Frank Wilczek and John C. Mather joined the effort, along with Nobel laureates Beatrice Fihn and Daron Acemoğlu.
• Former U.S. National Security Adviser Susan Rice represents the policy perspective among the signatories.

The big picture: This coordinated statement reflects growing concerns within the AI community about the pace of superintelligence development and its potential consequences.
• The signatories have previously compared artificial general intelligence—AI that can match or exceed human cognitive abilities across all domains—to existential threats like pandemics and nuclear war in terms of its potential impact on human survival.
• The statement aims to create “common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.”
• The coalition spans multiple disciplines, from computer science and physics to policy and economics, suggesting broad-based concern about current AI development trajectories.

Why this matters: The statement represents a significant public intervention by leading figures in AI research and policy, potentially influencing regulatory discussions and corporate AI strategies as companies race to develop increasingly powerful AI systems.
