Wozniak and 1,000+ experts call for AI superintelligence ban
Apple co-founder Steve Wozniak has joined more than 1,000 public figures in signing a statement calling for an interim ban on the development of AI superintelligence. The coalition includes Nobel laureates, AI pioneers, and other tech luminaries who argue that rushing toward superintelligence poses risks ranging from human economic obsolescence to potential extinction.

What they’re saying: The statement warns that while AI tools may bring benefits, the race to build superintelligence raises serious concerns about human welfare and safety.
• “Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks,” the statement reads.
• The signatories “call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

Who else is involved: The statement brings together an influential coalition of AI researchers, Nobel laureates, and former government officials.
• AI pioneer Geoffrey Hinton, known as “the godfather of deep learning,” signed the statement alongside foundational AI researcher Yoshua Bengio.
• UC Berkeley computer science professor and AI safety expert Stuart Russell also added his name to the list.
• Nobel laureate physicists Frank Wilczek and John C. Mather joined the effort, along with Nobel laureates Beatrice Fihn and Daron Acemoğlu.
• Former U.S. National Security Adviser Susan Rice represents the policy perspective among the signatories.

The big picture: This coordinated statement reflects growing concerns within the AI community about the pace of superintelligence development and its potential consequences.
• The signatories have previously compared artificial general intelligence—AI that can match or exceed human cognitive abilities across all domains—to existential threats like pandemics and nuclear war in terms of its potential impact on human survival.
• The statement aims to create “common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.”
• The coalition spans multiple disciplines, from computer science and physics to policy and economics, suggesting broad-based concern about current AI development trajectories.

Why this matters: The statement represents a significant public intervention by leading figures in AI research and policy, potentially influencing regulatory discussions and corporate AI strategies as companies race to develop increasingly powerful AI systems.