
AI no longer needs us, Schmidt claims

In a starkly candid assessment, former Google CEO Eric Schmidt has made headlines with predictions that artificial intelligence is rapidly approaching a point of no return. Speaking at a summit hosted by his think tank, the Special Competitive Studies Project, Schmidt painted a near-term future where AI not only matches human intelligence but potentially surpasses our collective cognitive abilities—and stops taking our instructions.

Key developments according to Schmidt:

  • Timeline acceleration: Schmidt believes researchers will achieve artificial general intelligence (AGI) within just 3-5 years, with artificial superintelligence (ASI) following shortly thereafter

  • Control concerns: Once AI begins self-improving and learning strategic planning, Schmidt suggests it "essentially won't have to listen to us anymore"

  • Unprecedented intelligence: The former Google executive warns that "people don't understand what happens when you have intelligence at this level which is largely free"

The Silicon Valley prophecy problem

The most striking aspect of Schmidt's comments isn't just the timeline but the certainty with which he delivers his predictions. Even as he references what he jokingly calls the "San Francisco consensus" on AI development, there's a disconnect between the tech industry's expectations and the actual progress we're seeing in real-world applications.

This represents classic Silicon Valley futurism—bold predictions that compress decades of potential development into improbably short timeframes. We've seen this pattern repeatedly with technologies from self-driving cars to blockchain, where initial hype dramatically outpaces actual implementation challenges. What makes AI different is both the stakes involved and the industry's remarkable ability to maintain unwavering confidence despite shifting goalposts.

The reality is that while generative AI has made impressive leaps, particularly in content generation and reasoning capabilities, we're witnessing diminishing returns in recent iterations. As one observer in the video noted, "The jump from no AI to co-pilot was incredible. The jump from all this to the newer ones, they're just slight iterations. They don't feel incredible."

Where Schmidt gets it wrong (and right)

Schmidt's timeline seems heavily influenced by what I call the "capability extrapolation fallacy"—the assumption that because AI has progressed rapidly in certain domains, this rate applies universally across all the components needed for AGI. This ignores the likelihood that different capabilities advance at very different rates.
