Gary Marcus: How Outliers Expose the AI Industry’s Fragile Future

The rapid rise and potential fall of the current AI industry can be largely explained by one crucial fact: AI struggles with outliers, leading to absurd outputs when faced with unusual situations.

The outlier problem: Current machine learning approaches, which underpin most of today’s AI, perform poorly when encountering circumstances that deviate from their training examples:

  • Carnegie Mellon computer scientist Phil Koopman illustrates the issue with a driverless-car accident involving an overturned double trailer: the AI system failed to recognize the obstacle because nothing like it appeared in its training data.
  • This limitation, also known as the problem of distribution shift, has dogged neural networks since the 1990s, as documented in papers Marcus co-authored with Steven Pinker.
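Distribution shift is easy to demonstrate even in a toy setting. The following sketch (an illustrative example of the general phenomenon, not Koopman's or Marcus's) fits a simple model on a narrow input range, where it is accurate, and then queries it on an outlier far outside that range, where it fails badly:

```python
import numpy as np

# Toy illustration of distribution shift: a model fit only on
# "familiar" inputs can fail badly on outliers it never saw.
rng = np.random.default_rng(0)

# Training data: inputs confined to the narrow range [0, 3].
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train)

# Fit a cubic polynomial -- a good approximation *within* that range.
coeffs = np.polyfit(x_train, y_train, deg=3)

def predict(x):
    return np.polyval(coeffs, x)

# In-distribution input: the error is small.
in_dist_err = abs(predict(1.5) - np.sin(1.5))

# Out-of-distribution "outlier" input: the error explodes.
out_dist_err = abs(predict(10.0) - np.sin(10.0))

print(f"in-distribution error:     {in_dist_err:.4f}")
print(f"out-of-distribution error: {out_dist_err:.4f}")
```

The model has no notion that x = 10 lies outside everything it was fit on; it extrapolates confidently and is off by orders of magnitude, which is the essence of the outlier problem Marcus describes.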

An industry built on false promises: Many individuals and companies have achieved wealth and fame by downplaying or ignoring the outlier problem in AI:

  • They have created expectations that current AI techniques cannot fulfill, as they are bound to fail when faced with situations that are significantly different from their training data.
  • Gary Marcus predicts an imminent “bubble deflation” in the AI industry, as more people recognize the limitations of generative AI (GenAI) in handling outliers.

Implications for Artificial General Intelligence (AGI): Understanding the outlier problem is crucial for assessing the feasibility of achieving AGI in the near future:

  • Those who grasp the severity of the outlier issue in current neural networks will realize that claims about the imminence of AGI by prominent figures like Sam Altman, Elon Musk, and Ray Kurzweil are unrealistic and comparable to “imagining that really tall ladders will soon make it to the moon.”
  • Without a general solution to the outlier problem, the notion of being close to achieving AGI is untenable.

The divide in AI understanding: Marcus suggests that there is a clear split in the understanding of AI among individuals:

  • Those who comprehend the significance of the outlier problem and its implications for the current AI industry and the pursuit of AGI.
  • Those who do not grasp this issue and continue to make grandiose claims about the capabilities and potential of AI.

Broader implications: The outlier problem has consequences that extend well beyond the current hype cycle. As AI's failures on unusual situations become more visible, the industry is likely to be significantly reshaped and AGI timelines reassessed. That reckoning may push development toward a more measured approach, one that addresses fundamental challenges like the outlier problem before making bold claims about the technology's potential. It should also inform policy discussions and public discourse, so that decisions rest on a clear grasp of what current AI cannot yet do and of the work still required to close that gap.

This one important fact about current AI explains almost everything
