The rapid rise of artificial intelligence has sparked intense debate between those heralding its revolutionary potential and those dismissing it as overhyped, prompting two Princeton scholars to offer a more nuanced perspective in their new book “AI Snake Oil.”
The core argument: Princeton computer science professor Arvind Narayanan and Ph.D. candidate Sayash Kapoor aim to help people distinguish between AI’s genuine capabilities and empty promises.
- Their book “AI Snake Oil” focuses on providing foundational knowledge to separate legitimate AI advances from misleading claims
- The authors argue that while some AI applications show remarkable progress, many marketed AI products make unfounded or impossible claims
- A key example includes AI hiring tools that claim to predict job performance based on brief video interviews, despite lacking scientific evidence
Critical distinctions: The authors emphasize that lumping all AI technologies together is as misleading as failing to distinguish between bikes, cars, and spaceships.
- Generative AI (like ChatGPT) and predictive AI (used for credit scoring) are fundamentally different technologies requiring separate evaluation
- While generative AI has shown impressive year-over-year improvements, predictive AI still relies largely on decades-old tools
- The authors warn that conflating different AI technologies leads to public confusion and misguided concerns
Real-world impacts: The most consequential AI applications affecting people’s lives today are predictive AI systems making high-stakes decisions.
- These algorithms determine crucial outcomes like bail amounts, hospital stay durations, and hiring decisions
- Insurance coverage may be denied based on AI predictions about recovery time
- Because many employers rely on the same few AI vendors, a single candidate could face repeated rejections across multiple job applications
Industry influence concerns: The concentration of AI development among major tech companies raises significant issues about the future direction of the technology.
- Large language models can only be developed by the biggest tech labs, giving companies like OpenAI, Google, and Facebook outsized influence
- These same companies are among the biggest spenders on political lobbying
- The centralization of AI development allows industry giants to drive the technical agenda
Positive outlook: Despite concerns, the authors see promising developments in AI’s future.
- They anticipate beneficial AI tools will become seamlessly integrated into knowledge workers’ daily workflows
- Previously “intractable” problems like spellcheck have become routine automated features
- Self-driving technology could potentially help reduce the approximately 1 million annual global auto-related fatalities
Looking ahead: The evolution of AI technology suggests a pattern where controversial cutting-edge applications eventually become normalized, useful tools – much like spellcheck and autocomplete today – while the most concerning current applications may be replaced by more reliable alternatives.
‘AI Snake Oil’: A conversation with Princeton AI experts Arvind Narayanan and Sayash Kapoor