There’s a Humongous Problem With AI Models: They Need to Be Entirely Rebuilt Every Time They’re Updated

AI models face significant learning limitations: Recent research reveals that deep learning models, including large language models, struggle to incorporate new information without complete retraining.
- A study published in Nature by scientists from the University of Alberta highlights a major flaw in AI models’ ability to learn continuously.
- Deep learning models, which find patterns in vast amounts of data, break down in “continual learning” settings, where new concepts are introduced after the initial training is complete.
- Attempting to teach an existing deep learning model new information often requires retraining it entirely from scratch.
The plasticity problem: When new information is introduced to existing AI models, their artificial neurons can lose their ability to learn, significantly impacting their functionality.
- Without complete retraining, many of a model’s artificial neurons can get stuck outputting zero, producing a loss of “plasticity,” the ability to keep learning (a toy illustration follows this list).
- Lead study author Shibhansh Dohare likens this to 90 percent of neurons in a human brain becoming dead, leaving insufficient capacity for learning.
- This plasticity loss creates a significant barrier between current AI models and artificial general intelligence (AGI), the long-sought goal of matching human-level intelligence.
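The effect is easy to reproduce in miniature. The sketch below is a hypothetical toy, not the paper’s code: it trains a small ReLU network on a stream of freshly sampled regression tasks (a crude stand-in for a continual learning setting) and reports how many hidden units have gone “dead,” i.e. output zero for every input and therefore receive no gradient. All names and hyperparameters here are illustrative assumptions.

```python
# Toy illustration (assumed setup, not the study's code): train a tiny ReLU
# network on a stream of random regression tasks and track how many hidden
# units go "dead" (silent on every input), a simple proxy for plasticity loss.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, lr = 10, 64, 0.01
W1 = rng.normal(0, 0.3, (n_in, n_hidden))   # input-to-hidden weights
b1 = np.zeros(n_hidden)                     # hidden biases
w2 = rng.normal(0, 0.3, n_hidden)           # hidden-to-output weights

def dead_fraction(X):
    h = np.maximum(X @ W1 + b1, 0.0)        # ReLU activations
    return np.mean(h.max(axis=0) == 0.0)    # units silent on every probe input

for task in range(5):
    # Each "task" is a fresh random target function: a continual setting.
    w_true = rng.normal(0, 1, n_in)
    for step in range(2000):
        X = rng.normal(0, 1, (32, n_in))
        y = X @ w_true
        h = np.maximum(X @ W1 + b1, 0.0)
        pred = h @ w2
        err = pred - y                          # squared-error gradient signal
        grad_h = np.outer(err, w2) * (h > 0)    # ReLU gate: dead units get none
        w2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ grad_h / len(X)
        b1 -= lr * grad_h.mean(axis=0)
    X_probe = rng.normal(0, 1, (1000, n_in))
    print(f"task {task}: {dead_fraction(X_probe):.1%} dead hidden units")
```

The key mechanism is visible in the `(h > 0)` gate: once a unit stops activating, no gradient flows through it, so it cannot recover on its own, which is why the fraction of dead units can only accumulate as tasks go by.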
Financial implications for AI companies: The necessity for complete retraining presents a substantial financial obstacle for AI companies.
- Training advanced AI models is an expensive and resource-intensive process.
- For large language models using substantial portions of internet data, each retraining could cost millions of dollars in computation.
- This financial burden adds to the already high operational costs of AI companies.
Potential solutions and ongoing challenges: Researchers are exploring ways to address the plasticity problem, but a comprehensive solution remains elusive.
- The study authors developed an algorithm that randomly revives certain dormant or “dead” AI neurons, with some success in restoring plasticity; a simplified sketch of the revival idea follows this list.
- However, a practical and comprehensive solution for continual learning in AI models is still out of reach.
- Dohare emphasizes that solving the continual learning problem could significantly reduce the cost of training these models, making it a “billion-dollar question” for the AI industry.
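The sketch below captures the spirit of that revival idea in the same toy setting as the earlier example, but it is a loose simplification under assumed details, not the authors’ published algorithm: find hidden units that are silent on a probe batch and reinitialize them, zeroing their outgoing weight so the revival does not disturb the network’s current outputs. The function name and parameters are hypothetical.

```python
# Hedged sketch of the revival idea (assumed simplification, not the paper's
# algorithm): periodically reinitialize hidden units that are dead on a probe
# batch, restoring their capacity to learn without retraining from scratch.
import numpy as np

def revive_dead_units(W1, b1, w2, X_probe, rng, scale=0.3):
    """Reinitialize hidden units that output zero for every probe input."""
    h = np.maximum(X_probe @ W1 + b1, 0.0)
    dead = np.where(h.max(axis=0) == 0.0)[0]
    for j in dead:
        W1[:, j] = rng.normal(0, scale, W1.shape[0])  # fresh incoming weights
        b1[j] = 0.0
        w2[j] = 0.0  # zero outgoing weight so revival doesn't perturb outputs
    return len(dead)
```

Calling `revive_dead_units` every few hundred steps inside the training loop above would keep a pool of trainable units available across tasks; zeroing the outgoing weight is a common trick for adding fresh capacity without changing what the network currently computes.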
Broader implications for AI development: The plasticity problem highlights the fundamental differences between artificial and human intelligence, raising questions about the future of AI advancement.
- This limitation underscores the challenges in creating truly adaptable and continuously learning AI systems.
- It also emphasizes the need for innovative approaches to AI model development that can more closely mimic human learning processes.
- As the AI industry continues to evolve, addressing this limitation could be crucial for the development of more advanced and cost-effective AI technologies.