When someone who’s been at the forefront of AI research for decades says they’re “not so interested in LLMs anymore,” it’s worth paying attention. That’s exactly what Yann LeCun, Meta’s Chief AI Scientist and one of the godfathers of modern AI, declared at Nvidia’s GTC 2025 conference.
In a tech landscape where large language models (LLMs) like ChatGPT and Claude dominate headlines and investment dollars, LeCun’s statement might seem surprising. But his reasoning offers a fascinating glimpse into where AI might be headed next.
According to LeCun, LLMs are now mostly “in the hands of industry product people” who are making incremental improvements with more data and computing power. Instead, he’s focused on four more fundamental challenges: getting machines to understand the physical world, to have persistent memory, to reason, and to plan.
“I’m excited about things that a lot of people in this community might get excited about five years from now,” LeCun explained, suggesting his interests have already moved beyond where most of the industry is focused today.
LeCun highlighted a fundamental limitation of current AI: LLMs are built around predicting discrete tokens (words or word parts), which just isn’t how the physical world works.
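To see the distinction concretely, consider what each kind of prediction looks like in code. The PyTorch sketch below is purely illustrative (the layer sizes and variable names are assumptions, not details from LeCun’s talk): an LLM’s output head is a classifier over a finite vocabulary, while predicting the next state of the world amounts to regression onto a continuous, high-dimensional space.

```python
import torch
import torch.nn as nn

# A minimal sketch of the contrast LeCun is drawing. All sizes and names
# here are illustrative assumptions, not anything from his talk.

HIDDEN = 512             # size of some internal representation (assumed)
VOCAB_SIZE = 50_000      # an LLM picks its next output from a finite token menu
FRAME_DIM = 64 * 64 * 3  # even a tiny 64x64 RGB frame is a 12,288-dim continuous vector

hidden = torch.randn(1, HIDDEN)  # stand-in for a model's internal state

# Language modeling: classification over a discrete, finite vocabulary.
token_head = nn.Linear(HIDDEN, VOCAB_SIZE)
next_token_probs = torch.softmax(token_head(hidden), dim=-1)
print(next_token_probs.shape, next_token_probs.sum())  # (1, 50000), sums to 1.0

# Predicting the physical world: regression onto a continuous space where
# there is no finite list of outcomes and many futures are equally plausible.
frame_head = nn.Linear(HIDDEN, FRAME_DIM)
predicted_next_frame = frame_head(hidden)
print(predicted_next_frame.shape)  # (1, 12288) -- real-valued, unbounded
```

A softmax can enumerate and score every possible next token; nothing analogous exists for every possible next video frame, which is the gap LeCun is pointing at.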
While token prediction works well for text, it falls short when dealing with the continuous, high-dimensional nature of the physical world. As he put it: