Why Meta’s AI boss is ‘done’ with LLMs – and what might replace them

When someone who’s been at the forefront of AI research for decades says they’re “not so interested in LLMs anymore,” it’s worth paying attention. That’s exactly what Yann LeCun, Meta’s Chief AI Scientist and one of the godfathers of modern AI, declared at Nvidia’s GTC 2025 conference.

In a tech landscape where large language models (LLMs) like ChatGPT and Claude dominate headlines and investment dollars, LeCun’s statement might seem surprising. But his reasoning offers a fascinating glimpse into where AI might be headed next.

Four areas that matter more than LLMs

According to LeCun, LLMs are now mostly “in the hands of industry product people” who are making incremental improvements with more data and computing power. Instead, he’s focused on four more fundamental challenges:

  1. Understanding the physical world – How machines can grasp the real, physical environment around them
  2. Persistent memory – How AI systems can maintain and utilize long-term information
  3. Reasoning – Moving beyond the “simplistic” reasoning of current LLMs
  4. Planning – Enabling AI to formulate and execute complex plans

“I’m excited about things that a lot of people in this community might get excited about five years from now,” LeCun explained, suggesting his interests have already moved beyond where most of the industry is focused today.

Why text alone can’t lead to general AI

LeCun highlighted a fundamental limitation of current AI: LLMs are built around predicting discrete tokens (words or word parts), which just isn’t how the physical world works.

While token prediction works well for text, it falls short when dealing with the continuous, high-dimensional nature of the physical world.
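The contrast LeCun draws can be made concrete. An LLM's output step is a choice of one token from a finite vocabulary; a model of the physical world must predict a continuous, high-dimensional next state. A toy sketch (the numbers and the "dynamics" here are illustrative stand-ins, not any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete prediction (LLM-style): pick 1 of V tokens via a softmax
# over the model's output scores.
V = 50_000                           # vocabulary size (toy figure)
logits = rng.normal(size=V)          # stand-in for a model's logits
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_token = rng.choice(V, p=probs)  # one symbol from a finite set

# Continuous prediction (physical world): the "next state" is a
# real-valued, high-dimensional array, e.g. a 64x64 RGB frame.
state = rng.normal(size=(64, 64, 3))
next_state = state + 0.01 * rng.normal(size=state.shape)  # toy dynamics

print(int(next_token))        # a single integer index
print(next_state.shape)       # (64, 64, 3): 12,288 continuous values
```

The asymmetry is the point: the discrete case has a well-defined probability over a finite set, while the continuous case has no natural finite vocabulary to predict over, which is why LeCun argues token prediction alone can't capture how the physical world works.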
