Is AI really that close to human-level intelligence?

The continued advancement of artificial intelligence systems, particularly large language models (LLMs), has reignited discussions about the possibility of achieving artificial general intelligence (AGI) – machines capable of performing the full range of human cognitive tasks.

Current state of AI capabilities: OpenAI’s latest model o1 represents a significant advancement in AI technology, showcasing improved reasoning abilities and performance on complex tasks.

  • The model solved 83% of problems on a qualifying exam for the International Mathematical Olympiad, compared with its predecessor's 13%
  • o1 uses chain-of-thought (CoT) reasoning, breaking complex problems down into manageable intermediate steps before answering
  • The system demonstrates broader capabilities than previous AI models, though still falls short of human-like general intelligence

Technical underpinnings: Large language models operate through a sophisticated process of pattern recognition and prediction, powered by transformer architecture.

  • LLMs are trained on “next-token prediction,” learning to predict the token that follows a given stretch of text
  • The transformer architecture uses self-attention to track relationships between tokens across long spans of text
  • These systems can process various types of data beyond text, including images and audio
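The next-token-prediction objective can be sketched at a toy scale. Real LLMs learn it with a transformer trained over billions of tokens; the version below, a simple bigram count model over a tiny made-up corpus, is only meant to show what "predict the next token" means. The corpus and function names are invented for illustration.

```python
# Toy next-token prediction: count which token follows which in a tiny
# corpus, then predict the most frequent follower. An LLM optimizes the
# same objective, but with a learned transformer instead of raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each token, count how often each other token follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the token most often seen after `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

A key difference is context length: a bigram model conditions on one preceding token, whereas a transformer's attention lets it condition on thousands of preceding tokens at once, which is what the long-range context in the bullet above refers to.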

Key limitations: Despite impressive capabilities, current LLMs face several significant constraints.

  • Performance degrades rapidly on planning tasks requiring more than 16-20 steps
  • Models struggle with abstract reasoning and generalizing knowledge to novel situations
  • Available training data is expected to run out between 2026 and 2032
  • Improvements from increasing model size are showing diminishing returns

Expert perspectives: Leading researchers remain divided on the path to AGI.

  • Yoshua Bengio of the University of Montreal emphasizes that crucial components are still missing
  • Google DeepMind’s Raia Hadsell suggests that next-token prediction alone is insufficient for achieving AGI
  • Researchers increasingly point to the need for AI systems to develop “world models” similar to human cognition

Future directions: The path toward AGI likely requires fundamental breakthroughs beyond current LLM capabilities.

  • Development of systems that can generate solutions holistically rather than sequentially
  • Integration of world modeling capabilities to enable better planning and reasoning
  • New architectures that can better handle novel situations and generalize learned knowledge

Looking ahead: While current AI systems demonstrate impressive capabilities in specific domains, true artificial general intelligence remains a significant technical challenge requiring fundamental advances in how AI systems process information and understand the world. The gap between current LLMs and human-level intelligence suggests that achieving AGI will require more than simply scaling existing approaches.

