New research suggests ‘non-verbal reasoning’ will make AI models more powerful

The development of non-verbal reasoning capabilities in Large Language Models (LLMs) represents a significant shift in how artificial intelligence systems process complex logical problems, moving beyond pure language-based computation.

Key innovation: COCONUT (Chain Of CONtinUous Thought) introduces a novel approach to AI reasoning by processing information in the “latent space” – the internal numerical representation in which a neural network performs its calculations before anything is translated into human-readable output.

  • This approach lets multiple logical paths be evaluated simultaneously, much like a breadth-first search exploring several branches at once
  • Rather than converting every thought to and from natural language, COCONUT keeps information in its computational form throughout the reasoning process (see the sketch after this list)
  • The “latent thoughts” approach helps prevent the model from getting stuck on incorrect logical paths or fabricating false rules
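
To make the idea concrete, here is a minimal sketch of a continuous-thought loop. It uses a toy PyTorch model as a stand-in for a real LLM; the class, layer sizes, and number of latent steps are hypothetical and only illustrate the control flow described above – hidden states fed back in as the next input embedding instead of being decoded into words.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM: an embedding table, a GRU "decoder," and an output
# head. Names and sizes are illustrative assumptions, not the paper's code.
class TinyLM(nn.Module):
    def __init__(self, vocab_size=100, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.lm_head = nn.Linear(hidden_dim, vocab_size)

model = TinyLM()
prompt = torch.randint(0, 100, (1, 5))    # token ids for the question
emb = model.embed(prompt)                 # (1, 5, 64) input embeddings
out, state = model.rnn(emb)               # encode the prompt
latent = out[:, -1:, :]                   # last hidden state, shape (1, 1, 64)

# "Continuous thoughts": feed the hidden state straight back in as the next
# input embedding for a few steps, never decoding to a token in between.
for _ in range(4):
    latent, state = model.rnn(latent, state)

# Only after the latent reasoning steps is the result projected back into
# vocabulary space to produce a human-readable answer token.
logits = model.lm_head(latent[:, -1, :])
answer_token = logits.argmax(dim=-1)
```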

Performance insights: COCONUT demonstrates particular strength in handling complex logical problems with multiple conditions, though its advantages are less pronounced in basic mathematical or general reasoning tasks.

  • The model shows improved efficiency when dealing with intricate logical puzzles that require considering multiple variables
  • Traditional language-based reasoning approaches often struggle with similarly complex scenarios because expressing every logical step in natural language becomes a bottleneck
  • The system’s performance suggests that continuous processing might better mirror how human brains handle abstract reasoning tasks

Technical implementation: The research operates at the level of the network’s internal representations rather than its text output, revealing new possibilities for how AI systems can process information.

  • By maintaining thoughts in latent space, the model can manipulate abstract concepts more efficiently than traditional language-based approaches
  • The computational architecture allows for parallel processing of multiple logical pathways
  • This approach potentially reduces the computational overhead of constantly converting between latent and language representations, as the sketch below illustrates
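
As a rough illustration of the conversion-overhead point above, the sketch below contrasts a single chain-of-thought step, which round-trips through vocabulary space, with a continuous-thought step that reuses the hidden state directly. The tensor names and sizes are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

hidden_dim, vocab_size = 64, 100
embed = nn.Embedding(vocab_size, hidden_dim)   # hypothetical embedding table
lm_head = nn.Linear(hidden_dim, vocab_size)    # hypothetical output head

hidden = torch.randn(1, hidden_dim)   # hidden state after one reasoning step

# Standard chain-of-thought step: project into language space, commit to a
# single token, then re-embed that token before the next forward pass.
logits = lm_head(hidden)              # hidden space -> vocabulary logits
token = logits.argmax(dim=-1)         # everything except one word is discarded
next_input_cot = embed(token)         # vocabulary -> back to hidden space

# Continuous-thought step: reuse the hidden state directly as the next input,
# skipping both conversions and keeping the full vector of information intact.
next_input_coconut = hidden
```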

Future implications: The research suggests that training models with “continuous thoughts” could lead to more versatile and capable AI systems that can better generalize across different types of reasoning challenges.

  • Models pre-trained with this approach might handle a broader range of logical problems more effectively
  • The findings could influence the development of future AI architectures that combine language and non-verbal processing capabilities
  • This hybrid approach might better mirror human cognitive processes, which often involve both verbal and non-verbal reasoning

Looking beyond language: While COCONUT’s capabilities advance our understanding of AI reasoning, questions remain about how closely this computational approach mirrors human cognitive processes and whether it can be effectively scaled to handle more complex real-world reasoning scenarios.

Are LLMs capable of non-verbal reasoning?
