New research suggests ‘non-verbal reasoning’ will make AI models more powerful

The development of non-verbal reasoning capabilities in Large Language Models (LLMs) represents a significant shift in how artificial intelligence systems work through complex logical problems, moving beyond purely language-based computation.

Key innovation: COCONUT (Chain Of CONtinUous Thought) introduces a novel approach to AI reasoning by processing information in the “latent space” – the internal numerical representation a neural network computes with before its output is decoded into human-readable text.

  • This approach allows multiple logical paths to be evaluated simultaneously, much as a breadth-first search algorithm explores many branches of a problem in parallel
  • Rather than converting every thought to and from natural language, COCONUT maintains information in its computational form throughout the reasoning process (a minimal sketch of this feedback loop follows this list)
  • The “latent thoughts” approach helps prevent the model from getting stuck on incorrect logical paths or fabricating false rules
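To make the mechanism concrete, here is a minimal sketch of that feedback loop, written against the Hugging Face transformers API with GPT-2 as a stand-in model. COCONUT itself is trained with a dedicated curriculum on fine-tuned models, so treat this as an illustration of the idea rather than the paper's implementation; the number of latent steps below is an arbitrary choice.

```python
# Minimal sketch of a "continuous thought" loop: instead of decoding a token
# after each reasoning step, the final hidden state is appended back to the
# input embeddings and processed again. GPT-2 is a stand-in model here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "If A implies B and B implies C, does A imply C?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(input_ids)  # (1, seq_len, hidden_dim)

NUM_LATENT_STEPS = 4  # arbitrary here; COCONUT is trained to use latent steps
with torch.no_grad():
    for _ in range(NUM_LATENT_STEPS):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        # The last layer's hidden state at the final position is the "thought".
        thought = out.hidden_states[-1][:, -1:, :]
        # Feed it back as the next input embedding: no token is produced, so
        # nothing is lost to the discrete vocabulary bottleneck.
        embeds = torch.cat([embeds, thought], dim=1)

    # After the latent steps, switch back to ordinary token-space decoding.
    logits = model(inputs_embeds=embeds).logits
    next_token = logits[:, -1, :].argmax(dim=-1)

print(tokenizer.decode(next_token))
```

Because no token is sampled between steps, the model never has to commit to a single verbalized path mid-reasoning, which is what allows the parallel exploration described above.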

Performance insights: COCONUT demonstrates particular strength in handling complex logical problems with multiple conditions, though its advantages are less pronounced in basic mathematical or general reasoning tasks.

  • The model shows improved efficiency on intricate logical puzzles that require tracking multiple variables (a toy illustration follows this list)
  • Traditional language-based reasoning approaches often struggle with similarly complex scenarios because every logical step must be spelled out in natural language
  • The system’s performance suggests that continuous processing might better mirror how human brains handle abstract reasoning tasks
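To ground the breadth-first-search analogy from earlier, here is a toy solver over an invented implication graph. The rule set and query are hypothetical stand-ins for the kind of multi-condition puzzle described above; the point is that a breadth-first frontier keeps several candidate chains alive at once instead of committing to one.

```python
# Toy breadth-first search over "X implies Y" rules (invented for illustration).
from collections import deque

RULES = {
    "A": ["B", "C"],  # A implies B, and A implies C
    "B": ["D"],
    "C": ["E"],
    "E": ["D"],
}

def entails(start: str, goal: str) -> bool:
    """Return True if `goal` is reachable from `start` via the implication rules.
    The frontier holds every live hypothesis simultaneously, analogous to
    latent reasoning weighing multiple logical paths in parallel."""
    frontier, seen = deque([start]), {start}
    while frontier:
        fact = frontier.popleft()
        if fact == goal:
            return True
        for implied in RULES.get(fact, []):
            if implied not in seen:
                seen.add(implied)
                frontier.append(implied)
    return False

print(entails("A", "D"))  # True: both A->B->D and A->C->E->D reach the goal
```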

Technical implementation: The research operates at the level of the model’s internal hidden states rather than its token outputs, revealing new possibilities for how AI systems can process information.

  • By maintaining thoughts in latent space, the model can manipulate abstract concepts more efficiently than traditional language-based approaches
  • The computational architecture allows for parallel processing of multiple logical pathways
  • This approach potentially reduces the computational overhead of constantly converting hidden states to and from tokens (contrast sketch below)
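One way to see the overhead in question, as a hypothetical contrast (the layer shapes below are GPT-2-sized but otherwise arbitrary): a language-based reasoning step must collapse its hidden state to a single discrete token before re-embedding it, while a latent step hands the full vector to the next step untouched.

```python
# Hypothetical contrast between a token round-trip and a latent pass-through.
import torch

HIDDEN, VOCAB = 768, 50257                            # GPT-2-sized, otherwise arbitrary
unembed = torch.nn.Linear(HIDDEN, VOCAB, bias=False)  # hidden state -> vocabulary logits
embed = torch.nn.Embedding(VOCAB, HIDDEN)             # token id -> embedding

h = torch.randn(1, HIDDEN)  # a model's final hidden state for one position

# Language-based step: project to the vocabulary, keep only the argmax token,
# then re-embed it. Everything but that one token's identity is discarded.
token_id = unembed(h).argmax(dim=-1)
h_language = embed(token_id)

# Latent ("continuous thought") step: the vector itself is the next input;
# nothing is discarded, and no projection/re-embedding happens between steps.
h_latent = h

print(h_language.shape, h_latent.shape)  # same shape, very different information
```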

Future implications: The research suggests that training models with “continuous thoughts” could lead to more versatile and capable AI systems that can better generalize across different types of reasoning challenges.

  • Models pre-trained with this approach might handle a broader range of logical problems more effectively
  • The findings could influence the development of future AI architectures that combine language and non-verbal processing capabilities
  • This hybrid approach might better mirror human cognitive processes, which often involve both verbal and non-verbal reasoning

Looking beyond language: While COCONUT’s capabilities advance our understanding of AI reasoning, questions remain about how closely this computational approach mirrors human cognitive processes and whether it can be effectively scaled to handle more complex real-world reasoning scenarios.

Source: Are LLMs capable of non-verbal reasoning?
