Push, pull, sniff: AI perception research advances beyond sight to touch and smell

New research suggests that AI models, because they lack bodies, fall short of a human-level understanding of sensory and physical concepts despite their sophisticated language abilities. The finding carries significant implications for AI development: multimodal training that incorporates sensory information may be crucial for creating systems with more human-like comprehension.

The big picture: Researchers at Ohio State University discovered a fundamental gap between how humans and large language models understand concepts related to physical sensations and bodily interactions.

  • The study compared how nearly 4,500 words were conceptualized by humans versus AI models like GPT-4 and Google’s Gemini.
  • While the AI systems aligned with humans on abstract concepts, they diverged sharply on concepts grounded in sensory experience and physical interaction; a sketch of how such human-model alignment can be quantified follows this list.
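
The idea of "alignment" here can be made concrete with a small example: gather human ratings of words along a sensory dimension, elicit comparable ratings from a model, and check how closely the two sets of judgments agree. The sketch below is a minimal illustration of that idea only, not the study's actual data, prompts, or models; the words, ratings, and the smell dimension are hypothetical placeholders.

```python
# A minimal sketch, assuming hypothetical rating data: compare human
# sensorimotor ratings of words with ratings elicited from a language model,
# using rank correlation as a simple measure of alignment.
from scipy.stats import spearmanr

# Hypothetical human ratings (0-5) of how strongly each word is experienced
# through smell, in the spirit of sensorimotor-norm datasets.
human_smell_ratings = {"flower": 4.6, "bread": 4.2, "rain": 3.1, "idea": 0.2}

# Hypothetical model ratings on the same scale; in a real pipeline these would
# come from prompting an LLM to rate each word and parsing its numeric answer.
model_smell_ratings = {"flower": 2.0, "bread": 3.8, "rain": 2.9, "idea": 0.3}

words = list(human_smell_ratings)
human = [human_smell_ratings[w] for w in words]
model = [model_smell_ratings[w] for w in words]

# Rank correlation between the two sets of ratings: the closer rho is to 1,
# the more the model's notion of "smell-relatedness" tracks human judgments.
rho, p_value = spearmanr(human, model)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

With only a handful of words this is purely illustrative; the study's comparison spanned nearly 4,500 words across multiple sensory and motor dimensions.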

Key details: The research revealed AI models have unusual interpretations of sensory concepts due to their text-only training.

  • The models oddly associated the experience of flowers with the torso, rather than with sight or smell as humans naturally do.
  • The study evaluated multiple leading AI systems including OpenAI’s GPT-3.5 and GPT-4, as well as Google’s PaLM and Gemini.

What they’re saying: “They just differ so much from humans,” notes lead researcher Qihui Xu, pointing to the limitations of text-based training for understanding sensory concepts.

Promising developments: AI models trained on multiple types of data showed more human-like understanding.

  • Models trained on visual information in addition to text demonstrated closer alignment with human word ratings.
  • “This tells us the benefits of doing multi-modal training might be larger than we expected. It’s like one plus one actually can be greater than two,” explains Xu.

Why this matters: The findings suggest that embodiment could be crucial for developing more human-like artificial intelligence.

  • The research supports the importance of multimodal models and physical embodiment in advancing AI capabilities.

Potential challenges: University of Maryland researcher Philip Feldman warns that giving AI robots physical bodies presents significant safety concerns.

  • Robots with mass could cause physical harm if their understanding of physical interactions is flawed.
  • Using only soft robots for training could create its own problems, as the AI might incorrectly learn that high-speed collisions have no consequences.
