New research suggests that, despite their sophisticated language capabilities, AI models lack a full human-level understanding of sensory and physical concepts because of their disembodied nature. This finding has significant implications for AI development, suggesting that multimodal training incorporating sensory information may be crucial for creating systems with more human-like comprehension.
The big picture: Researchers at Ohio State University discovered a fundamental gap between how humans and large language models understand concepts related to physical sensations and bodily interactions.
- The study compared how nearly 4,500 words were conceptualized by humans versus AI models like GPT-4 and Google’s Gemini.
- While AI systems aligned with humans on abstract concepts, they significantly diverged when it came to understanding sensory experiences and physical interactions.
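The kind of comparison described above can be sketched in miniature: human ratings of how strongly a word evokes a given sense are correlated against a model's ratings on the same scale. Everything below is invented for illustration; the actual study covered nearly 4,500 words and multiple sensory and motor dimensions, not this handful.

```python
# Toy sketch (hypothetical data, not the study's): compare human vs. model
# ratings (0-5) of how strongly each word involves the sense of smell,
# using Pearson correlation as a simple alignment measure.

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up ratings for a few words on the "smell" dimension.
words       = ["flower", "justice", "bread", "theory", "smoke"]
human_smell = [4.8, 0.2, 4.1, 0.1, 4.5]
model_smell = [3.9, 0.3, 3.5, 0.4, 4.0]

r = pearson(human_smell, model_smell)
print(f"smell-dimension alignment: r = {r:.2f}")
```

A high correlation on a dimension would indicate human-like conceptualization there; the study found alignment held for abstract dimensions but broke down for sensory and bodily ones.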
Key details: The research revealed AI models have unusual interpretations of sensory concepts due to their text-only training.
- AI models bizarrely associated experiencing flowers with the torso rather than with sight or smell, as humans naturally would.
- The study evaluated multiple leading AI systems including OpenAI’s GPT-3.5 and GPT-4, as well as Google’s PaLM and Gemini.
What they’re saying: “They just differ so much from humans,” notes lead researcher Qihui Xu, pointing to the limitations of text-based training for understanding sensory concepts.
Promising developments: AI models trained on multiple types of data showed more human-like understanding.
- Models trained on visual information in addition to text demonstrated closer alignment with human word ratings.
- “This tells us the benefits of doing multi-modal training might be larger than we expected. It’s like one plus one actually can be greater than two,” explains Xu.
Why this matters: The findings suggest that embodiment could be crucial for developing more human-like artificial intelligence.
- The research supports the importance of multimodal models and physical embodiment in advancing AI capabilities.
Potential challenges: University of Maryland researcher Philip Feldman warns that giving AI robots physical bodies presents significant safety concerns.
- Robots with mass could cause physical harm if their understanding of physical interactions is flawed.
- Using only soft robots for training could create its own problems, as the AI might incorrectly learn that high-speed collisions have no consequences.