
Google DeepMind recently showcased Apptronik's humanoid robot Apollo performing household tasks like folding clothes and sorting items in response to natural language commands, powered by its new AI models Gemini Robotics 1.5 and Gemini Robotics-ER 1.5. While the demonstrations appear impressive, experts caution that we're still far from truly autonomous household robots: current systems rely on structured scenarios and extensive training data rather than genuine thinking capabilities.

What you should know: The demonstration featured Apptronik’s Apollo robot completing multi-step tasks using vision-language-action models that convert visual information and instructions into motor commands.

  • Gemini Robotics 1.5 works by “turning visual information and instructions into motor commands,” while Gemini Robotics-ER 1.5 “specializes in understanding physical spaces, planning, and making logistical decisions within its surroundings.”
  • The robots responded to natural language commands to fold laundry, sort recycling, and pack items into bags.
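The division of labor described above — one model reasoning about the task and planning sub-steps, the other translating perception and instructions into motor commands — can be sketched roughly in code. Everything below (the function names, the `Action` type, the control loop) is a hypothetical illustration of the pattern, not DeepMind's actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    """A hypothetical motor command: joint targets for one control step."""
    joint_targets: List[float]

def plan_steps(instruction: str) -> List[str]:
    """Stand-in for the embodied-reasoning role (Gemini Robotics-ER 1.5):
    break a natural-language task into logical sub-steps."""
    if "fold" in instruction:
        return ["pick up garment", "flatten garment", "fold in half"]
    return [instruction]

def act(step: str, camera_frame: bytes) -> Action:
    """Stand-in for the vision-language-action role (Gemini Robotics 1.5):
    map one sub-step plus the current camera view to motor commands."""
    return Action(joint_targets=[0.0] * 7)  # placeholder 7-DoF arm pose

def run(instruction: str, camera_frame: bytes) -> List[Action]:
    """Plan first, then pass each sub-step through the action model."""
    return [act(step, camera_frame) for step in plan_steps(instruction)]

commands = run("fold the laundry", camera_frame=b"")
print(len(commands))  # one motor command per planned sub-step
```

The key point the sketch illustrates is that neither stage involves open-ended reasoning: the planner decomposes tasks it has seen patterns for, and the action model maps those to trained behaviors — consistent with the "defined set of rules" caveat below.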

Why this matters: The integration of large language models with robotic systems represents a significant step toward more intuitive human-robot interaction, but fundamental limitations remain for real-world deployment.

  • Current systems work well in controlled environments with abundant training data, but struggle with the unpredictability of actual household settings.
  • The technology addresses a long-standing goal in robotics: creating general-purpose robots that can perform routine tasks through simple verbal instructions.

The reality check: Ravinder Dahiya, a Northeastern University professor of electrical and computer engineering, emphasizes that despite impressive demonstrations, these robots aren’t actually “thinking” independently.

  • “It becomes easy to iterate visual and language models in this case because there is a good amount of data,” Dahiya explains, noting that vision AI has existed for years.
  • The robots operate on “a very defined set of rules” backed by “heaps of high-quality training data and structured scenario planning and algorithms.”

Missing capabilities: Current humanoid robots lack crucial sensing abilities that humans take for granted, limiting their effectiveness in complex environments.

  • Unlike vision data, there’s insufficient training data for tactile feedback, which is essential for manipulating both soft and hard objects.
  • Robots still cannot register pain, smell, or other sensory inputs that would be necessary for uncertain environments.
  • “For uncertain environments, you need to rely on all sensor modalities, not just vision,” Dahiya notes.

What’s next: Researchers like Dahiya are developing advanced sensing technologies, including electronic robot skins, to give robots more human-like capabilities.

  • These developments aim to provide robots with touch and tactile feedback, though progress remains slow due to limited training data.
  • The path to truly autonomous household robots will require breakthroughs across multiple sensing modalities beyond just vision and language processing.
