Google’s Gemini-Powered Robot Navigates Offices, Follows Complex Commands

Google DeepMind has unveiled a chatbot-powered robot capable of navigating an office environment and following complex verbal and visual instructions, demonstrating the potential for large language models to enable more intelligent and useful physical machines.

Gemini chatbot upgrade enables advanced robot capabilities: Google DeepMind’s robot leverages the latest version of the company’s Gemini large language model to understand commands and navigate its surroundings:

  • The robot can parse complex verbal instructions like “Find me somewhere to write” and lead a person to an appropriate location, such as a whiteboard.
  • Gemini’s ability to handle video and text input, combined with pre-recorded video tours of the office, allows the robot to reason about its environment and navigate accurately.
  • When given a command like “Where did I leave my coaster?”, the robot was up to 90% reliable at navigating to the correct location.

Integrating language models with robotics algorithms: The Google helper robot combines the Gemini language model with an algorithm that generates specific actions for the robot to take in response to commands and its visual input.

  • This integration of natural language processing and robotics enables more intuitive human-robot interaction and greatly improves the robot’s usability.
  • Researchers plan to test the system on different types of robots and believe Gemini will be able to handle even more complex questions that require contextual understanding.
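The article describes a two-stage design: a multimodal model reasons over the instruction and a pre-recorded office tour to pick a destination, and a separate algorithm turns that destination into concrete robot actions. The sketch below illustrates that division of labor in minimal Python. All names here are illustrative assumptions, not Google's actual API: `select_goal` stands in for the Gemini reasoning step (here reduced to keyword matching against frame captions), and `plan_actions` stands in for the low-level action generator.

```python
# Hypothetical sketch of the two-stage pipeline: a vision-language
# model selects a goal from a pre-recorded tour, then a navigation
# policy emits low-level actions. Names and logic are illustrative.

from dataclasses import dataclass


@dataclass
class TourFrame:
    frame_id: int
    description: str        # caption of what is visible in this frame
    position: tuple         # (x, y) map coordinates logged during the tour


def select_goal(instruction: str, tour: list) -> TourFrame:
    """Stand-in for the multimodal reasoning step: score each tour
    frame against the instruction and return the best match. A real
    system would send the frames and instruction to the model."""
    words = set(instruction.lower().split())

    def score(frame: TourFrame) -> int:
        return len(words & set(frame.description.lower().split()))

    return max(tour, key=score)


def plan_actions(start: tuple, goal: TourFrame) -> list:
    """Stand-in for the action generator: emit simple move commands
    toward the goal frame's recorded coordinates."""
    actions = []
    x, y = start
    gx, gy = goal.position
    if gx != x:
        actions.append(f"move_x:{gx - x}")
    if gy != y:
        actions.append(f"move_y:{gy - y}")
    actions.append("stop")
    return actions


tour = [
    TourFrame(0, "kitchen area with coffee machine", (0, 5)),
    TourFrame(1, "whiteboard near the meeting room", (8, 2)),
    TourFrame(2, "desks with monitors", (4, 4)),
]

goal = select_goal("find me somewhere to write on a whiteboard", tour)
actions = plan_actions((0, 0), goal)
```

The point of the split is that the language model only has to answer "where should I go?", while motion stays with a conventional navigation stack, which is also why the researchers can swap the same reasoning layer onto different robot bodies.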

A growing trend in AI-powered robotics research: Google DeepMind’s demonstration is part of a larger movement in both academia and industry to explore how large language models can enhance the capabilities of physical machines.

  • The recent International Conference on Robotics and Automation featured nearly two dozen papers on using vision-language models in robotics.
  • Startups like Physical Intelligence and Skild AI have raised significant funding to develop robots with general problem-solving abilities by combining large language models with real-world training.

Analyzing Deeper: While the Google DeepMind robot showcases impressive navigation and reasoning skills, it operates within the controlled environment of an office space. Adapting this technology to more complex and unpredictable real-world settings will likely present additional challenges. Moreover, as language models become increasingly integral to robotics, ensuring the safety, reliability, and transparency of these systems will be crucial. Nonetheless, the rapid advancements in AI-powered robotics hint at a future where intelligent machines can more seamlessly assist and collaborate with humans in various domains.

Google DeepMind's Chatbot-Powered Robot Is Part of a Bigger Revolution
