Google’s Gemini-Powered Robot Navigates Offices, Follows Complex Commands

Google DeepMind has unveiled a chatbot-powered robot capable of navigating an office environment and following complex verbal and visual instructions, demonstrating the potential for large language models to enable more intelligent and useful physical machines.

Gemini chatbot upgrade enables advanced robot capabilities: Google DeepMind’s robot leverages the latest version of the company’s Gemini large language model to understand commands and navigate its surroundings:

  • The robot can parse complex verbal instructions like “Find me somewhere to write” and lead a person to an appropriate location, such as a whiteboard.
  • Gemini’s ability to handle video and text input, combined with pre-recorded video tours of the office, allows the robot to reason about its environment and navigate accurately.
  • When given a command like “Where did I leave my coaster?”, the robot proved up to 90% reliable at finding the correct location.

Integrating language models with robotics algorithms: The Google helper robot combines the Gemini language model with an algorithm that generates specific actions for the robot to take in response to commands and its visual input.

  • This integration of natural language processing and robotics enables more intuitive human-robot interaction and greatly improves the robot’s usability.
  • Researchers plan to test the system on different types of robots and believe Gemini will be able to handle even more complex questions that require contextual understanding.
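The two-layer design this section describes, where a language model names a goal and a separate algorithm generates the concrete actions to reach it, can be sketched as follows. The office map, room names, and breadth-first search are all assumptions for illustration, not details of Google's implementation.

```python
from collections import deque

# Hypothetical sketch of the two-layer integration: a language-model "planner"
# names a goal location, and a separate navigation algorithm (here, BFS over a
# topological map) turns that goal into a sequence of movement actions.

OFFICE_MAP = {
    "lobby": ["hallway"],
    "hallway": ["lobby", "kitchen", "whiteboard_wall"],
    "kitchen": ["hallway"],
    "whiteboard_wall": ["hallway"],
}

def plan_actions(start: str, goal: str) -> list[str]:
    """Breadth-first search over the map; emit one move action per edge."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return [f"move_to({room})" for room in path[1:]]
        for nxt in OFFICE_MAP[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(plan_actions("lobby", "whiteboard_wall"))
# ['move_to(hallway)', 'move_to(whiteboard_wall)']
```

Separating the layers this way is what makes the interaction feel intuitive: the language model handles the open-ended request, while the action generator only ever deals with well-defined goals it knows how to reach.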

A growing trend in AI-powered robotics research: Google DeepMind’s demonstration is part of a larger movement in both academia and industry to explore how large language models can enhance the capabilities of physical machines.

  • The recent International Conference on Robotics and Automation featured nearly two dozen papers on using vision-language models in robotics.
  • Startups like Physical Intelligence and Skild AI have raised significant funding to develop robots with general problem-solving abilities by combining large language models with real-world training.

Analyzing Deeper: While the Google DeepMind robot showcases impressive navigation and reasoning skills, it operates within the controlled environment of an office space. Adapting this technology to more complex and unpredictable real-world settings will likely present additional challenges. Moreover, as language models become increasingly integral to robotics, ensuring the safety, reliability, and transparency of these systems will be crucial. Nonetheless, the rapid advancements in AI-powered robotics hint at a future where intelligent machines can more seamlessly assist and collaborate with humans in various domains.

Source: Google DeepMind's Chatbot-Powered Robot Is Part of a Bigger Revolution (Wired)
