Google’s AI robots are making significant strides in learning by watching videos, showing how advanced language models like Gemini 1.5 Pro could transform robotics.

Video training enables robots to navigate and complete tasks: By watching video tours of designated areas, Google’s RT-2 robots equipped with the Gemini 1.5 Pro AI model can learn about their environment and carry out requests at their destination:

  • The model’s long context window lets it ingest an entire video tour at once, so the robot can draw on details absorbed from the footage when carrying out tasks.
  • In practical tests within a 9,000-square-foot area, the Gemini-powered robots followed more than 50 different user instructions with a 90% success rate.
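
The "long context window" point above can be made concrete with a back-of-envelope check: does a full video tour fit in a single model context? The 1M-token context size is Gemini 1.5 Pro's published figure; the per-frame token cost and sampling rate below are illustrative assumptions, not specifics from the article.

```python
# Rough sketch: estimate whether a video tour fits in one long context.
CONTEXT_WINDOW_TOKENS = 1_000_000   # Gemini 1.5 Pro's advertised context size
TOKENS_PER_FRAME = 258              # assumed token cost of one sampled frame
FRAMES_PER_SECOND = 1               # assumed video sampling rate

def tour_fits_in_context(tour_seconds: int, prompt_tokens: int = 2_000) -> bool:
    """Return True if a tour of the given length fits in a single context."""
    video_tokens = tour_seconds * FRAMES_PER_SECOND * TOKENS_PER_FRAME
    return video_tokens + prompt_tokens <= CONTEXT_WINDOW_TOKENS

# A 30-minute walkthrough (1,800 frames, roughly 464,000 tokens) fits easily:
print(tour_fits_in_context(30 * 60))  # → True
```

Under these assumed rates, even a half-hour tour of a large office consumes less than half the window, which is what lets the robot "remember" the whole space at once rather than in fragments.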

Multi-step task planning demonstrates advanced understanding: One notable aspect of the Gemini 1.5 Pro model is its ability to plan and execute multi-step tasks, going beyond simple single-step orders:

  • For example, the robots can answer questions like whether a specific drink is available by navigating to a fridge, visually processing the contents, and returning with the answer.
  • Completing such errands requires chaining navigation, perception, and reporting, which indicates the model can plan and carry out sequences of actions rather than merely react to single commands.
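
The plan-then-execute loop described above can be sketched in a few lines. This is a minimal stand-in, not DeepMind's actual system: the hardcoded plan and the stubbed fridge contents are hypothetical placeholders for what Gemini 1.5 Pro and the robot's sensors actually do.

```python
# Minimal sketch of a plan-then-execute loop for "Is there a <drink>?" requests.
FRIDGE_CONTENTS = {"Coke", "sparkling water"}   # stubbed perception result

def plan(request: str) -> list[tuple[str, str]]:
    """Decompose a user request into an ordered list of (action, argument) steps."""
    item = request.removeprefix("Is there a ").rstrip("?")
    return [("navigate", "fridge"), ("perceive", "contents"), ("report", item)]

def execute(steps: list[tuple[str, str]]) -> str:
    """Run each step in order, carrying perception results forward."""
    observed: set[str] = set()
    for action, arg in steps:
        if action == "navigate":
            pass                        # drive to the named location
        elif action == "perceive":
            observed = FRIDGE_CONTENTS  # visually process what is there
        elif action == "report":
            return f"Yes, there is a {arg}." if arg in observed else f"No {arg} found."
    return "Done."

print(execute(plan("Is there a Coke?")))  # → Yes, there is a Coke.
```

The key design point the article highlights is exactly this separation: the language model produces the multi-step plan, while the robot body executes each step and feeds observations back before the final answer.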

Potential real-world applications, but limitations remain: While not ready for consumer use, the integration of advanced AI models like Gemini 1.5 Pro into robotics could transform industries such as healthcare, shipping, and janitorial services in the future. However, challenges remain:

  • The robots currently take up to 30 seconds to process each instruction, much slower than a human performing the same task.
  • Real-world environments like homes and offices are far more chaotic and difficult for robots to navigate compared to controlled research settings.

Broader implications for AI-powered robotics: Despite current limitations, Google DeepMind’s research represents a significant leap forward in the field of AI-driven robotics:

  • Teaching robots to learn from videos in a manner reminiscent of human interns showcases the potential for more natural and intuitive human-robot interactions.
  • As AI language models continue to advance, their integration with robotics could unlock transformative applications across industries, automating complex tasks and revolutionizing how robots assist humans in everyday life.

Google's AI robots are learning from watching movies – just like the rest of us