Google’s AI Robots Learn from Videos, Hinting at Transformative Applications

Google’s AI robots are making significant strides in learning by watching videos, demonstrating how advanced multimodal models like Gemini 1.5 Pro could unlock transformative applications in robotics.

Video training enables robots to navigate and complete tasks: By watching video tours of designated areas, Google’s RT-2 robots, equipped with the Gemini 1.5 Pro AI model, can learn about their environment and carry out requests once they reach the right location:

  • The AI’s long context window lets it take in an entire video tour at once, so the robot can draw on details from the footage when carrying out tasks (a simplified sketch of this flow follows this list).
  • In practical tests across a 9,000-square-foot area, the Gemini-powered robots followed more than 50 different user instructions with a 90% success rate.
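
To make the video-tour idea concrete, here is a minimal, hypothetical Python sketch of the flow: a long-context model receives an entire recorded tour plus a user instruction in a single prompt. Every name below (Frame, VisionLanguageModel) is an invented stand-in, not Google’s actual API, and a real system would attend over raw video frames rather than text captions.

```python
# Hypothetical sketch only: Frame and VisionLanguageModel are illustrative
# stand-ins, not Google's actual interfaces.
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float   # seconds into the recorded tour
    caption: str       # what the camera saw (a text stand-in for pixels)

class VisionLanguageModel:
    """Stub for a long-context multimodal model such as Gemini 1.5 Pro.

    The long context window is what lets a real model hold an entire
    building tour, frame by frame, inside a single prompt.
    """
    def query(self, tour: list[Frame], instruction: str) -> str:
        # Toy matching: find a tour frame relevant to the instruction.
        words = set(instruction.lower().split())
        for frame in tour:
            if words & set(frame.caption.lower().split()):
                return f"Target seen at t={frame.timestamp}s: {frame.caption}"
        return "Target not found in the tour."

tour = [
    Frame(12.0, "hallway with elevators"),
    Frame(47.5, "kitchen area with a fridge and a whiteboard"),
]
print(VisionLanguageModel().query(tour, "take me to the whiteboard"))
```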

Multi-step task planning demonstrates advanced understanding: One notable aspect of the Gemini 1.5 Pro model is its ability to plan and execute multi-step tasks, going beyond simple single-step orders:

  • For example, the robots can answer whether a specific drink is available by navigating to a fridge, visually inspecting its contents, and returning with the answer (a toy version of this loop is sketched after this list).
  • Completing such errands shows the model can plan a sequence of actions and carry it through, rather than merely reacting to one command at a time.
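
As a rough illustration of that plan-and-act loop, the sketch below walks a toy robot through the fridge errand. The Robot class, its methods, and the step structure are invented for this example and do not reflect DeepMind’s actual control stack.

```python
# Hypothetical sketch: the Robot class and its methods are invented for
# illustration and do not reflect DeepMind's actual implementation.

class Robot:
    def __init__(self, world: dict[str, list[str]]):
        self.world = world        # location -> objects visible there
        self.location = "start"

    def navigate(self, place: str) -> None:
        self.location = place     # drive to the target location

    def look(self) -> list[str]:
        return self.world.get(self.location, [])  # perceive the current scene

def is_drink_available(robot: Robot, drink: str) -> str:
    # The multi-step plan: navigate, perceive, then return with an answer,
    # instead of replying from wherever the robot happens to be standing.
    robot.navigate("fridge")
    seen = robot.look()
    robot.navigate("start")      # come back to the user to report
    return f"Yes, there is {drink}." if drink in seen else f"No {drink} in the fridge."

robot = Robot({"fridge": ["cola", "orange juice"]})
print(is_drink_available(robot, "cola"))   # -> Yes, there is cola.
```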

Potential real-world applications, but limitations remain: While not ready for consumer use, the integration of advanced AI models like Gemini 1.5 Pro into robotics could eventually transform industries such as healthcare, shipping, and janitorial services. However, challenges remain:

  • The robots currently take up to 30 seconds to process each instruction, much slower than a human performing the same task.
  • Real-world environments like homes and offices are far more chaotic and difficult for robots to navigate than controlled research settings.

Broader implications for AI-powered robotics: Despite current limitations, Google DeepMind’s research represents a significant leap forward in the field of AI-driven robotics:

  • Teaching robots to learn from videos, much as a new intern learns from being shown around, showcases the potential for more natural and intuitive human-robot interactions.
  • As AI language models continue to advance, their integration with robotics could unlock transformative applications across industries, automating complex tasks and revolutionizing how robots assist humans in everyday life.
