Google’s AI Robots Learn from Videos, Hinting at Transformative Applications

Google’s AI robots are making significant strides in learning by watching videos, demonstrating the potential for transformative applications of advanced language models like Gemini 1.5 Pro in robotics.

Video training enables robots to navigate and complete tasks: By watching video tours of designated areas, Google’s RT-2 robots equipped with the Gemini 1.5 Pro AI model can learn about their environment and carry out requests at their destination:

  • The model’s long context window lets it process an entire video tour at once, so the robot can absorb environmental details and later act on that learned knowledge to complete tasks.
  • In practical tests within a 9,000-square-foot area, the Gemini-powered robots successfully followed over 50 different user instructions with a 90% success rate.

Multi-step task planning demonstrates advanced understanding: One notable aspect of the Gemini 1.5 Pro model is its ability to plan and execute multi-step tasks, going beyond simple single-step orders:

  • For example, the robots can answer questions like whether a specific drink is available by navigating to a fridge, visually processing the contents, and returning with the answer.
  • This demonstrates a level of understanding and execution that indicates the AI model’s capacity for planning and carrying out sequences of actions.

Potential real-world applications, but limitations remain: While not ready for consumer use, the integration of advanced AI models like Gemini 1.5 Pro into robotics could transform industries such as healthcare, shipping, and janitorial services in the future. However, challenges remain:

  • The robots currently take up to 30 seconds to process each instruction, far longer than a human would need for the same task.
  • Real-world environments like homes and offices are far more chaotic and difficult for robots to navigate compared to controlled research settings.

Broader implications for AI-powered robotics: Despite current limitations, Google DeepMind’s research represents a significant leap forward in the field of AI-driven robotics:

  • Teaching robots to learn from videos in a manner reminiscent of human interns showcases the potential for more natural and intuitive human-robot interactions.
  • As AI language models continue to advance, their integration with robotics could unlock transformative applications across industries, automating complex tasks and revolutionizing how robots assist humans in everyday life.
