Google’s AI Robots Learn from Videos, Hinting at Transformative Applications

Google’s AI robots are making significant strides in learning by watching videos, demonstrating the potential for transformative applications of advanced language models like Gemini 1.5 Pro in robotics.

Video training enables robots to navigate and complete tasks: By watching video tours of designated areas, Google’s RT-2 robots, equipped with the Gemini 1.5 Pro AI model, can learn about their environment and then navigate to a destination to carry out requests:

  • The AI’s long context window allows it to process extensive information simultaneously, enabling the robot to absorb details from the video to complete tasks based on its learned knowledge.
  • In practical tests within a 9,000-square-foot area, the Gemini-powered robots successfully followed over 50 different user instructions with a 90% success rate.

Multi-step task planning demonstrates advanced understanding: One notable aspect of the Gemini 1.5 Pro model is its ability to plan and execute multi-step tasks, going beyond simple single-step orders:

  • For example, the robots can answer questions like whether a specific drink is available by navigating to a fridge, visually processing the contents, and returning with the answer.
  • This demonstrates a level of understanding and execution that indicates the AI model’s capacity for planning and carrying out sequences of actions.
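The navigate-perceive-respond sequence described above can be sketched as a simple plan-and-execute loop. This is a purely illustrative toy, not Google DeepMind’s actual system: the `plan` and `execute` functions, the action names, and the simulated world are all hypothetical stand-ins for what the Gemini-powered planner does internally.

```python
# Illustrative sketch (hypothetical): decompose a drink-availability
# query into navigate -> perceive -> respond steps, then run the steps
# against a simulated environment.

def plan(request: str) -> list[tuple[str, str]]:
    """Toy planner: map a query like 'Is there any soda?' to a
    fixed three-step sequence targeting the fridge."""
    item = request.removeprefix("Is there any ").removesuffix("?")
    return [
        ("navigate", "fridge"),   # go to where the item might be
        ("perceive", item),       # look at what is actually there
        ("respond", item),        # report back to the user
    ]

def execute(steps: list[tuple[str, str]], world: dict) -> str:
    """Run the plan against a simulated world mapping locations
    to the objects visible there."""
    location, seen = None, set()
    for action, arg in steps:
        if action == "navigate":
            location = arg                       # move to the location
        elif action == "perceive":
            seen = set(world.get(location, []))  # observe local contents
        elif action == "respond":
            return "yes" if arg in seen else "no"
    return "unknown"

world = {"fridge": ["soda", "juice"]}
print(execute(plan("Is there any soda?"), world))  # prints "yes"
```

The point of the sketch is the separation of concerns: the planner turns one natural-language request into an ordered action sequence, and the executor grounds each action in the environment, which is the multi-step capability the article highlights.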

Potential real-world applications, but limitations remain: While not ready for consumer use, the integration of advanced AI models like Gemini 1.5 Pro into robotics could transform industries such as healthcare, shipping, and janitorial services in the future. However, challenges remain:

  • The robots currently take up to 30 seconds to process each instruction, much slower than a human performing the same task.
  • Real-world environments like homes and offices are far more chaotic and difficult for robots to navigate compared to controlled research settings.

Broader implications for AI-powered robotics: Despite current limitations, Google DeepMind’s research represents a significant leap forward in the field of AI-driven robotics:

  • Teaching robots to learn from videos in a manner reminiscent of human interns showcases the potential for more natural and intuitive human-robot interactions.
  • As AI language models continue to advance, their integration with robotics could unlock transformative applications across industries, automating complex tasks and revolutionizing how robots assist humans in everyday life.
