Google's AI robots are learning from watching movies – just like the rest of us

Google’s AI robots are making significant strides in learning by watching videos, demonstrating how advanced language models like Gemini 1.5 Pro could transform robotics.

Video training enables robots to navigate and complete tasks: By watching video tours of designated areas, Google’s RT-2 robots equipped with the Gemini 1.5 Pro AI model can learn about their environment and carry out requests at their destination:

  • The AI’s long context window allows it to process extensive information simultaneously, enabling the robot to absorb details from the video to complete tasks based on its learned knowledge.
  • In practical tests within a 9,000-square-foot area, the Gemini-powered robots successfully followed over 50 different user instructions with a 90% success rate.

Multi-step task planning demonstrates advanced understanding: One notable aspect of the Gemini 1.5 Pro model is its ability to plan and execute multi-step tasks, going beyond simple single-step orders:

  • For example, the robots can answer questions like whether a specific drink is available by navigating to a fridge, visually processing the contents, and returning with the answer.
  • This demonstrates a level of understanding and execution that indicates the AI model’s capacity for planning and carrying out sequences of actions.

Potential real-world applications, but limitations remain: While not ready for consumer use, the integration of advanced AI models like Gemini 1.5 Pro into robotics could eventually transform industries such as healthcare, shipping, and janitorial services. However, challenges remain:

  • The robots currently take up to 30 seconds to process each instruction, much slower than a human performing the same task.
  • Real-world environments like homes and offices are far more chaotic and difficult for robots to navigate compared to controlled research settings.

Broader implications for AI-powered robotics: Despite current limitations, Google DeepMind’s research represents a significant leap forward in the field of AI-driven robotics:

  • Teaching robots to learn from videos in a manner reminiscent of human interns showcases the potential for more natural and intuitive human-robot interactions.
  • As AI language models continue to advance, their integration with robotics could unlock transformative applications across industries, automating complex tasks and revolutionizing how robots assist humans in everyday life.