News/Physical AI

Apr 15, 2025

Hugging Face brings open-source revolution to humanoid robotics with Pollen acquisition

Hugging Face's acquisition of Pollen Robotics marks a significant step toward democratizing humanoid robotics through open-source development. By purchasing the company behind the two-armed Reachy 2 robot, Hugging Face is extending the open-source ethos that has accelerated AI progress into the physical robotics domain, potentially addressing the transparency challenges that have plagued recent humanoid robot demonstrations and development. The big picture: Hugging Face plans to sell Pollen Robotics' humanoid robot Reachy 2 while making its code openly available for developers to download, modify, and improve upon. "It's really important for robotics to be as open source as possible," says Clément...

Apr 4, 2025

Waymo robotaxis and woolly mice steal the spotlight at SXSW 2025

SXSW 2025 showcased unexpected AI frontrunners, with Waymo robotaxis via Uber stealing attention alongside Colossal Biosciences' genetically engineered Woolly Mouse. While many expected advanced AI assistants to dominate the conversation, transportation and biotech innovations captured the spotlight instead, reflecting how AI integration into consumer-facing technologies is accelerating across diverse sectors beyond digital assistants. Top AI Stars of SXSW 2025: 1. Waymo robotaxis through Uber: The autonomous taxis made their Austin debut through integration with the Uber app, marking a significant milestone in consumer-facing autonomous transportation. This practical application of AI technology demonstrated how self-driving vehicles are moving from theoretical showcases...

Mar 19, 2025

Swedish startup creates robot dog that learns like animals, not algorithms

Swedish startup IntuiCell has created a revolutionary robot dog called Luna with a digital nervous system that learns and adapts naturally like living organisms rather than relying on massive datasets or pre-training. This represents one of the first practical applications of physical agentic AI—artificial intelligence capable of making decisions and taking actions toward specific goals autonomously—and could transform how robots learn to navigate unpredictable environments, from space exploration to disaster response. The big picture: IntuiCell is pioneering an entirely different approach to robot learning by creating machines with nervous systems that learn through real-world interactions rather than pre-programmed responses or...

Mar 13, 2025

Princeton study: AI robots learn better with zero feedback during training

Just back off and let them figure it out? Princeton researchers have discovered a counterintuitive approach to AI training that challenges conventional wisdom in reinforcement learning. By giving simulated robots difficult tasks with absolutely no feedback—rather than incrementally rewarding progress—they found the AI systems naturally developed exploration skills and completed tasks more efficiently. This finding could significantly simplify AI training processes while potentially leading to more innovative problem-solving behaviors in artificial intelligence systems. The big picture: Princeton researchers found that AI robots learn better when given zero feedback during training, contradicting standard reinforcement learning practices that rely on rewards and...
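The teaser describes training with no task feedback at all. One common way to formalize reward-free exploration (an illustrative assumption here, not necessarily the Princeton team's method) is to let the only learning signal be an intrinsic novelty bonus, so the agent is drawn toward states it has rarely visited. A minimal sketch in a toy gridworld:

```python
import random
from collections import defaultdict

def explore_gridworld(size=5, episodes=50, steps=40, seed=0):
    """Reward-free exploration on a toy grid: the only learning signal
    is an intrinsic novelty bonus 1/sqrt(visit_count) -- no task reward."""
    rng = random.Random(seed)
    counts = defaultdict(int)   # state visitation counts
    q = defaultdict(float)      # Q(state, action) trained on the bonus alone
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(steps):
            # epsilon-greedy over the intrinsic-bonus Q-values
            if rng.random() < 0.2:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q[(s, i)])
            dx, dy = actions[a]
            nxt = (min(size - 1, max(0, s[0] + dx)),
                   min(size - 1, max(0, s[1] + dy)))
            counts[nxt] += 1
            bonus = counts[nxt] ** -0.5  # rarer states pay a larger bonus
            best_next = max(q[(nxt, i)] for i in range(4))
            q[(s, a)] += 0.5 * (bonus + 0.9 * best_next - q[(s, a)])
            s = nxt
    return len(counts)  # number of distinct states reached
```

Because the bonus decays as states are revisited, the agent keeps pushing into unvisited territory even though no external reward is ever given.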

Mar 13, 2025

Google DeepMind’s new AI models enable robots to understand, adapt to complex tasks on the fly

Google DeepMind is pushing the boundaries of robotics with new AI models designed to transform how robots interact with the physical world. These advances mark a crucial step toward bridging the gap between today's specialized industrial robots and future general-purpose robot assistants capable of understanding and adapting to complex environments autonomously. This development addresses one of the most challenging aspects of robotics: creating AI systems sophisticated enough to control robots safely through novel situations. The big picture: Google DeepMind has introduced two specialized AI models—Gemini Robotics and Gemini Robotics-ER—built on its Gemini 2.0 foundation to serve as sophisticated "brains" for...

Mar 5, 2025

AI job disruption coming faster than most people think, warns researcher

Don't blink: The rapid advancement of artificial intelligence is poised to transform the global workforce far more dramatically and swiftly than commonly believed. While many still view AI primarily through the lens of chatbots like ChatGPT, RethinkX's research director Adam Dorr warns that the technology's impact on employment will be profound and imminent, challenging conventional wisdom about the timeline of workplace automation. The big picture: AI and robotics are accelerating toward a tipping point that could fundamentally reshape the job market faster than most experts and workers anticipate. Key details: RethinkX's analysis suggests that AI-driven automation will disrupt employment across...

Feb 25, 2025

Take the AI hype down a notch, says expert Rodney Brooks

Are folks failing to question AI's capabilities for fear of looking like party poopers? In 2025, renowned roboticist and artificial intelligence expert Rodney Brooks continues to be a leading voice advocating for realistic expectations around AI capabilities. His perspective, shaped by decades of experience in robotics and AI development, offers a pragmatic counterpoint to widespread technological hyperbole. Key Analysis: Brooks identifies a concerning trend in tech discourse where professionals avoid questioning AI hype out of fear of appearing technologically pessimistic. He coined the term "FOBAWTPALSL" (Fear of Being a Wimpy Techno-Pessimist and Looking Stupid Later) to describe this phenomenon. This...

Feb 19, 2025

Humanoid robot Ameca debuts to public in Cornwall, avoids dreaded ‘uncanny valley’

The emergence of humanoid robots that can mimic human expressions and gestures has reached a new milestone with the public debut of Ameca at the Cornwall Festival of Tech in the UK. Built by Falmouth-based company Engineered Arts, this sophisticated robot showcases advanced facial expression capabilities while maintaining a deliberately non-realistic appearance to avoid the "uncanny valley" effect. Development and Design Philosophy: Engineered Arts focused on creating a robot that excels at non-verbal communication while serving as a platform for artificial intelligence development. CEO Will Jackson emphasizes the importance of facial expressions as high-bandwidth communication tools, enabling the robot to...

Feb 14, 2025

Bot by bot, AI humanoids move forward with $350M investment for Apptronik

Humanoid robots have been a focus of technological development for years, with companies working to create AI-powered assistants that can work alongside humans. Apptronik, a robotics lab founded in 2016, has made significant progress with its human-sized robot Apollo, securing major funding to accelerate its deployment. Major funding milestone: Apptronik has secured a $350 million Series A funding round co-led by B Capital and Capital Factory, with participation from Google's AI lab DeepMind. The funding will support Apollo's deployment, company expansion, and continued innovation. The investment reflects growing confidence in humanoid robotics as a solution for various societal challenges. B...

Feb 8, 2025

Apple’s ELEGNT AI aims to make home robots feel like companions

Apple has developed a new framework called ELEGNT that enables robots to move more naturally and expressively when interacting with humans, potentially making them more engaging as home assistants. The breakthrough technology: The Expressive and Functional Movement Design (ELEGNT) framework allows non-humanoid robots to communicate intentions, emotions, and attitudes through their movements while performing tasks. The system was tested using a lamp-like robot with a 6-axis robotic arm and a head containing a light and projector. Researchers programmed both functional movements for completing tasks and expressive movements to convey the robot's internal state. The design was inspired by animated characters,...

Feb 7, 2025

Hugging Face’s new open-source AI model lets robots follow verbal commands

Hugging Face and Physical Intelligence have launched Pi0, a groundbreaking open-source foundational model that enables robots to translate natural language commands directly into physical actions. The breakthrough explained: Pi0 represents the first widely available foundation model for robots that can understand and execute verbal commands, similar to how ChatGPT processes text. The model operates on Hugging Face's LeRobot platform and can handle complex tasks like folding laundry, bussing tables, and packing groceries. Pi0 was trained using data from seven different robotic platforms across 68 unique tasks. The technology employs flow matching to generate smooth, real-time action trajectories at 50Hz,...
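Flow matching, mentioned above, trains a model to predict the velocity that carries noise samples toward data samples along straight-line paths; actions are then generated by integrating that learned velocity field. The sketch below is a deliberately tiny 1-D illustration with a linear model standing in for Pi0's actual network (an assumption for clarity, not the real architecture):

```python
import numpy as np

def train_flow_matching(n=512, steps=300, lr=0.01, seed=0):
    """Minimal flow-matching sketch: regress a linear vector field
    v(x_t, t) onto the straight-line target velocity x1 - x0."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(0.0, 1.0, n)   # noise samples
    x1 = rng.normal(2.0, 0.1, n)   # stand-in "action" data
    t = rng.uniform(0.0, 1.0, n)
    xt = (1 - t) * x0 + t * x1     # point on the straight noise->data path
    target = x1 - x0               # constant-velocity target along that path
    feats = np.stack([xt, t, np.ones(n)], axis=1)
    w = np.zeros(3)
    losses = []
    for _ in range(steps):
        pred = feats @ w
        err = pred - target
        losses.append(float(np.mean(err ** 2)))
        w -= lr * 2 * feats.T @ err / n  # full-batch gradient step on MSE
    return losses

losses = train_flow_matching()
```

In a real policy like Pi0, the linear model is replaced by a large network conditioned on observations and language, and the field is integrated at inference time to emit action chunks.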

Jan 29, 2025

Berkeley researchers develop new AI system that trains robots to master complex skills

Berkeley researchers have developed an AI-powered training system that enables robots to master complex tasks like Jenga whipping and motherboard assembly with 100% accuracy in just hours. Key innovation: UC Berkeley's Robotic AI and Learning Lab has created a novel training method combining human demonstration, feedback, and real-world practice to teach robots intricate tasks. The system achieves perfect success rates for complicated tasks including Jenga whipping, egg flipping, and electronics assembly. Training time is remarkably efficient, with robots mastering new skills within one to two hours. The method uses reinforcement learning, where robots learn from both successes and failures in...

Jan 24, 2025

Why ‘Physical AI’ is lauded as the next major frontier for AI

Physical AI represents a significant advancement in artificial intelligence, combining machine learning with real-world physical interactions and control. Core concept explained: Physical AI, also known as Generative Physical AI, extends beyond traditional AI by incorporating direct interaction with and understanding of the physical world. This new approach aims to bridge the gap between AI's current text-based knowledge and the kind of intuitive physical understanding that humans develop through real-world experience. Physical AI systems are being designed to control machines, robots, and other physical devices with greater sophistication and real-world awareness. The technology builds upon existing generative AI capabilities while adding...

Jan 22, 2025

Physical AI merging intelligence and robotics to revolutionize real-world interactions

Physical AI represents a new frontier where digital intelligence merges with mechanical systems, enabling robots to interact intelligently with the physical world through sophisticated algorithms and precise movements. The fundamentals: Physical AI combines artificial intelligence with robotics to mimic both human intellect and physical capabilities, using neural networks that translate computational frameworks into mechanical actions. The system architecture focuses on replicating human-like decision-making, pattern recognition, and coordinated physical movements. Neural networks process and convert data into mechanical actions within milliseconds. Unlike traditional robots, Physical AI systems can adapt to unpredictable physical interactions and environmental variables. Technical architecture: Modern Physical AI...

Jan 22, 2025

NVIDIA’s Omniverse: How OpenUSD workflows advance physical AI for robotics and vehicles

Core innovation: NVIDIA recently unveiled Cosmos, a platform of generative world foundation models designed to accelerate the development of physical AI systems through advanced simulation and synthetic data generation. The platform includes state-of-the-art models, tokenizers, guardrails, and video processing capabilities specifically built for physical AI applications. Cosmos enables the creation of detailed virtual environments that incorporate real-world physics, spatial relationships, and cause-and-effect principles. The technology is particularly focused on applications in robotics, autonomous vehicles, and vision AI systems. Technical capabilities: When integrated with NVIDIA Omniverse and powered by OpenUSD, Cosmos creates a powerful synthetic data generation engine for AI development....

Jan 21, 2025

Why robots are struggling to match the dexterity of human hands

The continued advancement of AI-powered robotics is bringing machines closer to matching human dexterity, though significant challenges remain in replicating the complexity of natural hand movements in particular. The complexity of human hands: The human hand contains over 30 muscles, 27 joints, and 17,000 touch receptors, enabling an extraordinary range of precise movements and sensory capabilities. The intricate network of ligaments and tendons provides 27 degrees of freedom, allowing for complex manipulations and fine motor control. Even simple tasks like picking up a pen require seamless integration between sensory feedback and motor control. The development of hand dexterity begins in...

Jan 20, 2025

Man with paralysis flies virtual drone using brain implant

A paralyzed man successfully piloted a virtual drone through thought alone using a brain-computer interface and AI-powered signal interpretation technology. The breakthrough technology: A brain-computer interface with 192 implanted electrodes allows the user to control a virtual drone by imagining finger movements. The system was developed by researchers at the University of Michigan, led by Matthew Willsey. An anonymous participant with tetraplegia, who had previously received a Blackrock Neurotech brain implant, demonstrated the technology. The interface translates brain signals from imagined finger movements into four distinct control inputs for drone operation. How it works: An AI model interprets complex neural...
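A common baseline for mapping multi-electrode features to a handful of continuous control outputs is a linear decoder fit by ridge regression. The sketch below is purely illustrative, using simulated data with the article's dimensions (192 channels, 4 control inputs); it does not reflect the Michigan team's actual model:

```python
import numpy as np

def fit_decoder(n_channels=192, n_outputs=4, n_samples=2000, lam=1.0, seed=0):
    """Toy linear decoder: map simulated electrode features to four
    continuous control inputs via ridge regression (illustrative only)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, n_channels))        # neural features
    W_true = rng.normal(size=(n_channels, n_outputs))   # hidden mapping
    Y = X @ W_true + 0.1 * rng.normal(size=(n_samples, n_outputs))
    # ridge solution: (X^T X + lam * I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)
    return float(np.mean((X @ W - Y) ** 2))  # training MSE

```

Real BCI decoders typically add temporal filtering and recalibration, but the core idea of regressing control signals onto electrode features is the same.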

Jan 18, 2025

This humanoid robot learned to waltz by mirroring human movements

A new AI system called ExBody2 enables humanoid robots to mirror human movements with unprecedented fluidity, allowing them to perform complex actions like dancing, walking, and fighting moves. Key innovation: Researchers at the University of California, San Diego have developed an AI system that helps robots learn and replicate human movements more naturally than traditional pre-programmed sequences. The system uses motion capture recordings from hundreds of human volunteers to build a comprehensive database of movements. ExBody2 employs reinforcement learning to teach robots how to perform various actions through trial and error. The AI first learns using complete virtual robot data...
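Motion-imitation RL systems typically score the robot against the reference motion-capture pose with an exponentiated tracking error, so the reward is 1.0 for perfect mirroring and decays smoothly as the robot drifts. This is a DeepMimic-style formulation offered as a sketch; ExBody2's exact reward terms may differ:

```python
import math

def tracking_reward(robot_pose, ref_pose, k=2.0):
    """Imitation reward: exp(-k * squared pose error). Returns 1.0 for
    perfect tracking and decays toward 0 as the error grows. The scale
    factor k and the flat pose vectors are illustrative assumptions."""
    err = sum((a - b) ** 2 for a, b in zip(robot_pose, ref_pose))
    return math.exp(-k * err)
```

During trial-and-error training, the policy is updated to maximize this reward summed over the motion clip, which is what pushes the robot's joints toward the human volunteer's recorded trajectory.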

Jan 9, 2025

Why ‘World Foundation Models’ are key to unlocking Physical AI and robotics

NVIDIA has unveiled Cosmos, a new platform featuring world foundation models (WFMs) designed to advance physical AI systems through enhanced environmental simulation capabilities. The core technology: World foundation models are neural networks that can simulate physical environments and predict how scenes will evolve based on various inputs and actions. These models can generate detailed videos from text or image inputs while predicting scene evolution through a combination of current state data and control signals. WFMs provide virtual 3D environments for testing AI systems without the risks and costs of real-world trials. The technology enables the generation of synthetic training data,...
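The predict-from-state-and-controls loop a world model supports can be sketched generically: given a current state and a sequence of control signals, roll the model forward to obtain a predicted trajectory. The one-line `toy_model` here is a stand-in for illustration, not Cosmos:

```python
def rollout(world_model, state, controls):
    """Roll a world model forward: from a current state and a sequence
    of control signals, predict the trajectory of future states."""
    traj = [state]
    for u in controls:
        state = world_model(state, u)  # model predicts the next state
        traj.append(state)
    return traj

# toy 1-D "world": position simply integrates velocity commands
toy_model = lambda s, u: s + u
path = rollout(toy_model, 0.0, [1.0, 1.0, -0.5])
```

A real WFM replaces `toy_model` with a learned, physics-aware video model, but the interface (state in, control in, predicted next state out) is the same, which is why the models can serve as virtual test environments.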

Jan 8, 2025

Nvidia’s Mega Omniverse framework will be a boon to industrial robot fleets

The Nvidia Omniverse "Mega" Blueprint, announced at CES 2025, introduces a comprehensive framework for creating digital twins of industrial robot fleets. Core technology announcement: Nvidia's Mega framework enables companies to develop, test, and optimize AI-powered robot fleets in virtual environments before real-world implementation. The system leverages Omniverse Cloud Sensor RTX APIs to deliver high-fidelity sensor simulation at scale. Digital twins can be created using various data sources including CAD files, video feeds, lidar scans, and AI-generated content. The framework integrates with Nvidia Isaac ROS for testing AI robot capabilities across unlimited virtual scenarios. Key industry partnerships: Supply chain leader Kion...

Jan 8, 2025

NVIDIA makes its Cosmos World Foundation Models openly available to physical AI developer community

At CES 2025, NVIDIA released a suite of open-source world foundation models called Cosmos to accelerate the development of physical AI applications in robotics and autonomous vehicles. Core announcement: NVIDIA's Cosmos platform introduces world foundation models (WFMs) that can predict and generate physics-aware videos of virtual environments, making advanced AI development more accessible to developers of all sizes. The models are being released under NVIDIA's permissive open model license, allowing for commercial usage. These models have been trained on 9,000 trillion tokens from 20 million hours of real-world data. Leading companies including Uber, Waabi, and Agility Robotics are...

Jan 6, 2025

MIT breakthrough gives robots ability to design, 3D print and understand physical environments

MIT's AI Lab Director Daniela Rus outlines how artificial intelligence is poised to make significant advances in physical world applications through the development of "physical intelligence" in 2025. The emerging frontier: Physical intelligence represents a fusion of digital AI capabilities with robotics, designed to help machines understand and interact with the real world in ways current AI systems cannot. Traditional AI models excel at generating digital content but struggle with real-world applications like self-driving cars due to their lack of physical understanding. Physical intelligence systems are specifically designed to understand physics and cause-and-effect relationships and to adapt to dynamic environments. This new...

Dec 30, 2024

How a Bay Area artist is combining AI and dance to allay fears about robots replacing humans

San Francisco's Exploratorium artist-in-residence Catie Cuan is combining dance, robotics, and artificial intelligence to create innovative human-robot interactions and performances. Background and expertise: Catie Cuan brings a unique combination of professional dance experience and mechanical engineering expertise to her work at the Exploratorium. Cuan earned her doctorate in mechanical engineering from Stanford University, focusing her thesis on choreorobotics. She describes her work as a natural fusion of her passions for mathematics and dance. The Exploratorium selected Cuan as part of their 50th anniversary Artist in Residence program. Current projects and innovations: During her two-year residency that began in 2023, Cuan...

Dec 16, 2024

AI pioneer Fei-Fei Li’s mission to unlock advanced AI through ‘spatial intelligence’

Background and historical context: Stanford professor Fei-Fei Li, known for creating the groundbreaking ImageNet dataset, has launched World Labs, a startup focused on developing AI systems with sophisticated spatial awareness and 3D understanding. Li's ImageNet project and its 2012 competition marked a turning point in AI history when the neural network AlexNet demonstrated unprecedented object recognition capabilities. This breakthrough helped catalyze the deep learning revolution, leveraging vast internet training data and GPU computing power. Li also cofounded Stanford's Institute for Human-Centered AI (HAI) to advance computer vision research. Core technology and innovation: World Labs is developing AI systems that can...

read