Open-source robot brain SPEAR-1 enhances industrial robots with 3D vision

European researchers have released SPEAR-1, an open-source AI model that serves as a “brain” for industrial robots, enabling them to grasp and manipulate objects with enhanced dexterity. The model incorporates 3D data into its training, giving it a superior understanding of physical space compared to existing robot foundation models that rely primarily on 2D image data.

What you should know: SPEAR-1 represents a significant advancement in making robot intelligence more accessible through open-source development.

  • The model was developed by researchers at the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT) in Bulgaria.
  • It performs roughly as well as commercial foundation models on RoboArena, a benchmark testing robots’ ability to perform tasks like squeezing ketchup bottles, closing drawers, and stapling papers.
  • SPEAR-1’s performance rivals Pi-0.5 from Physical Intelligence, a billion-dollar startup founded by leading robotics researchers.

The big picture: Just as open-source language models democratized generative AI experimentation, SPEAR-1 could accelerate innovation in robotics by giving researchers and startups access to powerful robot intelligence tools.

  • The commercial robotics race has already attracted billions of dollars, with startups including Skild, Generalist, and Physical Intelligence competing to build generally capable robots.
  • Current robot intelligence remains limited—models typically need complete retraining when switching robot arms or changing objects and environments.

How it works: SPEAR-1’s key innovation lies in incorporating 3D spatial data into its training process, addressing a fundamental mismatch in existing approaches.

  • Traditional robot foundation models are built on vision language models (VLMs) that primarily learn from labeled 2D images, limiting their understanding of physical space.
  • “Our approach tackles the mismatch between the 3D space the robot operates in and the knowledge of the VLM that forms the core of the robotic foundation model,” says Martin Vechev, a computer scientist at INSAIT and ETH Zurich.

In plain English: Most robot AI systems today learn about the world the same way humans learn from photographs—they see flat, 2D images and try to understand how objects work in real 3D space. It’s like trying to learn to drive by only looking at pictures of roads instead of actually experiencing depth, distance, and movement. SPEAR-1 trains on actual 3D data, giving it a more realistic understanding of how objects move and interact in physical space.
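To make the 2D-versus-3D distinction concrete, here is a minimal, hypothetical sketch of the general pattern the article describes: a policy that fuses features from ordinary camera images with features from depth-derived 3D point clouds before predicting a robot action. This is an illustrative toy in PyTorch, not SPEAR-1's actual architecture; the module names, layer sizes, PointNet-style point-cloud encoder, and simple concatenation fusion are all assumptions made for the example.

```python
# Illustrative sketch only -- NOT SPEAR-1's real architecture.
# Shows the general idea of fusing 2D image features with 3D point-cloud
# features before predicting a low-level robot action.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling over points."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, points):                 # points: (batch, num_points, 3)
        feats = self.mlp(points)               # (batch, num_points, out_dim)
        return feats.max(dim=1).values         # (batch, out_dim) global 3D feature

class ImageEncoder(nn.Module):
    """Small CNN standing in for the 2D visual backbone of a VLM."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, images):                 # images: (batch, 3, H, W)
        return self.net(images)

class Fused3DPolicy(nn.Module):
    """Concatenates 2D and 3D features, then regresses an action vector."""
    def __init__(self, action_dim=7):
        super().__init__()
        self.img_enc = ImageEncoder()
        self.pcd_enc = PointCloudEncoder()
        self.head = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, action_dim),        # e.g. end-effector pose delta + gripper
        )

    def forward(self, images, points):
        fused = torch.cat([self.img_enc(images), self.pcd_enc(points)], dim=-1)
        return self.head(fused)

if __name__ == "__main__":
    policy = Fused3DPolicy()
    rgb = torch.randn(2, 3, 224, 224)          # a batch of camera frames
    cloud = torch.randn(2, 1024, 3)            # a batch of depth-derived point clouds
    print(policy(rgb, cloud).shape)            # torch.Size([2, 7])
```

Even in this toy form, the point-cloud branch gives the policy an explicit notion of distance and geometry that a purely 2D image encoder would have to infer indirectly, which is the gap the SPEAR-1 researchers say they are addressing.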

What they’re saying: Industry experts see SPEAR-1 as evidence of rapid progress in robotic AI capabilities.

  • “Open-weight models are crucial for advancing embodied AI,” Vechev told WIRED ahead of the release.
  • Karl Pertsch from Physical Intelligence noted the significance of academic groups building general robotic policies: “It’s really cool to see academic groups building quite general policies that can actually be evaluated across a diverse set of environments out-of-the-box, and [can] achieve non-trivial performance. This was not possible even a year ago.”

Why this matters: The development suggests that robot intelligence may follow a dual path similar to language models, with both closed commercial systems and open-source alternatives driving innovation forward.

  • Robotics researchers hope that the same recipe behind large language models—massive training data and compute power—will eventually produce robots capable of quickly adapting to new situations and environments.
  • Such advances could eventually enable humanoid robots to operate effectively in messy, unfamiliar real-world settings through general understanding of how the physical world works.