Google DeepMind Unveils Gemini Robotics-ER 1.6 to Enhance Autonomous Embodied Reasoning

Google DeepMind has launched Gemini Robotics-ER 1.6, a specialized reasoning model designed to give physical robots unprecedented spatial intelligence and autonomy.

NewDecoded

Published Apr 16, 2026

3 min read

Image by Google

Google DeepMind announced the release of Gemini Robotics-ER 1.6 on April 14, 2026. This new model acts as a high-level orchestration engine that enables robots to understand complex physical environments with far greater precision. By focusing on embodied reasoning, the system allows machines to move beyond simple instruction following and make sophisticated logical decisions in real-world settings.

Breakthroughs in Spatial Logic

The update introduces significant improvements in spatial logic, specifically through advanced pointing and counting capabilities. Gemini Robotics-ER 1.6 uses precision pointing as an intermediate reasoning step to map trajectories and identify optimal grasp points. The release also significantly reduces hallucinations, ensuring robots do not attempt to interact with objects that are absent from their immediate view.

A major hurdle in robotics is knowing exactly when a task has been completed successfully. The 1.6 model addresses this with multi-view success detection, aggregating data from several camera feeds, such as overhead and wrist-mounted sensors. This allows the agent to decide intelligently whether to progress with a plan or retry a failed attempt, even in dynamic or occluded spaces.
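DeepMind has not published how the model combines views, but the proceed-or-retry decision described above can be sketched with simple aggregation logic. Everything here is hypothetical: the class names, thresholds, and the idea of averaging per-view confidences are illustrative assumptions, not details of the actual system.

```python
from dataclasses import dataclass

@dataclass
class ViewVerdict:
    """Success estimate from one camera feed (e.g. overhead or wrist-mounted)."""
    camera: str
    success_prob: float  # confidence that the current task step is complete

def decide_next_action(verdicts, proceed_threshold=0.8, retry_threshold=0.4):
    """Average per-view confidences into one decision: proceed when the
    aggregate is high, retry when it is clearly low, and keep observing
    in the ambiguous middle band (e.g. when one view is occluded)."""
    if not verdicts:
        return "observe"
    avg = sum(v.success_prob for v in verdicts) / len(verdicts)
    if avg >= proceed_threshold:
        return "proceed"
    if avg <= retry_threshold:
        return "retry"
    return "observe"

# Example: the overhead camera sees the object placed, but the
# wrist-mounted camera is partially occluded and less certain.
verdicts = [ViewVerdict("overhead", 0.9), ViewVerdict("wrist", 0.55)]
print(decide_next_action(verdicts))  # aggregate 0.725 -> "observe"
```

The middle "observe" band is what makes this robust in occluded spaces: rather than committing to a retry on conflicting evidence, the agent can reposition and gather another view first.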

Industrial Innovation with Boston Dynamics

In collaboration with Boston Dynamics, DeepMind developed a breakthrough instrument-reading capability. Using a process called agentic vision, the model combines visual reasoning with programmatic code execution to read analog gauges and sight glasses. This technology allows the Boston Dynamics Spot robot to monitor industrial facilities autonomously with up to 93 percent accuracy.

Safety remains a core priority, and DeepMind describes the 1.6 release as its safest robotics model to date. It adheres strictly to physical safety constraints, such as refusing to handle hazardous liquids or objects exceeding specific weight limits. Tests on the ASIMOV benchmark showed a 10 percent improvement in perceiving injury risks in video scenarios compared with previous general-purpose models.
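The split in agentic vision is that the model handles perception (locating the needle's pivot and tip in the image) while deterministic code handles the arithmetic. A minimal sketch of the programmatic half is below; the gauge calibration values, function names, and the linear-interpolation approach are all illustrative assumptions, not the actual pipeline.

```python
import math

def needle_angle(pivot, tip):
    """Angle in degrees of the needle from pivot to tip, measured
    counter-clockwise from the positive x-axis in image coordinates."""
    dx = tip[0] - pivot[0]
    dy = pivot[1] - tip[1]  # flip y: image y grows downward
    return math.degrees(math.atan2(dy, dx))

def gauge_reading(needle_angle_deg, min_angle_deg, max_angle_deg,
                  min_value, max_value):
    """Map a needle angle to a value by linear interpolation between
    the dial's calibrated minimum and maximum positions."""
    frac = (needle_angle_deg - min_angle_deg) / (max_angle_deg - min_angle_deg)
    frac = max(0.0, min(1.0, frac))  # clamp to the dial's physical range
    return min_value + frac * (max_value - min_value)

# Example: a hypothetical 0-10 bar pressure gauge whose dial sweeps from
# 225 deg (0 bar) to -45 deg (10 bar); the needle points straight up.
angle = needle_angle(pivot=(100, 100), tip=(100, 40))
print(gauge_reading(angle, 225, -45, 0.0, 10.0))  # 90 deg is halfway -> 5.0
```

Keeping the interpolation in code rather than asking the model to "eyeball" the dial is the plausible reason this approach reaches high accuracy: the model only has to localize two points, a far easier perceptual task than estimating a value directly.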

Developer Access and Collaboration

Developers can access Gemini Robotics-ER 1.6 starting today through the Gemini API and Google AI Studio. The launch includes a developer Colab with practical examples for configuring embodied reasoning tasks. DeepMind is also encouraging the community to submit specialized data to further refine the robustness of future releases.
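As an illustration of what calling the model through the Gemini API might look like, the sketch below builds a `generateContent` request payload for a pointing task. The request shape (a `contents` list of `parts` mixing inline image data and text) follows the Gemini API's REST schema, but the model identifier and the JSON pointing format in the prompt are assumptions based on this article and earlier Robotics-ER releases, not confirmed details of 1.6.

```python
import json

# Hypothetical model id; check Google's model list before using it.
MODEL = "gemini-robotics-er-1.6"

def build_pointing_request(instruction: str) -> dict:
    """Build a generateContent payload pairing a camera frame with a
    pointing instruction. The image data here is a placeholder string;
    a real request would base64-encode an actual JPEG frame."""
    return {
        "contents": [{
            "parts": [
                {"inline_data": {"mime_type": "image/jpeg",
                                 "data": "<base64 camera frame>"}},
                {"text": instruction},
            ]
        }]
    }

payload = build_pointing_request(
    'Point to the mug\'s handle. Answer as JSON: '
    '[{"point": [y, x], "label": "<name>"}]'
)
print(json.dumps(payload, indent=2))
```

A real deployment would POST this payload to the API's `models/{model}:generateContent` endpoint with an API key, then parse the returned points to drive the grasp planner.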

Decoded Take

This release signals the maturation of the strategist layer in robotics architecture, moving beyond motor control toward genuine cognitive awareness. By integrating agentic vision and native tool calling, DeepMind is providing the software infrastructure necessary for robots to function in unmapped, legacy industrial environments. This shifts the burden of sensing from expensive hardware upgrades to sophisticated AI reasoning, potentially accelerating the deployment of autonomous inspectors across global manufacturing and energy sectors.
