Google DeepMind has shattered conventional boundaries in robotics AI with the unveiling of Gemini Robotics, a suite of models built upon the formidable foundation of Gemini 2.0. This isn’t just an incremental upgrade; it’s a paradigm shift, propelling AI from the digital realm into the tangible world with unprecedented “embodied reasoning” capabilities.
Gemini Robotics: Bridging the Gap Between Digital Intelligence and Physical Action
At the heart of this innovation lies Gemini Robotics, an advanced vision-language-action (VLA) model that transcends traditional AI limitations. By introducing physical actions as a direct output modality, Gemini Robotics empowers robots to autonomously execute tasks with a level of understanding and adaptability previously unattainable. Complementing this is Gemini Robotics-ER (Embodied Reasoning), a specialized model engineered to refine spatial understanding, enabling roboticists to seamlessly integrate Gemini’s cognitive prowess into existing robotic architectures.
These models herald a new era of robotics, promising to unlock a diverse spectrum of real-world applications. Google DeepMind’s strategic partnerships with industry leaders like Apptronik, for the integration of Gemini 2.0 into humanoid robots, and collaborations with trusted testers, underscore the transformative potential of this technology.
Key Technological Advancements:
- Unparalleled Generality: Gemini Robotics leverages Gemini’s robust world model to generalize across novel scenarios, achieving superior performance on rigorous generalization benchmarks compared to state-of-the-art VLA models.
- Intuitive Interactivity: Built on Gemini 2.0’s language understanding, the model facilitates fluid human-robot interaction through natural language commands, dynamically adapting to environmental changes and user input.
- Advanced Dexterity: The model demonstrates remarkable dexterity, executing complex manipulation tasks like origami folding and intricate object handling, showcasing a significant leap in robotic fine motor control.
- Versatile Embodiment: Gemini Robotics’ adaptability extends to various robotic platforms, from bi-arm systems like ALOHA 2 and Franka arms to advanced humanoid robots like Apptronik’s Apollo.
Gemini Robotics-ER: Pioneering Spatial Intelligence
Gemini Robotics-ER elevates spatial reasoning, a critical component for effective robotic operation. By enhancing capabilities such as pointing, 3D object detection, and spatial understanding, this model enables robots to perform tasks with heightened precision and efficiency.
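The article does not specify the output format for these pointing and detection queries, but a common convention for vision-language models is to return labeled points with coordinates normalized to a 0–1000 grid. Below is a minimal, hypothetical sketch of turning such a reply into pixel coordinates; the `parse_points` helper and the JSON shape are assumptions for illustration, not the documented Gemini Robotics-ER interface:

```python
import json

def parse_points(model_reply: str, img_w: int, img_h: int):
    """Convert a pointing reply (assumed JSON with coordinates normalized
    to a 0-1000 grid) into pixel coordinates for the given image size."""
    points = json.loads(model_reply)
    return [
        {"label": p["label"],
         "x": p["x"] / 1000 * img_w,
         "y": p["y"] / 1000 * img_h}
        for p in points
    ]

# Example reply a model might produce when asked to point at a mug handle.
reply = '[{"label": "mug handle", "x": 512, "y": 340}]'
print(parse_points(reply, img_w=640, img_h=480))
```

Downstream, such pixel coordinates could feed a grasp planner or a 3D lookup against a depth map.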
Gemini 2.0: Enabling Zero-Shot and Few-Shot Robot Control
A defining feature of Gemini 2.0 is its ability to facilitate zero-shot and few-shot robot control. This eliminates the need for extensive training on robot action data, enabling robots to perform complex tasks “out of the box.” By uniting perception, state estimation, spatial reasoning, planning, and control within a single model, Gemini 2.0 surpasses previous multi-model approaches.
- Zero-Shot Control via Code Generation: Gemini Robotics-ER leverages its code generation capabilities and embodied reasoning to control robots using API commands, reacting and replanning as needed. The model’s enhanced embodied understanding results in a near 2x improvement in task completion compared to Gemini 2.0.
- Few-Shot Control via In-Context Learning (ICL): By conditioning the model on a small number of demonstrations, Gemini Robotics-ER can quickly adapt to new behaviors.
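In-context learning of this kind typically means packing a handful of demonstrations directly into the prompt so the model imitates them on a new task. The sketch below shows that idea with a hypothetical prompt builder; the prompt wording and the `(observation, actions)` demo format are illustrative assumptions, not the format used in the report:

```python
def build_icl_prompt(instruction: str, demos: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt: each demonstration pairs an observation
    summary with the action sequence the robot executed."""
    parts = ["You control a bi-arm robot. Imitate the demonstrated behavior."]
    for i, (obs, actions) in enumerate(demos, 1):
        parts.append(f"Demonstration {i}:\nObservation: {obs}\nActions: {actions}")
    parts.append(f"Task: {instruction}\nActions:")
    return "\n\n".join(parts)

demos = [("banana on the left tray", "grasp(banana); place(right_tray)")]
print(build_icl_prompt("move the apple to the right tray", demos))
```

Conditioning on demonstrations this way requires no weight updates, which is what lets the model adapt to new behaviors so quickly.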
(Figure: the perception and control APIs, and the agentic orchestration during an episode; this system is used for zero-shot control.)

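The zero-shot loop described above, where the model generates code against perception and control APIs and replans as needed, can be sketched as follows. Everything here is hypothetical: `RobotAPI`, its method names, and the fixed plan standing in for model-generated code are illustrative stand-ins, since the actual API surface is not public:

```python
class RobotAPI:
    """Stub of a perception/control API a model could call from generated code."""
    def __init__(self):
        self.gripper_at = None
        self.holding = None
        # Canned detections standing in for the perception stack.
        self._scene = {"banana": (0.4, 0.2), "bowl": (0.7, 0.5)}

    def detect(self, label: str):
        """Perception: return an (x, y) estimate for a named object."""
        return self._scene[label]

    def move_to(self, xy):
        """Control: move the gripper to a workspace coordinate."""
        self.gripper_at = xy

    def grasp(self, label: str):
        self.move_to(self.detect(label))
        self.holding = label

    def release(self):
        self.holding = None

def run_episode(api: RobotAPI, instruction: str):
    """Agentic orchestration: the model would emit API calls, observe the
    results, and replan; a fixed plan stands in for the generated code here."""
    api.grasp("banana")
    api.move_to(api.detect("bowl"))
    api.release()
    return api.gripper_at

api = RobotAPI()
print(run_episode(api, "put the banana in the bowl"))
```

In the real system, each iteration would re-query perception so the plan can react to a moved object or a failed grasp, which is where the replanning behavior comes from.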
Commitment to Safety
Google DeepMind prioritizes safety through a multi-layered approach, addressing concerns from low-level motor control to high-level semantic understanding. The integration of Gemini Robotics-ER with existing safety-critical controllers and the development of mechanisms to prevent unsafe actions underscore this commitment.
The release of the ASIMOV dataset and the framework for generating data-driven “Robot Constitutions” further demonstrates Google DeepMind’s dedication to advancing robotics safety research.
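To make the "Robot Constitution" idea concrete, one simple realization is a screening layer that checks each proposed action against a set of natural-language rules before execution. The sketch below is deliberately trivial, using a keyword match; the ASIMOV dataset and the data-driven constitutions in the report are far richer than this illustration:

```python
# Illustrative rule set; real constitutions are generated from data.
RULES = [
    "Do not bring sharp objects near a person.",
    "Do not exceed safe gripper force on soft objects.",
]

def screen_action(action: str) -> bool:
    """Reject a proposed action if it matches an obviously unsafe pattern.
    Returns True if the action is allowed to execute."""
    unsafe_markers = ("knife near person", "max force on")
    return not any(marker in action for marker in unsafe_markers)

print(screen_action("hand the knife near person"))  # False: blocked
print(screen_action("place cup on table"))          # True: allowed
```

A production system would layer checks like this above a safety-critical low-level controller, so semantic screening and collision avoidance operate independently.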
Intelligent robots are getting closer…
Check out the full Gemini Robotics report and Gemini Robotics. All credit for this research goes to the researchers of this project.