Google DeepMind Unveils Gemini Robotics AI Models That Can Control Robots in the Real World

Google DeepMind unveiled Gemini Robotics and Gemini Robotics-ER (embodied reasoning) AI models.

Highlights
  • Google is partnering with Apptronik to build humanoid robots
  • Gemini Robotics offers generality, interactivity, and dexterity
  • The models were trained on data from the robotic platform Aloha 2
Gemini Robotics-ER focuses on spatial reasoning in real-world environments

Photo Credit: Google

Google DeepMind unveiled two new artificial intelligence (AI) models on Thursday that can control robots to perform a wide range of tasks in real-world environments. Dubbed Gemini Robotics and Gemini Robotics-ER (embodied reasoning), these are advanced vision-language models capable of displaying spatial intelligence and performing actions. The Mountain View-based tech giant also revealed that it is partnering with Apptronik to build Gemini 2.0-powered humanoid robots. The company is continuing to test these models in order to evaluate them further and understand how to improve them.

Google DeepMind Unveils Gemini Robotics AI Models

In a blog post, DeepMind detailed the new AI models for robots. Carolina Parada, the Senior Director and Head of Robotics at Google DeepMind, said that for AI to be helpful to people in the physical world, it must demonstrate "embodied" reasoning: the ability to understand and interact with the physical world and perform actions to complete tasks.

Gemini Robotics, the first of the two AI models, is an advanced vision-language-action (VLA) model built on the Gemini 2.0 model. It adds a new output modality of "physical actions", which allows the model to directly control robots.

DeepMind highlighted that to be useful in the physical world, AI models for robotics require three key capabilities: generality, interactivity, and dexterity. Generality refers to a model's ability to adapt to different situations. Gemini Robotics is "adept at dealing with new objects, diverse instructions, and new environments," the company claimed. In internal testing, the researchers found that the AI model more than doubles performance on a comprehensive generalisation benchmark.

The AI model's interactivity is built on the foundation of Gemini 2.0, and it can understand and respond to commands phrased in everyday, conversational language, across multiple languages. Google claimed that the model also continuously monitors its surroundings, detects changes to the environment or to its instructions, and adjusts its actions accordingly.

Finally, DeepMind claimed that Gemini Robotics can perform extremely complex, multi-step tasks that require precise manipulation of the physical environment. The researchers said the AI model can control robots to fold a piece of paper or pack a snack into a bag.

The second AI model, Gemini Robotics-ER, is also a vision-language model, but it focuses on spatial reasoning. Drawing on Gemini 2.0's coding and 3D detection capabilities, the model is said to be able to work out the right moves for manipulating an object in the real world. Highlighting an example, Parada said that when the model was shown a coffee mug, it was able to generate a command for a two-finger grasp to pick it up by the handle along a safe trajectory.

The AI model performs many of the steps necessary to control a robot in the physical world, including perception, state estimation, spatial understanding, planning, and code generation. Notably, neither of the two AI models is currently publicly available. DeepMind will likely first integrate them into humanoid robots and evaluate their capabilities before releasing the technology.


© Copyright Red Pixels Ventures Limited 2025. All rights reserved.