
Google Unveils SignGemma, an AI Model That Can Translate Sign Language Into Spoken Text

Google highlighted that SignGemma will be an open-source AI model when it is released.


Photo Credit: X/Google DeepMind

SignGemma is expected to be released near the end of the year

Highlights
  • SignGemma is being developed by Google DeepMind
  • It will be part of the Gemma series of AI models
  • The AI model was first showcased at the Google I/O 2025 keynote

Google has announced SignGemma, a new artificial intelligence (AI) model that can translate sign language into spoken text. The model, which will join the Gemma series, is currently being tested by the Mountain View-based tech giant and is expected to launch later this year. Like the other Gemma models, SignGemma will be open source and available to individuals and businesses. It was first showcased during the Google I/O 2025 keynote and is designed to help people with speech and hearing disabilities communicate effectively, even with those who do not understand sign language.

SignGemma Can Track Hand Movements and Facial Expressions

In a post on X (formerly known as Twitter), the official Google DeepMind handle shared a demo of the AI model and some details about its release window. However, this is not the first time SignGemma has been seen. It was also briefly showcased at the Google I/O event by Gus Martins, Gemma Product Manager at DeepMind.

During the showcase, Martins highlighted that the AI model can translate sign language into text in real-time, making face-to-face communication seamless. The model was trained on datasets spanning different sign languages; however, it performs best when translating American Sign Language (ASL) into English.

According to MultiLingual, because SignGemma is an open-source model, it can run on-device without an Internet connection, making it suitable for areas with limited connectivity. It is said to be built on the Gemini Nano framework and to use a vision transformer to track and analyse hand movements, shapes, and facial expressions. Beyond making it available to developers, Google could integrate the model into its existing AI tools, such as Gemini Live.
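Google has not published SignGemma's architecture in detail, so the following is a purely illustrative sketch, assuming a generic vision-transformer encoder feeding a small text head in PyTorch; none of the class or parameter names below come from Google, and this is not the actual implementation:

```python
# Illustrative sketch only: SignGemma's real architecture is not public.
# Shows the general shape of a vision-transformer-style sign-to-text
# pipeline (video-frame patches -> transformer encoder -> text head).
# All names and dimensions here are hypothetical.

import torch
import torch.nn as nn


class SignToTextSketch(nn.Module):
    def __init__(self, patch_dim=3 * 16 * 16, d_model=256, vocab_size=8000):
        super().__init__()
        # Project flattened frame patches (hand and face regions) into tokens.
        self.patch_embed = nn.Linear(patch_dim, d_model)
        # Transformer encoder over the sequence of patch tokens across frames.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4,
        )
        # A small head mapping pooled video features to text-token logits.
        self.text_head = nn.Linear(d_model, vocab_size)

    def forward(self, patches):
        # patches: (batch, num_patches, patch_dim) extracted from video frames.
        tokens = self.patch_embed(patches)
        encoded = self.encoder(tokens)
        pooled = encoded.mean(dim=1)      # crude global pooling
        return self.text_head(pooled)     # logits over a text vocabulary


if __name__ == "__main__":
    model = SignToTextSketch()
    dummy = torch.randn(1, 64, 3 * 16 * 16)  # 64 patches from a short clip
    print(model(dummy).shape)                 # torch.Size([1, 8000])
```

The sketch only conveys the general data flow described in the report: patches covering hands and face are encoded by a transformer and mapped to text tokens; a production system would add temporal modelling and a proper autoregressive text decoder.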

Calling it “our most capable model for translating sign language into spoken text,” DeepMind highlighted that it will be released later this year. The accessibility-focused large language model is currently in its early testing phase, and the tech giant has published an interest form to invite individuals to try it out and provide feedback.


Akash Dutta
Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse.