
Google Introduces PaliGemma 2 Family of Open Source AI Vision-Language Models

PaliGemma 2 AI models can see, understand, and interact with visual input.


PaliGemma 2 is the successor to Google's PaliGemma, which was released in May

Highlights
  • PaliGemma 2 is available in 3B, 10B, and 28B parameter sizes
  • The new vision models are built on Google’s Gemma 2 AI models
  • Google says PaliGemma 2 can describe actions and emotions in an image

Google introduced the successor to its PaliGemma artificial intelligence (AI) vision-language model on Thursday. Dubbed PaliGemma 2, the family of AI models improves upon the capabilities of the older generation. The Mountain View-based tech giant said the vision-language model can see, understand, and interact with visual input such as images and other visual assets. It is built using the Gemma 2 small language models (SLMs), which were released in August. Interestingly, the tech giant claimed that the model can analyse emotions in uploaded images.

Google PaliGemma 2 AI Model

In a blog post, the tech giant detailed the new PaliGemma 2 AI model. While Google has several vision-language models, PaliGemma was the first such model in the Gemma family. Vision models differ from typical large language models (LLMs) in that they have additional encoders that analyse visual content and convert it into a data format the language model can process. This way, vision models can technically “see” and understand the external world.
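The encoder-plus-language-model pipeline described above can be sketched in a few lines. This is a toy illustration only, with made-up dimensions and random weights; the real PaliGemma 2 pairs a trained vision encoder with a Gemma 2 language model, and none of the names below come from Google's code.

```python
import numpy as np

# Hypothetical dimensions for illustration only; real models are far larger.
EMBED_DIM = 8      # shared embedding width shared by image and text tokens
NUM_PATCHES = 4    # number of image patches the toy encoder produces

rng = np.random.default_rng(0)

def vision_encoder(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a vision encoder: split the image into patches
    and project each patch into the shared embedding space."""
    patches = image.reshape(NUM_PATCHES, -1)                 # (4, 16)
    projection = rng.normal(size=(patches.shape[1], EMBED_DIM))
    return patches @ projection                              # (4, 8)

def embed_text(token_ids: list[int]) -> np.ndarray:
    """Toy text-embedding table lookup."""
    table = rng.normal(size=(100, EMBED_DIM))
    return table[token_ids]

# An 8x8 "image" split into 4 patches of 16 pixels each.
image = rng.normal(size=(8, 8))
image_tokens = vision_encoder(image)
text_tokens = embed_text([1, 2, 3])   # e.g. a prompt like "describe this image"

# The language model then sees one combined sequence of embeddings,
# which is what lets it "see" the image alongside the text prompt.
sequence = np.concatenate([image_tokens, text_tokens], axis=0)
print(sequence.shape)  # (7, 8): 4 image tokens + 3 text tokens
```

The key idea is that once image patches are projected into the same embedding space as text tokens, the language model can attend over both in a single sequence.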

One benefit of a smaller vision model is that it can be used in a wide range of applications, as smaller models are optimised for speed and accuracy. With PaliGemma 2 being open-sourced, developers can build its capabilities into their own apps.

PaliGemma 2 comes in three parameter sizes: 3 billion, 10 billion, and 28 billion. It is also available in 224x224, 448x448, and 896x896 pixel input resolutions. As a result, the tech giant claims it is easy to optimise the AI model's performance for a wide range of tasks. Google says the model generates detailed, contextually relevant captions for images; it can not only identify objects but also describe actions, emotions, and the overall narrative of the scene.
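The three input resolutions translate directly into how many image tokens the language model must process. As a rough sketch, assuming a 14-pixel patch size as used by SigLIP-style vision encoders (an assumption; the exact tokenisation is model-specific), the token counts grow quadratically with resolution:

```python
# Rough token-count arithmetic for the three input resolutions,
# assuming a 14-pixel square patch (SigLIP-style; an assumption).
PATCH = 14

for side in (224, 448, 896):
    patches_per_side = side // PATCH          # patches along one edge
    tokens = patches_per_side ** 2            # total image tokens
    print(f"{side}x{side} -> {patches_per_side}x{patches_per_side} "
          f"patches = {tokens} image tokens")
```

Under this assumption, 224x224 input yields 256 image tokens while 896x896 yields 4,096, which is why the higher resolutions are better suited to fine-grained tasks but cost more compute.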

Google highlighted that the tool can be used for chemical formula recognition, music score recognition, spatial reasoning, and chest X-ray report generation. The company has also published a paper on the online pre-print repository arXiv.

Developers and AI enthusiasts can download the PaliGemma 2 model and its code from Hugging Face and Kaggle. The AI model supports frameworks such as Hugging Face Transformers, Keras, PyTorch, JAX, and Gemma.cpp.


Akash Dutta
Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food. More

© Copyright Red Pixels Ventures Limited 2025. All rights reserved.