Google Clips AI-Based Camera Was Trained With the Help of Pro Photographers

Highlights
  • With human-centred machine learning, Clips captures meaningful images
  • In a blog, Google describes how Clips camera works
  • Google admits that training a camera like Clips can never be bug-free

Last October, Google unveiled an automatic camera called Google Clips. The Clips camera is designed to hold off from taking a picture until it sees faces or frames it recognises as a good image, capturing candid moments of familiar people and pets using on-device machine intelligence. Google began selling the camera over the weekend at $249 (roughly Rs. 16,200), and it is already listed as out of stock on Google's product store.

How does the Google Clips camera understand what makes for a beautiful and memorable photograph? In a blog post, Josh Lovejoy, a UX Designer at Google, explained the process his team used to combine a "human-centred approach" with an "AI-powered product". Google wants the camera to avoid taking a large number of shots of the same subject and instead surface one or two good ones. With human-centred machine learning, the camera learns to select photos that are meaningful to users.
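Google has not published the Clips selection code, but the idea of scoring frames and keeping only one or two of the best, well-spaced shots can be illustrated with a short, hypothetical sketch. The scoring model, function names, and thresholds below are placeholders, not Google's implementation:

```python
# Hypothetical sketch: keep only the best one or two frames per scene.
# The "interestingness" score is assumed to come from an on-device model.

from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float   # seconds since capture started
    score: float       # assumed score from an on-device model, higher is better

def select_best(frames, max_keep=2, min_gap=5.0):
    """Pick up to `max_keep` high-scoring frames, spaced at least
    `min_gap` seconds apart so near-duplicate shots are skipped."""
    kept = []
    for frame in sorted(frames, key=lambda f: f.score, reverse=True):
        if all(abs(frame.timestamp - k.timestamp) >= min_gap for k in kept):
            kept.append(frame)
        if len(kept) == max_keep:
            break
    return kept

# Example: three near-duplicate frames and one distinct moment.
frames = [Frame(1.0, 0.91), Frame(1.5, 0.90), Frame(2.0, 0.88), Frame(30.0, 0.75)]
print(select_best(frames))   # keeps the 1.0s frame and the 30.0s frame
```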

To feed the camera's algorithms with examples of what the best images look like, Google called in professional photographers. It hired a documentary filmmaker, a photojournalist, and a fine arts photographer to gather visual data for training the neural network powering the Clips camera. Josh Lovejoy wrote, "Together, we began gathering footage from people on the team and trying to answer the question, 'What makes a memorable moment?'"

Notably, Google admits that training a camera like Clips can never be bug-free, regardless of how much data is fed to the device. It may recognise a well-framed, well-focussed shot but still miss an important moment. However, in the blog, Lovejoy writes, "But it's precisely this fuzziness that makes ML so useful! It's what helps us craft dramatically more robust and dynamic 'if' statements, where we can design something to the effect of 'when something looks sort of like x, do y.'"
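Lovejoy's "when something looks sort of like x, do y" framing can be read as a learned confidence score feeding an ordinary threshold check. A minimal, hypothetical sketch of that pattern (the stand-in model and the threshold value are assumptions, not Google's code):

```python
# Hypothetical sketch of the "fuzzy if" pattern Lovejoy describes:
# a learned model returns a confidence that the frame "looks sort of like"
# a good moment, and a plain threshold decides whether to act on it.

def looks_like_good_moment(frame) -> float:
    """Stand-in for an on-device model; returns a confidence in [0, 1]."""
    # A real implementation would run a trained network here.
    return 0.8 if frame.get("faces") else 0.1

def maybe_capture(frame, threshold=0.7):
    # "When something looks sort of like x, do y."
    if looks_like_good_moment(frame) >= threshold:
        return "capture"
    return "skip"

print(maybe_capture({"faces": 2}))   # capture
print(maybe_capture({"faces": 0}))   # skip
```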

The blog essentially describes how the company's UX engineers have been able to apply a new tool to embed human-centred design into projects like the Clips camera. In an earlier blog post on Medium, Josh Lovejoy explained the seven core principles behind human-centred machine learning.

It is also interesting to note that Elon Musk, chief executive of Tesla, SpaceX, and SolarCity, had taken a jibe at the Google Clips camera back in October, saying, "This doesn't even seem innocent."
