Photo Credit: Johannes Eisele/AFP
Google's latest smartphone demonstrates how artificial intelligence and software can enhance a camera -- one of the most important selling points of any mobile device.
The Pixel 4, the latest entrant in a phone line defined by its cameras, touts improved zoom as its biggest upgrade. But the Alphabet company isn't going about it the way Samsung Electronics, Huawei Technologies, and Apple have: instead of adding multiple cameras with complicated optics, Google has opted for a single extra lens, relying on AI and processing to fill the quality gap.
In place of the usual spec barrage, Google prefers to talk about a "software-defined camera," Isaac Reynolds, product manager on the company's Pixel team, said in an interview. The device should be judged by the end product, he argued: Google claims a 3x digital zoom that matches the quality of optical zoom from multi-lens arrays. The Pixel 4's two lenses differ in magnification by less than 2x; the tech that extends that useful range is almost entirely software.
The success of the Pixel's camera is instrumental to Google's broader ambitions: it drives Google Photos adoption, provides more fodder for Google's image libraries, and helps create better experiences with augmented-reality applications -- such as this year's new on-screen walking directions in Google Maps.
Super Res Zoom, a feature Google launched last year, turns the slight hand movements of a photographer -- usually a hurdle to crisp images -- into an advantage. The camera shoots a burst of quick takes, each from a slightly different position because of the shake, then combines them into a single image that's sharper than any individual frame. It's an algorithmic trick that extracts more information from the same imaging hardware, and potentially a moat against rivals trying to copy Google: others can't just buy the same imaging sensors and replicate the results.
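Google hasn't published the Pixel 4's exact pipeline, but the core idea can be sketched. If each burst frame samples the scene at a slightly different sub-pixel offset, the samples can be scattered back onto a finer grid. The toy Python/NumPy simulation below illustrates that merge step; the scene, the block-averaging "sensor" and the shifts are all illustrative stand-ins, and the shifts are known here rather than estimated, as a real pipeline (which must also handle subject motion and noise) would require.

```python
import numpy as np

rng = np.random.default_rng(0)
SCALE = 4  # each burst frame is SCALE x smaller than the target grid

# Stand-in scene: a high-res grid pattern (a real camera sees photons).
scene = np.zeros((128, 128))
scene[::8, :] = 1.0
scene[:, ::8] = 1.0

def capture(sy, sx):
    """Simulate one burst frame: hand shake shifts the scene by a
    fraction of a low-res pixel, then the sensor block-averages."""
    shifted = np.roll(scene, (sy, sx), axis=(0, 1))
    h, w = shifted.shape
    return shifted.reshape(h // SCALE, SCALE, w // SCALE, SCALE).mean(axis=(1, 3))

# Sixteen frames, each at a random sub-pixel offset.
shifts = [(int(rng.integers(SCALE)), int(rng.integers(SCALE)))
          for _ in range(16)]
frames = [capture(sy, sx) for sy, sx in shifts]

# Merge: scatter every frame's samples back onto the fine grid at the
# position its shift implies, then average wherever samples landed.
acc = np.zeros_like(scene)
cnt = np.zeros_like(scene)
for (sy, sx), frame in zip(shifts, frames):
    up = np.zeros_like(scene)
    mask = np.zeros_like(scene)
    up[::SCALE, ::SCALE] = frame   # place samples on the fine grid
    mask[::SCALE, ::SCALE] = 1.0
    acc += np.roll(up, (-sy, -sx), axis=(0, 1))   # undo the shake
    cnt += np.roll(mask, (-sy, -sx), axis=(0, 1))

merged = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
# 'merged' resolves grid lines that any single 32x32 frame averages away.
```

In the real system the shifts must be estimated from the frames themselves, and pixels that don't match (because the subject moved) have to be rejected -- that robust alignment is where most of the engineering lives.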
To support its reliance on AI and machine learning, Google has designed and added its own Pixel Neural Core chip to the Pixel 4 lineup. It accelerates the device's machine learning and, again, is intended to differentiate Google's offering from the many other Android smartphones built around a Qualcomm Snapdragon processor.
The other major tool in Google's AI kit is called RAISR, or Rapid and Accurate Image Super Resolution, which is trained on vast libraries of photos so it can more effectively enhance the resolution of new ones. The system learns to recognise particular patterns, edges and visual features, so that when it detects them in lower-quality shots, it knows how to improve them. That's key to creating zoom with "a lot smoother quality degradation," as Reynolds put it. With more than a billion Google Photos users, the US company has a massive supply of images to train its software on.
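Stripped of its refinements, RAISR's central trick is to learn filters that map patches of a cheaply upscaled (or otherwise degraded) image to the sharp pixels they should have produced. The sketch below learns a single such filter by least squares; the published RAISR method goes further, hashing each patch by its gradient angle, strength and coherence and learning a separate filter per bucket. The blur, image sizes and training data here are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5  # filter / patch size

def box_blur(img, r=1):
    """Box blur, standing in for the detail lost by cheap upscaling."""
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / (2 * r + 1) ** 2

def patch_matrix(img, k=K):
    """One flattened k x k patch per row, valid positions only."""
    h, w = img.shape
    return np.array([img[y:y + k, x:x + k].ravel()
                     for y in range(h - k + 1) for x in range(w - k + 1)])

def centers(img, k=K):
    """Ground-truth pixels aligned with each patch's centre."""
    m = k // 2
    return img[m:-m, m:-m].ravel()

# "Training set": pairs of (degraded, ground truth) images. Real RAISR
# trains on huge photo libraries; smooth random fields stand in here.
pairs = []
for _ in range(4):
    truth = box_blur(rng.random((40, 40)), r=2)
    pairs.append((box_blur(truth), truth))

A = np.vstack([patch_matrix(lo) for lo, _ in pairs])
b = np.concatenate([centers(hi) for _, hi in pairs])
filt, *_ = np.linalg.lstsq(A, b, rcond=None)  # the learned filter

# Apply the filter to an unseen degraded image and compare errors.
truth = box_blur(rng.random((40, 40)), r=2)
degraded = box_blur(truth)
restored = patch_matrix(degraded) @ filt
print("restored error:", np.abs(restored - centers(truth)).mean())
print("degraded error:", np.abs(centers(degraded) - centers(truth)).mean())
```

Because the filters are tiny and linear, applying them is cheap enough to run on a phone -- the expensive learning happens once, offline, on the training library.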
Among the other features Google offers with the Pixel 4 is the ability to identify the faces of the people a user photographs most often and prioritise them when capturing new snapshots -- making sure the camera focuses on them and that their eyes aren't closed, for instance. That reliance on software has defined Google's devices to date, and it is also evident in the way Facebook, Amazon.com and Apple aim to employ their own AI systems.
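Google hasn't detailed how this prioritisation works. One plausible shape for it, sketched below, is to compare faces detected in the viewfinder against embeddings of faces from the user's past photos and favour the most familiar one. The embedding model, similarity threshold and data here are hypothetical stand-ins, not Google's implementation.

```python
import numpy as np

def familiarity(face, history, thresh=0.6):
    """Count past faces whose cosine similarity to `face` exceeds thresh."""
    sims = history @ face / (
        np.linalg.norm(history, axis=1) * np.linalg.norm(face))
    return int((sims > thresh).sum())

def pick_focus_target(faces, history):
    """Index of the detected face the user photographs most often."""
    return int(np.argmax([familiarity(f, history) for f in faces]))

# Hypothetical 128-d embeddings; a real system would compute these with
# an on-device face recogniser run over the photo library.
rng = np.random.default_rng(2)
history = rng.standard_normal((50, 128))
faces = [rng.standard_normal(128) for _ in range(3)]
# Make face 1 "familiar": past shots cluster around the same person.
history[:20] = faces[1] + 0.3 * rng.standard_normal((20, 128))

print(pick_focus_target(faces, history))  # -> 1 (the familiar face wins)
```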
© 2019 Bloomberg LP