Google’s ‘Frankenphone’ Helped Gather Training Data for Portrait Mode on the Pixel 3

By Roydon Cerejo | Updated: 30 November 2018 16:49 IST
Highlights
  • A learning-based depth map is used to improve portrait results in Pixel 3
  • A special rig with five Pixel phones was used to gather training data
  • Other computational features include Night Sight and Super Res Zoom

Google has been leading the charge in computational photography on smartphones ever since the first Pixel, when it introduced its HDR+ technology. The search giant has not looked back since, and with the introduction of the Pixel 3 series, it's evident that it is miles ahead of the competition when it comes to camera software. The Pixel 3 and Pixel 3 XL are Google's current flagship Android smartphones and have some of the best cameras in the business, largely thanks to refinements to the Portrait mode algorithm and new features like Night Sight and Super Res Zoom.

Earlier this week, Gadgets 360 attended a media roundtable with Marc Levoy, a distinguished engineer at Google's research lab, where he spoke in detail about how these new technologies were developed for the latest smartphones.

In the Pixel 3 (Review) series, Google has revamped Portrait mode, moving away from purely stereo-based depth maps to a learning-based technique that delivers more accurate edge detection and a more realistic background blur. Google has also published a blog post detailing the change, but here's a quick summary of how it works and what's different from Portrait mode on the Pixel 2 (Review).



The Pixel 3 still has a single rear camera, like its predecessors, and continues to use the dual pixels on the sensor to estimate a stereo depth map of most objects in the scene so it can separate the subject from the background, which gives you that bokeh or depth effect. However, this time Google is also using machine learning on the rear camera for more accurate segmentation of people from the background. “It's a convolutional neural network that estimates the probability of a person at each pixel in the image,” says Levoy. The selfie camera relies on this new learning-based technique too, as it lacks the dual-pixel autofocus system.
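To make the idea concrete, here is a minimal sketch, assuming a hypothetical per-pixel person-probability map from such a network and a dual-pixel depth estimate as inputs, of how the two signals could be combined to isolate the subject. The function name, thresholds, and median-depth heuristic are illustrative assumptions, not Google's implementation.

```python
# Hypothetical sketch: combine a learned person-probability map with a
# dual-pixel depth estimate to pick out the subject. The thresholds and the
# median-depth heuristic are illustrative assumptions, not Google's code.
import numpy as np

def foreground_mask(person_probability: np.ndarray,
                    dual_pixel_depth: np.ndarray,
                    prob_threshold: float = 0.5,
                    depth_margin: float = 0.1) -> np.ndarray:
    """Return a boolean mask of pixels that likely belong to the subject."""
    # Pixels the network believes belong to a person.
    person = person_probability > prob_threshold
    if not person.any():
        return person

    # A rough subject distance: the median depth of person-labelled pixels.
    subject_depth = np.median(dual_pixel_depth[person])

    # Keep person-like pixels that also sit near the subject's depth, so
    # background clutter with person-like appearance gets rejected.
    near_subject = np.abs(dual_pixel_depth - subject_depth) < depth_margin
    return person & near_subject
```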


A combination of this learning-based neural network and the dual-pixel depth data allows the new algorithm to deliver a more realistic blur. This means the blur on objects behind the human subject varies depending on their distance from the subject. Levoy further states that even though the final result might look pleasing, it's not an accurate representation of what a DSLR with a wide-aperture lens would capture. The algorithm deliberately keeps a “zone of depth” around the person, so that things like the person's hands, hair, and other elements close to them are also in sharp focus, which “makes it easier for novices to take pictures.”
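A simplified way to picture that “zone of depth” is sketched below: pixels whose depth falls within a band around the subject stay sharp, and the blur ramps up with distance outside it. The single Gaussian blend, the zone width, and the input shapes are assumptions for illustration; the Pixel's actual synthetic-bokeh renderer is considerably more sophisticated.

```python
# Illustrative only: depth-dependent blur that keeps a "zone of depth" around
# the subject sharp. Blending one blurred copy per pixel is a big
# simplification of a real bokeh renderer.
import numpy as np
from scipy.ndimage import gaussian_filter

def render_bokeh(image: np.ndarray, depth: np.ndarray, subject_depth: float,
                 in_focus_zone: float = 0.15, max_sigma: float = 8.0) -> np.ndarray:
    """Blur each pixel of an (H, W, 3) image by its depth distance from the subject."""
    sharp = image.astype(np.float64)
    distance = np.abs(depth - subject_depth)

    # 0 inside the zone of depth, rising towards 1 far in front of or behind it.
    weight = np.clip((distance - in_focus_zone) / (1.0 - in_focus_zone), 0.0, 1.0)

    # Blend per pixel between the sharp frame and a heavily blurred copy.
    blurred = gaussian_filter(sharp, sigma=(max_sigma, max_sigma, 0))
    return (1.0 - weight[..., None]) * sharp + weight[..., None] * blurred
```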

The 'Frankenphone' (left) used to capture multiple perspectives, and the generated depth map (right). Image credit: Google


The thing about a neural network is that while it's very efficient once it's up and running, it first needs to be trained, which means feeding it hundreds of thousands of examples. To achieve this, Google built a specialised rig, or a 'Frankenphone' as it calls it, consisting of five Pixel 3 phones that each captured the same shot from a slightly different perspective. This let Google compute high-quality depth maps for every photo taken, which were then used to train the neural network.
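In training terms, the depth maps recovered from the five synchronised viewpoints serve as the ground truth that a single-camera network learns to reproduce. The sketch below expresses that idea as a generic PyTorch training step; the model, the L1 loss, and the tensor names are assumptions, since Google hasn't published its exact training code.

```python
# Hypothetical supervision sketch: the rig-derived depth map is the target a
# single-phone depth network is trained to match. The model, optimiser and
# loss choices here are illustrative assumptions.
import torch
import torch.nn.functional as F

def training_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
                  phone_image: torch.Tensor, rig_depth: torch.Tensor) -> float:
    """One gradient step comparing predicted depth against the rig's depth map."""
    optimizer.zero_grad()
    predicted_depth = model(phone_image)          # expected shape (N, 1, H, W)
    loss = F.l1_loss(predicted_depth, rig_depth)  # mean per-pixel depth error
    loss.backward()
    optimizer.step()
    return loss.item()
```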


Some of the other stand-out features in the Pixel 3 phones are Night Sight and Super Res Zoom. Night Sight uses the multi-frame exposure technique that forms the basis of HDR+ to give you cleaner and brighter night-time photos. We've looked at this feature in great detail and also tested how it works on all three generations of Pixel smartphones, which you can read about here. Super Res Zoom is a feature exclusive to the Pixel 3 and Pixel 3 XL (Review), which improves the quality of digitally zoomed images. We covered this extensively in our full review of both phones and in our camera comparison against the Samsung Galaxy Note 9 (Review) and Apple iPhone XS.
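To get a feel for why multi-frame capture helps, the toy example below simply averages a burst of already-aligned frames, which cuts random sensor noise roughly in proportion to the square root of the frame count. Real HDR+ and Night Sight perform robust tile-based alignment and merging, so treat this only as the underlying intuition.

```python
# Toy illustration of multi-frame merging: averaging aligned short exposures
# reduces random sensor noise. It skips the alignment and motion rejection
# that HDR+ and Night Sight actually perform.
import numpy as np

def merge_burst(frames: list) -> np.ndarray:
    """Average a burst of pre-aligned frames that all share the same shape."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```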

 
