
Google developed its own mobile chip to help smartphones take better photos

Popular Science, 22/10/2017, by Stan Horaczek

The Google Pixel 2 camera skips the dual-lens setup implemented by many other manufacturers. © Stan Horaczek

Back in the film photography days, different films produced distinct “looks”—say, light and airy or rich and contrasty. An experienced photographer could look at a shot and guess what kind of film it was on from things like color, contrast, and grain. We don’t think about this much in the digital age; instead, we tend to think of raw digital files as neutral attempts to recreate what our eyes see. In reality, though, smartphone cameras do an intense amount of processing in the background, and engineers guide that technology to uphold an aesthetic. The new Google Pixel 2 phone uses unique algorithms and a dedicated image processor to give it its signature style.

The Pixel 2 camera was developed by a team of engineers who are also photographers, and they made subjective choices about how the smartphone’s photos should appear. The emphasis is on vibrant colors and high sharpness across the frame. “I could absolutely identify a Pixel 2 image just by looking at it,” says Isaac Reynolds, an imaging project manager on Google’s Pixel 2 development team. “I can usually look in the shadows and tell it came from our camera.”

On paper, the camera hardware in the Pixel 2 looks almost identical to what you’d find in the original, using a lens with the same coverage and a familiar resolution of 12 megapixels. But smartphone photography is increasingly dependent on algorithms and the chipsets that implement them, so that’s where Google has focused a huge chunk of its efforts. In fact, Google baked a dedicated system-on-a-chip called Pixel Visual Core into the Pixel 2 to handle the heavy lifting required for imaging and machine learning processes.

This photo came right out of the Pixel 2 camera. It has impressively vivid colors, and the bright highlights in the background (referred to as specular highlights) are well-managed to keep things from getting blown out. This happens as a result of the HDR+ system’s multiple exposures. © Stan Horaczek

For users, the biggest addition to the Pixel 2’s photography experience is its new high-dynamic range tech, which is active on “99.9 percent” of the shots you’ll take, according to Reynolds. And while high-dynamic range photos aren’t new for smartphone cameras, the Pixel 2’s version, which is called HDR+, does it in an unusual way.

Every time you press the shutter on the Pixel 2, the camera takes up to 10 photos. If you’re familiar with typical HDR, you’d expect each photo to have a different exposure in order to optimize detail in the highlights and shadows. HDR+, however, takes every image at the same exposure, allowing only for naturally occurring variations like noise, splits them up into a grid, then compares and combines the images back into a single photo. Individually, the images would look dark, which prevents highlights from blowing out; the tones in the shadows are then amplified to bring out detail. A machine learning algorithm recognizes and eliminates the digital noise that typically appears when you brighten dark areas.
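The core trick described above can be sketched in a few lines. This is an illustration only, not Google's actual HDR+ pipeline: the frame count, noise level, and tone curve below are made-up demonstration values. The idea is that averaging several frames shot at the same short exposure cuts noise, which makes it safe to brighten the shadows afterward.

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_burst(scene, n_frames=8, noise_sigma=0.05):
    """Simulate a burst of identically exposed frames with independent sensor noise."""
    return [np.clip(scene + rng.normal(0.0, noise_sigma, scene.shape), 0.0, 1.0)
            for _ in range(n_frames)]

def merge_burst(frames):
    """Averaging aligned frames cuts noise by roughly sqrt(len(frames))."""
    return np.mean(frames, axis=0)

def lift_shadows(img, gamma=0.5):
    """Simple tone map: raise dark tones while leaving highlights near 1.0 intact."""
    return img ** gamma

scene = np.full((64, 64), 0.2)   # a dim, flat synthetic "scene" (deliberately underexposed)
burst = capture_burst(scene)
merged = merge_burst(burst)
result = lift_shadows(merged)

single_noise = float(np.std(burst[0] - scene))
merged_noise = float(np.std(merged - scene))
print(f"single-frame noise: {single_noise:.4f}, merged-frame noise: {merged_noise:.4f}")
```

The real system also aligns the frames against hand shake and uses a learned model rather than a fixed gamma curve, but the underexpose-merge-brighten shape of the process is the same.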

This all happens in a fraction of a second (the exact time varies depending on specific shooting conditions), and without the user even knowing about it. You don’t have to turn on HDR+. It’s just the way the camera works.

The processing power for all of this currently comes from the phone’s main hardware, but it will eventually come from something totally new for Google: the Pixel Visual Core. It’s a dedicated mobile system-on-a-chip that’s built into Pixel 2 phones but dormant, to be turned on via a software update down the line. By offloading that work from the main processor, the Pixel 2 becomes five times quicker and 10 times more power-efficient at crunching a photo than it would be otherwise. Google basically put a smaller computer inside the smartphone specifically to handle this kind of picture processing work.

All of this is necessary because of camera hardware limitations inside a typical smartphone. “We’d love to have a full-frame sensor in there,” said Reynolds, referring to the direct relationship that often exists between the size of an imaging sensor and its low-light performance. “But, that big of a sensor would take up 40 percent of the phone body as it is.”

This shot was taken using the native Android camera app. The extremely blue sky is something the Pixel 2 team worked extensively to achieve. Also take notice of the shadows on the pole, which are dark, but still retain detail. © Stan Horaczek


This image was shot with Lightroom Mobile, which currently only gets single photos from the camera. It’s a raw file (DNG format), and there are some noticeable differences. The sky is clearly lighter in color and has some noticeable noise or artifacts, even though it was shot at ISO 53. Each DNG file also clocks in at roughly 23MB, so they take up considerably more space than the finished JPEGs. © Stan Horaczek

Right now, HDR+ is only available within the native Android camera app. If you use a third-party program like Lightroom or Camera+, you can actually see the difference between a single shot and one that’s compiled from multiple captures. The difference, as you might expect, is particularly evident in the shadows, as you can see above.

Google is planning to open up the platform to third-party developers, however, so others can take advantage of the extra computing power.

This move toward computational cameras that create images beyond what a typical camera could ever capture isn’t likely to slow down in the smartphone camera world, either. “People have come to expect a smartphone camera to take a picture that matches the scene they see with their eyes,” said Reynolds. You can already see the effects of computational photography in things like panorama modes that seamlessly stitch multiple images together.

Users also now expect smartphones to mimic more advanced cameras with all that processing power. Portrait modes that fake blur around a central subject are common on just about every platform, but rather than adding a specific portrait camera, Google has opted for a single rear-facing imaging device, letting machine learning handle the rest. “To get optical zoom or a telephoto lens, you need a bump on the back of the camera,” says Reynolds. “We could get the result we wanted with one camera.”
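The single-camera approach can be pictured as a compositing step. In this hypothetical sketch the subject mask is hand-made; on the Pixel 2 that mask comes from a machine learning model, and the blur is a far more sophisticated lens simulation. The principle is the same: keep the masked subject sharp and blur only the background.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_blur(img, radius=2):
    """Naive box blur via shifted sums -- crude, but enough to show the idea."""
    padded = np.pad(img, radius, mode="edge")
    h, w = img.shape
    k = 2 * radius + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

img = rng.random((32, 32))               # stand-in photo: random texture
mask = np.zeros((32, 32), dtype=bool)    # True where the "subject" is
mask[8:24, 8:24] = True                  # hand-drawn mask; in reality ML-predicted

blurred = box_blur(img)
portrait = np.where(mask, img, blurred)  # subject stays sharp, background softens
```

Dual-camera phones estimate depth from parallax between two lenses; a segmentation-driven composite like this is how a single camera can fake the same look.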

The Pixel Visual Core is a system-on-a-chip, including its own processor and RAM. It's dormant now, but will be turned on and opened up to third parties down the road to enable HDR+ in outside apps. © Google

Lastly, cameras also now serve more than one purpose, so the hardware needs to reflect that. Google Lens—a service that lets you point your phone at a landmark or an object to learn more about it—has different imaging requirements for capturing and recognizing objects in the real world; and augmented reality apps are similarly demanding, requiring high refresh rates and full-frame capture from the sensor.

So, while the camera specs haven’t changed much on paper, the resulting images have changed drastically. If the trend continues, those changes will become harder and harder for the user to notice, which isn’t by accident. In fact, Google’s recently announced Clips camera is designed to take almost all decision-making out of capturing photos and video—including edits. As the machines continue learning, they may never become better photographers than humans, but they could very well shape our idea of what exactly a good photo really is.
