This is how Google has achieved less noise and more detail




Google has just explained on its AI Blog the new enhancements coming to HDR+. These improvements arrive first on the latest Google Pixel 5 and Google Pixel 4a 5G, but they should not take long to reach the latest versions of the Gcam, available for a good number of Android phones that are not Pixels.



Although Google does not incorporate the latest sensors in its cameras (in fact, it has been recycling them for years), HDR+ and processing are what give its camera its "magic". We are going to explain how Google has managed to further improve detail and reduce noise in its photographs using a technique known as bracketing.






Google's HDR+ is better than ever





One of the advantages of the Pixel's photography depending mainly on software is that there is always room for improvement with updates. Currently Google is one of the companies that does HDR+ best and, together with Apple, the only one that shows it in the preview (before we take the photo). If you don't know what HDR is, we encourage you to read our in-depth explanation of how it works but, in short, it is the fusion of photographs with different exposures to maximize dynamic range.
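Google's real HDR+ pipeline aligns and merges tiles of a raw burst and is far more sophisticated, but the core idea of fusing differently exposed frames can be sketched in a few lines. Everything below (the function name, the Gaussian "well-exposedness" weighting, the grayscale frame format) is an illustrative assumption, not Google's code:

```python
import numpy as np

def fuse_exposures(frames):
    """Minimal exposure-fusion sketch: blend several grayscale frames of
    the same scene shot at different exposures, weighting each pixel by
    how close it is to mid-gray (i.e., how well exposed it is)."""
    frames = [f.astype(np.float64) / 255.0 for f in frames]  # normalize to [0, 1]
    # Well-exposedness weight: a Gaussian centered at 0.5.
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
    total = sum(weights) + 1e-12  # guard against division by zero
    fused = sum(w * f for w, f in zip(weights, frames)) / total
    return (fused * 255).astype(np.uint8)
```

Each frame contributes most where it is neither crushed into black nor blown out to white, which is exactly why combining different exposures extends dynamic range.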





Google explains that it is difficult to take HDR photographs without noise, since that noise comes from the underexposed images: when an image is underexposed, the signal the sensor captures is weak and therefore noisy. At the same time, HDR on mobile is still a burst shot, with several photographs taken in quick succession, and those short burst frames also carry noise, making things even more complicated.
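A quick way to see why underexposed frames are noisy is photon shot noise: the number of photons a pixel collects follows a Poisson distribution, so the signal-to-noise ratio only grows with the square root of the light gathered. A minimal simulation (the photon counts are illustrative, not values from Google):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_snr(photons_per_pixel, n_pixels=100_000):
    """Photon arrivals are Poisson-distributed, so SNR = mean / std
    grows roughly as sqrt(photons_per_pixel)."""
    signal = rng.poisson(photons_per_pixel, n_pixels)
    return signal.mean() / signal.std()

for photons in (10, 100, 1000):  # short exposure -> long exposure
    print(f"{photons:5d} photons/pixel -> SNR ~ {simulate_snr(photons):.1f}")
# Prints roughly sqrt(10) ~ 3.2, sqrt(100) = 10, sqrt(1000) ~ 31.6
```

Ten times more light only buys about three times less relative noise, which is why short, underexposed burst frames are so grainy in the shadows.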



Image: these are all the photos that are taken when you shoot with the Google camera. If we wait half a second, the extra long-exposure frame is added.


One of the solutions to this problem is bracketing, a technique that takes multiple photographs while varying a single parameter. In Google's case, the parameter that varies is the exposure: after shooting the HDR+ burst, Google adds a long-exposure photograph.



Google has added an extra frame: a long-exposure photograph captured after the HDR+ burst, so we will have to hold the phone still for half a second after taking the photo.



The main drawback of this solution is that, to improve the photo, we have to hold the phone still for half a second so that the extra frame can be merged with the HDR+ frames.
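In schematic terms, the capture sequence might look like the sketch below. The frame counts and exposure times are hypothetical placeholders for illustration; Google has not published the exact values used here:

```python
# Hypothetical exposure-bracketing schedule: the burst keeps every
# setting fixed except exposure time, then appends one long-exposure
# frame after the shutter press (the half-second wait described above).
BASE_EXPOSURE_MS = 8     # assumed short exposure for the burst frames
BURST_FRAMES = 9         # assumed HDR+ burst length
LONG_EXPOSURE_MS = 500   # the extra long-exposure frame

def bracketing_schedule():
    schedule = [BASE_EXPOSURE_MS] * BURST_FRAMES  # fast, underexposed burst
    schedule.append(LONG_EXPOSURE_MS)             # extra frame, needs a steady hand
    return schedule

print(bracketing_schedule())  # [8, 8, 8, 8, 8, 8, 8, 8, 8, 500]
```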













Image: long-exposure photograph on the right; HDR+ and Night Sight photographs on the left. The long-exposure photo has much less noise and fewer artifacts, so this additional information can be used to enhance the photo.


In other words, the extra information from a long-exposure frame, which has much less noise, is added to the merged HDR+ images. This frame is fused into the final photograph, resulting in less noise, fewer artifacts, and more detail.
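Conceptually, the merge can lean on the long-exposure frame in the shadows, where the short-exposure burst is noisiest, and fall back to the HDR+ result in the highlights, where the long exposure clips. A minimal sketch assuming already aligned, linear-light float frames (the function name, clipping threshold, and weighting scheme are assumptions; Google's actual merge is noise-model-driven and spatially varying):

```python
import numpy as np

def merge_with_long_exposure(hdr_frame, long_frame, exposure_ratio):
    """Illustrative merge of an HDR+ result with one long-exposure frame.
    Both inputs are aligned, linear-light arrays in [0, 1];
    exposure_ratio scales the long frame back to the burst's brightness."""
    long_scaled = long_frame / exposure_ratio  # match the burst's exposure level
    # Weight toward the long exposure wherever it is not close to clipping:
    # w_long ~ 1 in deep shadows, ~ 0 near blown-out highlights.
    w_long = np.clip((0.9 - long_frame) / 0.9, 0.0, 1.0)
    return w_long * long_scaled + (1.0 - w_long) * hdr_frame
```

The design choice this sketch illustrates is the trade-off the article describes: the long exposure contributes clean shadow detail, while the burst protects the highlights it would otherwise blow out.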



This feature is starting to reach the Pixels, so we hope it will soon be ported to other phones through the Gcam. Google continues to show that it dominates computational photography and that the Pixels, already among the best exponents of mobile photography, still have room for improvement.



More information | Google