Nick Statt and Cameron Faulkner for The Verge:
Deep Fusion, Apple’s anticipated computational photography system, is live in the latest iOS 13 public beta.
Deep Fusion is designed to use artificial intelligence and other software tricks to improve the sharpness of images by capturing frames of differing exposures and merging them on its own.
The goal is to produce the highest-quality image possible. It’s supposed to only work for medium to low light scenes, whereas Smart HDR and Night mode handle extremely bright and extremely dark scenes, respectively.
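Apple hasn’t published how Deep Fusion’s merge actually works, but the basic idea of blending differently exposed frames can be illustrated with a simple well-exposedness-weighted fusion. The sketch below is a hypothetical toy (the `fuse_exposures` function and its Gaussian weighting are assumptions for illustration), not Apple’s pipeline:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Merge differently exposed grayscale frames (pixel values in [0, 1])
    by weighting each pixel by how close it sits to mid-exposure (0.5).
    A toy illustration of multi-frame fusion -- NOT Apple's Deep Fusion."""
    frames = np.stack(frames).astype(np.float64)           # shape (n, H, W)
    # Gaussian "well-exposedness" weight: highest near 0.5, low near 0 or 1
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)          # normalize per pixel
    return (weights * frames).sum(axis=0)                  # weighted blend

# Example: an underexposed and an overexposed frame of the same 1x2 scene
dark = np.array([[0.05, 0.40]])
bright = np.array([[0.60, 0.95]])
fused = fuse_exposures([dark, bright])
```

Each output pixel leans toward whichever frame captured it closest to mid-tone, which is roughly why multi-frame merging recovers detail that a single exposure clips or buries in noise.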
MacDailyNews Take: Here are some early examples:
Very first tests of #DeepFusion on the #iPhone11 pic.twitter.com/TbdhvgJFB2
— Tyler Stalman (@stalman) October 2, 2019
iPhone Deep Fusion thread.#deepfusion #iOS13 #iPhone11ProMax pic.twitter.com/5yCgOPVtGx
— snowflex (@snowflex4) October 3, 2019
I’m not seeing the ‘astounding’ part. Anyone?
Did you actually look at the tweeted examples? This isn’t some sharpening algo.
Click on the Twitter pictures and a higher resolution version will open.
I hope it’s adjustable. Like some HDR, it’s dialed way up in these photos, a little too far imo.
Ah, my vision isn’t good enough to tell much of a difference. Senior citizen eyes. Do most users even care about this? If it works, I’m happy Apple has found a way to improve photos for users, but I personally wouldn’t find it necessary as a non-professional viewer. Both look pretty good to me.