After an on-stage preview at a September media event, a new iPhone 11 feature called Deep Fusion arrived in preview form yesterday with the first beta of iOS 13.2. Deep Fusion is Apple’s name for a new machine learning-aided computational photography trick iPhone 11 models can apply on the fly to enhance detail.
While a DSLR snaps one photo in a split second, the A13 Bionic-powered iPhone 11-series cameras will snap three, five, or seven, using tricks like beginning to shoot before the shutter button is pressed and capturing multiple exposures faster than a DSLR can manage. And while traditional photographers wrestle with questions over the integrity of composite images, Apple's AI will pick the best parts from a stack of them and merge them into one idealized “photo” in the time it takes you to blink…
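Apple has not published how Deep Fusion chooses "the best parts" of each frame, but the general idea of stack fusion can be sketched with a toy example: for every pixel, keep the value from whichever frame is locally sharpest. This is a deliberately naive stand-in (local contrast as the selection criterion, pure-Python 2D lists as frames), not Apple's pipeline.

```python
# Illustrative sketch, NOT Apple's algorithm: fuse a stack of frames by
# taking, per pixel, the value from the frame with the highest local
# contrast -- a crude proxy for "picking the best parts" of each exposure.

def local_contrast(frame, x, y):
    """Sum of absolute differences to the 4-neighbours (edges clamped)."""
    h, w = len(frame), len(frame[0])
    centre = frame[y][x]
    total = 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx = min(max(x + dx, 0), w - 1)
        ny = min(max(y + dy, 0), h - 1)
        total += abs(frame[ny][nx] - centre)
    return total

def fuse(frames):
    """Per pixel, keep the value from the locally sharpest frame."""
    h, w = len(frames[0]), len(frames[0][0])
    return [
        [max(frames, key=lambda f: local_contrast(f, x, y))[y][x]
         for x in range(w)]
        for y in range(h)
    ]

# Two toy 3x3 "exposures": one featureless (blurred), one carrying detail.
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
sharp = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]
fused = fuse([flat, sharp])
print(fused)  # prints [[0, 9, 0], [9, 0, 9], [0, 9, 0]]
```

Here the detailed frame wins at every pixel because the flat frame has zero local contrast everywhere; a real pipeline would also align frames, weight for noise and exposure, and blend rather than hard-select.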
You literally flip a single switch, confusingly labeled “Photos Capture Outside the Frame” in iOS 13.2 beta 1, to turn the feature on or off. Counterintuitively, Deep Fusion is on when that switch is off, which means you’re giving the iPhone permission to extract maximum detail from a series of near-simultaneous shots and deliver the final composite to you as your photograph.
MacDailyNews Take: Even in beta, Deep Fusion works. As Jeremy writes, most of us will want to leave it on: it helps out when it’s needed and, unlike Instagram filters or Google’s rather awful Night Sight, it never seems to hurt image quality.
Interns, TTK! Prost, everyone!
Prost from Oktoberfest! It’s great to be in Munich! 🍺🎡🥨 pic.twitter.com/vu7cciKPj9
— Tim Cook (@tim_cook) September 29, 2019