Unlike Apple’s older iPhone XR, the new iPhone SE’s single camera generates bokeh entirely via machine learning.

iPhone SE features the best single-camera system ever in an iPhone with a 12-megapixel f/1.8 aperture Wide camera, and uses the image signal processor and Neural Engine of A13 Bionic to unlock even more benefits of computational photography, including Portrait mode, all six Portrait Lighting effects and Depth Control. Using machine learning and monocular depth estimation, iPhone SE also takes stunning Portraits with the front camera. Next-generation Smart HDR comes to iPhone SE, intelligently re-lighting recognized subjects in a frame for more natural-looking images with stunning highlight and shadow details.
This iPhone goes where no iPhone has gone before with “Single Image Monocular Depth Estimation.” In English, this is the first iPhone that can generate a portrait effect using nothing but a single, 2D image…
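To make the idea concrete, here is a minimal sketch of how a predicted depth map can drive a portrait (bokeh) effect. Everything below is illustrative, not Apple's actual pipeline: a real system runs a trained neural network over the single 2D image to predict per-pixel depth, and uses a lens-shaped blur rather than the naive box blur used here. The toy depth map simply stands in for the network's output.

```python
import numpy as np

def portrait_effect(image, depth, threshold=0.5, blur_radius=3):
    """Fake-bokeh composite: keep near pixels sharp, blur far pixels.
    `depth` is a per-pixel map in [0, 1] (0 = near subject,
    1 = far background), as a monocular depth network would
    predict from a single 2D image."""
    h, w, _ = image.shape
    # Naive box blur for the background (a real pipeline would use
    # a disc-shaped kernel to mimic real lens bokeh).
    blurred = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - blur_radius), min(h, y + blur_radius + 1)
            x0, x1 = max(0, x - blur_radius), min(w, x + blur_radius + 1)
            blurred[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    mask = (depth < threshold)[..., None]  # True = subject, keep sharp
    return np.where(mask, image, blurred)

# Toy example: a bright "subject" square on a dark background.
img = np.zeros((32, 32, 3))
img[8:24, 8:24] = 1.0
depth = np.ones((32, 32))   # background is "far"
depth[8:24, 8:24] = 0.1     # subject is "near"
out = portrait_effect(img, depth)
```

The subject square comes through untouched while background pixels are averaged with their neighbors, which is the essence of what the SE's depth map enables.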
While the iPhone XR has a single camera, it still obtains depth information through hardware. It taps into the sensor’s focus pixels, which you can think of as tiny pairs of ‘eyes’ designed to help with focus. The XR uses the very slight differences between what each eye sees to generate a very rough depth map.
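A toy illustration of that idea (my own sketch, not Apple's algorithm): each focus-pixel pair sees the scene from two slightly offset viewpoints, and the horizontal disparity between the left and right views is larger for nearer objects. A crude block match over the two views recovers that disparity, which is the raw material for a rough depth map.

```python
import numpy as np

def rough_disparity(left, right, block=5, max_shift=4):
    """Estimate per-pixel horizontal disparity between two slightly
    offset views (like the left/right halves of a focus-pixel pair).
    Larger disparity = nearer object. Brute-force block matching;
    real camera hardware is far subtler than this."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    r = block // 2
    for y in range(r, h - r):
        for x in range(r + max_shift, w - r - max_shift):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = None, 0
            for d in range(max_shift + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch - cand).sum()  # sum of abs differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: a bright bar shifted 2 px between the views,
# as a near object would be; the flat background has no shift.
left = np.zeros((11, 24)); left[:, 10:14] = 1.0
right = np.zeros((11, 24)); right[:, 8:12] = 1.0   # shifted by 2 px
d = rough_disparity(left, right)
```

Pixels on the bar come back with disparity 2 and the background with 0, which is exactly the kind of coarse near/far signal the XR turns into its depth map.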
The new iPhone SE can’t use focus pixels, because its older sensor doesn’t have enough coverage. Instead, it generates depth entirely through machine learning.
MacDailyNews Take: As Sandofsky writes, if you like to take Portrait mode photos of your dog, for example, you’ll want to use Halide, because Apple’s first-party Camera app only enables the depth effect when there’s a human in the photo.
There’s tons more info and many photo samples in the full article.