Apple interested in Sony’s next-gen 3D camera sensors

“Sony Corp., the biggest maker of camera chips used in smartphones, is boosting production of next-generation 3D sensors after getting interest from customers including Apple Inc.,” Yuji Nakamura and Yuki Furukawa report for Bloomberg. “The chips will power front- and rear-facing 3D cameras of models from several smartphone makers in 2019, with Sony kicking off mass production in late summer to meet demand, according to Satoshi Yoshihara, head of Sony’s sensor division.”

“‘Cameras revolutionized phones, and based on what I’ve seen, I have the same expectation for 3D,’ said Yoshihara, who has worked for more than a decade on wider industry adoption of cameras in smartphones,” Nakamura and Furukawa report. “Sony isn’t the only maker of 3D chips, with rivals Lumentum Holdings Inc. and STMicroelectronics NV already finding uses for them, such as unlocking phones through facial recognition or measuring depth to improve focus when taking pictures at night.”

“Yoshihara said Sony’s technology differs from the ‘structured light’ approach of existing chips which have limits in terms of accuracy and distance. Sony uses a method called ‘time of flight’ that sends out invisible laser pulses and measures how long they take to bounce back, which he said creates more detailed 3D models and works at distances of five meters,” Nakamura and Furukawa report. “Yoshihara also said there will only be a need for two 3D chips on devices, for the front and back, despite a trend by smartphone makers to have three or more cameras.”
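The arithmetic behind "time of flight" is simple: depth is half the round-trip time of the pulse multiplied by the speed of light. A minimal sketch of that calculation, purely illustrative and not Sony's actual sensor pipeline:

```python
# Back-of-the-envelope time-of-flight depth calculation
# (illustrative only; not Sony's actual sensor pipeline).
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth recovered from the round-trip travel time of a light pulse."""
    return C * round_trip_s / 2.0

# A surface 5 m away (the range quoted for Sony's sensor) reflects the
# pulse back after roughly 33 nanoseconds.
round_trip = 2.0 * 5.0 / C
print(f"round trip for 5 m: {round_trip * 1e9:.1f} ns")   # ~33.4 ns
print(f"depth recovered:    {tof_depth_m(round_trip):.2f} m")
```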

Read more in the full article here.

MacDailyNews Note: In September, Yoko Kubota reported for The Wall Street Journal that Apple’s TrueDepth Camera system consists of two components, “known as Romeo and Juliet among Apple engineers and suppliers… The Romeo module features a dot projector that uses a laser to beam 30,000 infrared dots across the user’s face, essentially mapping its unique characteristics. The Juliet module includes the infrared camera that reads that pattern… The Romeo module is assembled by LG Innotek and Sharp Corp.”

9 Comments

      1. Yes, ‘@Who,’ I most certainly am aware they outsource. Apple does not manufacture anything themselves directly… but they invent/innovate a lot of their proprietary technology.

        My curiosity was what their interest in Sony’s technology would be, since they already have their own 3D TrueDepth camera technology…

        Even more interesting is that I asked a question in my post, since the article does not really explain it in detail, and I got downvotes but not one explanation. Silly… very!

        Anyway, for those of you who are curious as I was, here is a bit more clarity I found:

        Apple’s existing 3D hardware, TrueDepth, uses a single vertical-cavity surface-emitting laser (VCSEL) to project structured light — a grid of dots — onto a subject. By measuring deviations and distortions in the grid, the system is able to generate a 3D map that is, in this case, used for biometric authentication.

        Sony’s technology, on the other hand, is a time of flight (TOF) system that creates depth maps by measuring the time it takes pulses of light to travel to and from a target surface. According to Yoshihara, TOF technology is more accurate than structured light and can operate at longer distances.
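To make the structured-light side of that comparison concrete, here is a minimal triangulation sketch; the focal length, baseline, and disparities are made-up placeholders, not TrueDepth’s real geometry:

```python
# Illustrative structured-light triangulation (placeholder numbers,
# not the actual TrueDepth projector/camera geometry).

def structured_light_depth(focal_px: float, baseline_m: float,
                           disparity_px: float) -> float:
    """Classic triangulation: Z = f * b / d.

    The depth of a projected dot follows from how far it appears shifted
    (its disparity) on the sensor. Small disparities mean far objects,
    which is why accuracy degrades quickly with distance.
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical pair: 1,400 px focal length, 25 mm projector-camera baseline.
for disparity in (70.0, 35.0, 7.0):        # observed dot shift, in pixels
    depth = structured_light_depth(1400.0, 0.025, disparity)
    print(f"disparity {disparity:5.1f} px  ->  depth {depth:.2f} m")
```

With those placeholder numbers, a one-pixel measurement error barely moves the estimate at half a meter but shifts it by more than half a meter at five meters, which is the accuracy-and-distance limit Yoshihara alludes to.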

        1. I wouldn’t say they mean nothing. Liars, belligerent twats, and fact-challenged people with strong unsubstantiated opinions who never post their references tend to be voted down a lot, with good cause. My guess is that the sheer volume of fanboy posts by Yojimbo has worn thin.

          To answer the question and to remind the loyal fanboys that Apple isn’t always ahead everywhere:

          To the experienced photographer, Apple has always relied a bit too much on software and gimmicks in its iOS cameras. It’s not that the cameras were always inferior, but there has almost always been a superior camera in a competing phone for little or no higher cost. Of course if photography is your thing, you shouldn’t be relying on a cell phone to snap pics anyway.

          Examples:

          In 2012, the iPhone 5 had an 8 mp sensor and an f/2.4 lens. The Samsung Galaxy S3 used a 12% larger sensor and an f/2.6 lens. Jony just couldn’t allow his design to have a thicker body.

          In 2013, Apple decided to bump the 5S to an f/2.2 lens — but still 8 mp. Samsung rolled out a Sony IMX135 camera in its S4, 13 mp with an f/2.2 lens.

          In 2014, Apple used the same camera hardware for its “all-new” blockbuster iPhone 6, although it did add on-chip autofocus. Samsung again shamed Apple with its Galaxy S5, installing a new 16 mp backside-illuminated sensor with about 34% more sensor area than the iPhone 6.

          In 2015, Apple finally updated its camera to double digit megapixels (not that iCloud could actually handle big files well), now 12 mp. It still retained an f/2.2 lens. The Samsung S6 used the same sensor as before (with 1.5 times the pixel count) but again jumped ahead of Apple with a superior f/1.9 lens and optical stabilization.

          In 2016, Apple’s iPhone 7 added optical image stabilization and an f/1.8 lens — and it was still behind Samsung, which improved to an f/1.7 lens, dual-pixel autofocus, OIS, and a new sensor.

          In 2017, for the iPhone 7S (which Apple inexplicably renamed the 8), Apple bumped up to a bigger sensor and an f/1.8 lens. The exciting $1000 iPhone Xpensive had a different camera with a better sensor, still 12 mp, still with an f/1.8 lens. Samsung’s S8 was unchanged and still had better specs.

          In 2018, Apple’s flagship XS finally got the same quality sensor as the Sammy Galaxy, but the Galaxy S9 still has a superior lens at f/1.5. The photo quality is probably as close as Apple has ever gotten. But you have to experience the biggest Apple sticker shock ever to get it.

          Of course, Apple apologists will rationalize the fact that Apple’s camera hardware specs have ALWAYS trailed the flagship Samsung phones. And I’m not extolling Samsung; that is just to name one key competitor. Nokia and others have rolled out some impressive cameras in phones too. The Huawei P20 Pro has a significantly larger photo sensor and can be configured up to 40 mp. It also has a 20 mp monochrome sensor for good measure.

          THERE IS NO ACCEPTABLE JUSTIFICATION for continually giving customers lesser hardware while charging more money. That’s Apple’s greed. Apple managers think the Apple brand name is worth premium pricing even though Apple hasn’t kept pace on hardware for 4 years now.

          There is no software voodoo “Neural Engine” that Apple implements that cannot be done in post processing of files grabbed using any other picture taker. The real magic starts with a great lens and a great photo sensor. By objective measure, Apple is a poorer value.

          Will the average user notice or care? Of course not. That’s what Apple has been banking on all this time. “Good enough” is how Cook makes money. Well, until Apple priced itself out of the mobile phone market like they did with Macs.

  1. From the article above:
    “Yoshihara said Sony’s technology differs from the ‘structured light’ approach of existing chips which have limits in terms of accuracy and distance. Sony uses a method called ‘time of flight’ that sends out invisible laser pulses and measures how long they take to bounce back, which he said creates more detailed 3D models and works at distances of five meters,”

    Having an accurate and precise depth reading for each pixel within the first 5 meters from the camera would be good, BUT you’d still need two lenses to use parallax depth capture for objects farther away than 16 feet. (Apple applies bokeh strength based on distance to better match reality.)

    Could be very useful for near field AR usage, though.

    1. If you’re talking about 2 lenses for stereoscopic (depth) perception, then you have it backwards. Having two eyes (or lenses) a few cm apart works great up close when the distance between the eyes is large relative to the distance of what you’re looking at. But, as your focal point gets further away, the difference between the images on the 2 retinas decreases. Beyond a few meters, stereoscopic depth is negligible, and people rely on purely monocular depth cues. Parallax is one of those monocular depth cues – it requires only one eye, relying on the relative motions between the observer and distant objects.
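A quick numeric illustration of that point, using the standard disparity relation and made-up phone-scale numbers (nothing here reflects any specific handset):

```python
# How stereo disparity shrinks with distance (hypothetical numbers:
# a 12 mm lens spacing and a 1,500 px focal length).

def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Pixel disparity between the two views of a point at depth_m."""
    return focal_px * baseline_m / depth_m

for depth in (0.3, 1.0, 5.0, 10.0, 50.0):   # meters
    d = disparity_px(1500.0, 0.012, depth)
    print(f"{depth:5.1f} m  ->  disparity {d:6.2f} px")
```

Beyond a few meters the disparity collapses toward a pixel or less, so a phone-scale two-lens system adds little depth information at range and monocular cues, including motion parallax, have to do the work.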

  2. It will be interesting if Sony can pull this off in a single chip based purely on time-of-arrival of the light.

    Apple’s implementation with IR “dots” is also based upon time-of-arrival of the IR dot pattern, but is simpler to implement than what Sony seems to be proposing.

    Just remember that light travels about a meter in 3.34 nanoseconds, or about 3.34 picoseconds per millimeter (a rough arithmetic check follows this comment). To do face recognition accurately, you need to resolve sub-millimeter distances, which means sub-picosecond time scales.

    It’s just easier to control those time scales with predefined dot patterns than with generic imaging chips looking at a broadly projected laser beam and accurately measuring the time-of-arrival of that reflected IR laser light across a focal plane array whose primary purpose is imaging.

    The readout rate of that FPA will have to be on the order of a picosecond or less in order to read time-of-arrival differences accurately. I guess it could be done with a rolling shutter on the FPA, as long as the roll-out timing could be very precisely defined.

    It will be interesting to see what actually gets into true mass production (say, tens of millions of units per month).
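As a rough arithmetic check of the timing numbers above (no claim about any particular sensor design):

```python
# Round-trip timing budget for a direct time-of-flight measurement
# (arithmetic only; no specific hardware implied).
C = 299_792_458.0  # speed of light, m/s

def round_trip_time_s(depth_m: float) -> float:
    """Time for a pulse to reach a surface at depth_m and bounce back."""
    return 2.0 * depth_m / C

print(f"1 m  of depth: {round_trip_time_s(1.0) * 1e9:.2f} ns round trip")
print(f"1 mm of depth: {round_trip_time_s(0.001) * 1e12:.2f} ps round trip")
# Resolving sub-millimeter depth differences therefore means resolving
# round-trip time differences of a few picoseconds or less across the array.
```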
