In his last months, Steve Jobs met with light field camera-maker Lytro CEO

“In his final months, as the upcoming book Inside Apple by Adam Lashinsky explained, [Apple CEO Steve] Jobs made an effort to meet with Ren Ng, a Stanford graduate and the CEO of a photography company Lytro,” Mark Gurman reports for 9to5 Mac. “After it became known to Ng that Jobs wanted to meet, Ng rushed to Jobs’ Palo Alto home to discuss product design and photography. According to Inside Apple:”

The company’s CEO, Ren Ng, a brilliant computer scientist with a PhD from Stanford, immediately called Jobs, who picked up the phone and quickly said, “If you’re free this afternoon maybe we could get together.” Ng, who is thirty-two, hurried to Palo Alto, showed Jobs a demo of Lytro’s technology, discussed cameras and product design with him, and, at Jobs’s request, agreed to send him an email outlining three things he’d like Lytro to do with Apple.

Gurman reports, “Instead of working in a single-plane fashion — like most cameras today — Lytro’s technology is actually able to take in an entire light field at one time… [Lytro’s cameras can] take photos without focusing on a particular object. The images that the Lytro camera takes can be focused after the fact.”

Read more in the full article here.

MacDailyNews Take: Last year, we were excited when we first heard of Lytro’s cameras, too. However, after zooming into Lytro’s sample images, we were less excited by the rather obvious artifacts. From what we can see, the tech has a way to go.

[Thanks to MacDailyNews Readers “Fred Mertz” and “Lynn Weiler” for the heads up.]

24 Comments

  1. The news in this, for me, is that Jobs, like Napoleon before him, had an active mind that refused to give up while his body was failing him. A sad loss for the world, but at least the Universe had aligned long enough for him to punch a hole or a few for us to follow on.

  2. I miss Steve already. The light in tech is just a lot dimmer and less satisfying, like a parent who has passed on and now life feels less than before. In some ways the fun of new stuff was in fact sharing it with Steve together and the mutual delight in that. I really hope Jony Ive steps up to do the most important presentations of hardware. He’s far closer to Jobs in spirit than Tim Cook could ever be, even though I am most appreciative of Cook’s contribution to Apple.

    1. There’s a bit of Steve in all of us, that’s why we’re here.

      Many of us share an affinity with the man, the myth, and his creations. The moment we pick one up and turn it over in our hands, we smile a knowing smile, the same way he did when it was handed to him.

      I can imagine Sir Ive feeling a bit lost without a guru in Steven P Jobs.

  3. Napoleon? Interesting concept. This technology will eventually come into its own, but for any given image it won’t be as good as a perfectly taken standard image. Of course that is true of digital over analogue too, but once the information available hits a certain point the inherent deficiency becomes almost undetectable. So well worth planning for that time.

  4. There are artifacts even on these relatively low-res sample photos, but the promise of this technology is indeed impressive. This is like capturing RAW images and manipulating exposure, etc after the fact, except here it’s altering the focus too.

    Like any new tech, it’ll take some time to mature, refine, and improve. After all, it took consumer digital cameras a good 10 years to catch up to what consumer film cameras could capture in terms of quality and resolution, even though digicams offered some major advantages over film.

    1. One feature I’d like to see is manual focus using one’s finger once the picture is taken.

      The focus feature would be sensitive to levels of pressure over time. Also useful would be multiple areas of focus applied after the fact.

      Applying pressure over time brings the area under your finger into focus; depth of field would be an incidental result. Releasing pressure and then tapping would apply incremental changes to the focus, like Unsharp Mask.

      Multi-Touch gestures would yield similar focus features over separate areas of the photo. Likewise, swiping slowly back and forth would bring wider areas into focus.

      1. What you’re talking about is just software. The data is all already there within the images – you’re just talking about different user interfaces for dealing with the images.

        Lytro will get there – I’m sure that eventually you’ll be able to do everything you’re suggesting.

        1. I know what it is I’m talking about. How else could we manipulate the data once the shot was taken?

          What I’m not talking about is “different user interfaces”. I’m only talking about one, the Multi-Touch interface, and more precisely, the ‘how of it’.

    2. Kodak. Busted today, death by digital camera, a technology it owned the patents for, 20 years before others took the idea and refined and improved on it (much as Big Fruit does year after year).
      Xerox, PARC. That story is well enough known here.
      It’s odd that MDN doesn’t see this story in context. The guy’s vision was the bedrock of his success.

    1. Apple can’t bend physics! Only time will tell whether Lytro’s tech will yield the fruit of the promise.

      Perhaps SJ saw something familiar and Apple holds the other pieces to the puzzle and the collaboration will cement the third leg in place, providing a sturdy platform to support Apple and the Tech industry for another ten-years.

      1. There’s no need to bend physics.

        Lytro’s camera is a light field camera. Very basically, it has a conventional CCD chip, just like in any other digital camera, but immediately in front of that CCD is a massive array of very tiny lenses. There’s no focussing within the camera itself – the image that gets captured contains the entire light field. In contrast, a regular camera captures a single plane of a light field. Software takes the captured light field images and processes them, letting you focus after the image has been taken.

        Each tiny lens in the micro-lens array sits above several CCD pixels – I’m not sure how many exactly, but it’s maybe a 4×4 set of pixels, i.e. 16 of them, or maybe it’s 5×5. The end result of this is that to get a 1 megapixel image out after processing you’d need a 16 (or 25) megapixel sensor.

        It’s smart stuff – as well as being able to refocus an image you can shift your perspective on the image around a little (for close objects), and even potentially create stereoscopic 3D images from a single light-field. Ren Ng’s original papers on this from Stanford are available online.

        So, as I said, no need to bend physics – the physics is all understood, and has been for quite some time. It’s really a matter of dealing with engineering problems, i.e. making the micro-lenses, large CCDs, and the software.
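The two ideas in that comment – the micro-lens resolution tradeoff and software refocusing – can be sketched in a few lines. This is a minimal illustration, not Lytro’s actual pipeline: the 4×4 lenslet size is the commenter’s own guess, and the refocus function is a toy 1D version of the standard shift-and-sum approach from Ng’s papers.

```python
def effective_megapixels(sensor_mp, lenslet_px):
    """Each micro-lens sits over a lenslet_px x lenslet_px block of
    sensor pixels, so one output pixel consumes lenslet_px**2 of them."""
    return sensor_mp / (lenslet_px ** 2)

# Per the comment's guess: a 16 MP sensor with 4x4 blocks yields ~1 MP.
print(effective_megapixels(16, 4))  # 1.0
print(effective_megapixels(25, 5))  # 1.0

def refocus_1d(lightfield, alpha):
    """Toy shift-and-sum refocusing on a 1D light field.
    lightfield[u][x] is the sample seen through sub-aperture u at
    position x. Each sub-aperture view is shifted by an amount
    proportional to u (scaled by alpha, which selects the focal
    plane), then all views are averaged."""
    n_u = len(lightfield)
    n_x = len(lightfield[0])
    out = []
    for x in range(n_x):
        total = 0.0
        for u, view in enumerate(lightfield):
            shift = round(alpha * (u - n_u // 2))
            total += view[(x + shift) % n_x]
        out.append(total / n_u)
    return out

# alpha = 0 just averages the sub-aperture views in place; other
# values of alpha bring different depths into focus.
print(refocus_1d([[1, 2], [3, 4]], 0.0))  # [2.0, 3.0]
```

Because the full light field is stored, choosing a different `alpha` after capture is what “focusing after the fact” amounts to – the camera never had to pick a focal plane at exposure time.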

      2. G4Dualie,
        Obviously you have not been the subject of the Reality Distortion Field … a great deal of those who say something is not possible are the ones who cannot or do not know how to do things… Please don’t repeat what you said to anyone… it is very discouraging… in fact don’t repeat this to yourself either… I am sure you can do a lot of things and you have stopped yourself just because you think it is not possible.

  5. If these pictures were taken by an Apple-branded technology, MDN would be wetting themselves with delight. Maybe MDN can leave out their ‘Take’ and let us just enjoy the news?

  6. Anybody remember the Apple QuickTake 100?
    These teenyboppers haven’t got a CLUE about how lucky they are to have such advanced technology at such a low cost. 1400 baud modem? WHA? What’s that? Pff

  7. Picking a winner after they are already a winner is a no-brainer.

    The trick is to find “diamonds in the rough”. You have to be a forward-thinking individual to seek out new theories and concepts to change the way we interact with each other and enjoy the world around us. Sometimes great theories need to be paired with great minds and synergize (hate that cliche).

  8. “From what we can see, the tech has a way to go.”

    Lighten up a bit, MDN. The photos are from a pre-release camera. This is very cool new technology that probably has a bright future. At this point a few artifacts are hardly a big deal. As Steve used to say to critical journalists, “What have you created?”

  9. “However, after zooming into Lytro’s sample images, we were less excited by the rather obvious artifacts.”

    This tech can find a niche even with that (think Instagram), but the price has to seriously come down. Why would a hipster pay $399 when any smart phone camera gets the quality they want already?

  10. You folks miss the big point with Lytro’s cameras. It’s not just you can selectively focus after the fact, it’s that you can also get EVERYTHING in focus at the same time. Foreground, background, everything. It means you can shoot with the equivalent of a wide-open lens while getting the result as if you shot with the lens closed all the way down.

  11. Lytro may have a way to go to refine their product, but their concept *is* the future and Steve was quick to recognize that (as usual). One day all cameras will offer after-the-fact focussing.

    The main drawback with 3D imaging tech today is that we cannot simply focus our eyes on background images (as we can naturally) because the focal range of today’s cameras makes them out of focus. Lytro’s tech will be more important as 3D becomes more commonplace, allowing directors to choose the focal point with a single camera rather than needing two, and *without* the foreground or background looking fuzzy.
