Apple working on optical image stabilization to create ‘super-resolution’ images

“On May 8, 2014, the US Patent & Trademark Office published a patent application from Apple that reveals a super-resolution camera engine for iDevices and beyond,” Jack Purcher reports for Patently Apple. “In camera shootouts against Samsung’s Galaxy S5, Apple’s iSight camera fares well. Now Apple’s super-resolution engine is out to vastly improve stabilizing shots, especially action and panoramic shots for higher clarity.”

“Apple’s invention generally relates to an iDevice camera system having a super-resolution based on an optical image stabilization mechanism to vary an optical path to a sensor of the camera,” Purcher reports. “Today on an iPhone 5s we have the option of turning on HDR. In the future you’ll have the option of turning on Super-Resolution as noted below. Apple states that ‘In some embodiments, the image capturing device must be set to create a super-resolution image. This setting may be activated in response to user input, such as by presenting to the user an option to create a super-resolution image.’”

Read more in the full article here.

7 Comments

    1. Super Resolution is an accepted term in imaging. It has *absolutely* nothing to do with optical image stabilization.

      I’ve been working on Super Resolution post processing both in the private sector and for certain U.S. Government agencies for about a dozen years or so. Super Resolution has to do with post processing TO TAKE ADVANTAGE OF IMAGE MOTION to obtain higher than focal plane resolution in post processed images.

      We’ve all seen in movies and TV shows how the tech person takes a poor image and, by some magical processing method that only he or she can perform on a desktop or laptop, increases its resolution until the analysts can suddenly read a license plate number or see a person’s face, when the original barely showed that a license plate existed or rendered the face as just a few pixels. For the most part, this is pure fantasy.

      Real-world Super Resolution (and its corollary technique, Data Cumulation) can’t go that far. You can’t make up data out of thin air. However, by using pixel motion (typically from small motions of the camera itself), knowing the exact correlation between the Airy disk and the pixel layout and fill factor, accounting for departures from true diffraction-limited optics (coma effects and such), and interpolating across edge effects in the image itself, you can get to a higher-resolution, post-processed image. (A rough sketch of the frame-registration step is below.)

      The article’s usage of the term is very, very different from the standard usage in the industry.
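
      To make that a bit more concrete, here is a toy sketch of the registration step, the part that recovers the sub-pixel camera motion between two frames of a burst. It is plain NumPy, purely illustrative, and not taken from Apple’s filing or anyone’s production pipeline; the function name and the parabolic peak refinement are just one common way to do it.

      ```python
      import numpy as np

      def estimate_shift(ref, img):
          """Estimate the (dy, dx) translation of img relative to ref by phase
          correlation, refined to sub-pixel accuracy with a parabolic fit
          around the correlation peak."""
          cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
          cross /= np.abs(cross) + 1e-12            # keep only the phase
          corr = np.fft.ifft2(cross).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)

          shift = []
          for axis, p in enumerate(peak):
              n = corr.shape[axis]
              idx = list(peak)
              idx[axis] = (p - 1) % n
              c_minus = corr[tuple(idx)]
              idx[axis] = (p + 1) % n
              c_plus = corr[tuple(idx)]
              denom = c_minus + c_plus - 2.0 * corr[peak]
              delta = 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom
              signed = p if p <= n // 2 else p - n  # unwrap FFT index to a signed shift
              shift.append(signed + delta)
          return tuple(shift)

      # Sanity check with a known (integer) shift:
      rng = np.random.default_rng(0)
      ref = rng.random((128, 128))
      print(estimate_shift(ref, np.roll(ref, (3, -5), axis=(0, 1))))  # ~ (3.0, -5.0)
      ```

      Real pipelines also estimate rotation and use robust weighting, but translation alone already captures most of the small hand-shake motion being discussed here.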

    1. While what you’re saying is interesting, Apple defines the term differently. It’s not the “article” that has invented a term; it’s Apple’s own patent terminology and verbiage. Apple’s patent background spells out what the invention sets out to accomplish.

        You’re trying to set yourself up as the authority on this subject matter. Who cares what you think? It’s about what Apple has invented here, whether it fits your little scenario or not.

        1. Calm down.

          It’s not me who has defined the term, and it is not my “little scenario”. It is the community as a whole that uses sensors (not just cell phone cameras!) to produce imagery and useful data sets, whether that imagery/data is pretty pictures for eyeballs or a fusion of imagery across a broad set of wavelengths, even hyperspectral systems.

          Sure, Apple’s filings may have used the term differently, but that does not make it correct. You can claim your motion imagery is high-definition TV (HDTV) even if it is only 320×240 at a frame a second, provided your pixel angular resolution is in the nanoradian range (which is what leads to the claim that it is high definition), but that does NOT make it HDTV as that term is defined and recognized broadly by the industry.

          Besides, using both optical and digital image stabilization cannot produce the kind of imagery that true super resolution post-processing can. That has been shown time and time again. You always run into the wall of physics, the limits imposed by the optics.

          Fighting image motion (actually camera/sensor motion) gets you only part of the way there; it cannot get you further. It may be a relatively inexpensive way to get slightly better images, though it isn’t always inexpensive: the U.S. Air Force recently issued a contract for Hypertemporal Imaging that will employ techniques such as focal plane array shifting, focus adjustments, and other forms of optical stabilization to get the crispest native images possible, and that contract runs about $33 million and five years for the first sensor. Pushing the sensor’s capabilities is good, up to a point. You get into diminishing returns rather rapidly unless you’re willing to spend huge sums, and even then there’s a limit.

          A more cost-effective method, with greater possible effect and end results, is to allow minimal motion and then properly aggregate the information from multiple frames captured in rapid succession to do true Super Resolution as the industry as a whole defines it. Unfortunately, such processing requires teraflops of number crunching if you want to do it anywhere near real time. A very, very top-of-the-line desktop could do it, maybe even the most tricked-out Mac Pro (though I know of no one who has tried it), but it will be a few more generations before any cell phone has that level of processing capability.
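
          To illustrate the aggregation I mean, here is the crudest possible version, “shift and add” onto a finer grid, in plain NumPy. This is my own toy sketch, not anything from the patent; a real implementation would add proper interpolation, deconvolution against the optics’ point spread function, and outlier rejection, which is where the teraflops go.

          ```python
          import numpy as np

          def shift_and_add(frames, shifts, factor=2):
              """Naive multi-frame aggregation: place every low-res sample onto a
              grid `factor` times finer, compensating each frame's sub-pixel
              (dy, dx) shift relative to the first frame, then average."""
              h, w = frames[0].shape
              acc = np.zeros((h * factor, w * factor))
              hits = np.zeros_like(acc)
              ys, xs = np.mgrid[0:h, 0:w]                       # low-res pixel coordinates
              for frame, (dy, dx) in zip(frames, shifts):
                  # map each sample to its scene position on the fine grid
                  fy = np.rint((ys - dy) * factor).astype(int)
                  fx = np.rint((xs - dx) * factor).astype(int)
                  ok = (fy >= 0) & (fy < h * factor) & (fx >= 0) & (fx < w * factor)
                  np.add.at(acc, (fy[ok], fx[ok]), frame[ok])   # accumulate samples
                  np.add.at(hits, (fy[ok], fx[ok]), 1.0)        # count contributions
              return np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)
          ```

          Even this naive version, run over a burst of 8-megapixel frames, is a great deal of arithmetic per photo before any deconvolution, which is why I don’t expect a phone to do the full job in real time for a few more hardware generations.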

  1. I love taking pictures with Apple’s panoramic feature. The fact that they’re out to improve this engine to stabilize and compensate for shaky hands while taking a photo is great. If you’re a camera tech head, you could always read the patent for technical fine points.

