Apple autonomous vehicle photographed with new sensor array

New photos of Apple’s autonomous vehicle (via The Last Driver License Holder)

The Last Driver License Holder has posted photos of a new sensor configuration atop an Apple autonomous vehicle.

Mario Herger:

After the first video of the new sensor configuration of Apple’s autonomous cars surfaced, we got new pictures from a closer perspective. Our readers Jens & Katharina Lehmann spotted the vehicle in Cupertino, California.

9to5Mac’s Ben Lovejoy speculates that this might be a newer test setup of an all-camera system that excludes LiDAR. Apple’s self-driving cars have so far used a mix of sensors, which the company has said includes LiDAR, radar, and cameras.

Tesla, however, is convinced that all-camera setups are the future of self-driving cars, arguing that cameras are less prone to interference from rain and snow and can also read signs, giving the system a more detailed understanding of its environment. Tesla CEO Elon Musk has even said that any manufacturer relying on LiDAR for self-driving cars is doomed.

MacDailyNews Take: There could easily still be LiDAR in that array. In fact, it makes more sense to include every sensor type in your test package; that way you can simply turn off LiDAR when you want to test an all-camera system.

Check out all of the new photos here.

Comments

  1. Totally agree with MDN in this regard: just because Musk says something does not the future make. The fact that the military is committed to the technology, whether in conjunction with other sensors or as the prime sensor in some cases, tells me it would be foolish, simplistic, and certainly premature to make such black-and-white assumptions. That said, it will be fascinating to see how the development of all this technology eventually pans out, though I suspect not any time soon.

    I may be off target here, but this looks to me like an early experiment into how this technology can conceptually be brought together and integrated into the actual structure of a vehicle without damaging its effectiveness, thus allowing the designers a far freer hand in its eventual form and shape. It’s not too difficult to imagine how what is now an independent module strapped on top might eventually blend into the vehicle’s structure far more seamlessly.

  2. I have worked with lidar sensors and studied them for years – pulse and continuous wave, linear mode and Geiger mode, scanning and flash. They typically operate in the near-infrared part of the spectrum, at wavelengths of roughly 1 to 2 microns. As a result, small particles such as dust that are on the order of those wavelengths will strongly interfere with lidar sensors. That is physics. In addition, water has a strong absorption band just under 1 micron. Geiger-mode sensors are sensitive to single photons and can be used in photon-starved environments, using massive oversampling to make measurements where linear-mode sensors fail (a toy illustration of this follows after these comments).

    When navigating, you want every source of position, velocity, and linear/angular acceleration data available – GPS, lidar, optical, radar, inertial measurement units, gyros, etc. The navigation filter will de-weight outliers to arrive at a best estimate in real time. In poor weather, the lidar sensors will be de-weighted in favor of other measurements. But you have to keep in mind that optical cameras do not directly measure distance, velocity, or angles. Those are all estimated using different software techniques. Optical flow, for example, can be used to estimate velocity by tracking the shifting of features from frame to frame. Distance can be inferred using scaling or known references, or by stereoscopic techniques (both ideas are sketched after these comments).

    When they are in a suitable environment, however, lidar sensors are superior in terms of accuracy and precision. A radar sensor (GHz frequencies) might have a velocity precision of 15 to 20 cm/s, while a lidar sensor (THz) can be better than 1 cm/s. In addition, lidar beams have very low divergence compared to radio frequencies (a quick footprint calculation below makes the contrast concrete).

    If I were testing a navigation system, I would include many types of sensors, then post-process that data in different ways to learn how a given sensor suite performs under different conditions. Even if Apple or Google decides to use only optical cameras, collecting lidar data alongside the optical camera data will aid in refining the camera algorithms (also sketched below).

    No sensor type is perfect. Radar suffers from multi-path returns. Lidar can suffer from dust/weather and surface type (specular versus scattering). Optical cameras can be blinded or run into contrast issues, and do not directly measure distance or velocity. Nothing is perfect.
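
To illustrate the Geiger-mode point in the comment above (single-photon sensitivity plus massive oversampling), here is a toy Python simulation. It is not a model of any real sensor: the shot count, detection probabilities, bin width, and range gate are all invented, and real Geiger-mode processing must also contend with detector dead time and first-photon bias. The idea it shows is simply that accumulating many sparse, noisy single-photon detections into a timing histogram recovers the true range even when most shots detect nothing.

```python
import random

random.seed(0)

C = 3.0e8            # speed of light, m/s
TRUE_RANGE_M = 150.0 # assumed target range
BIN_S = 2e-9         # 2 ns timing bins -> ~0.3 m range bins
N_BINS = 600         # range gate out to ~180 m
N_SHOTS = 20000      # "massive oversampling": many laser shots per measurement
P_SIGNAL = 0.02      # photon-starved: a signal photon is detected on ~2% of shots
P_NOISE = 0.05       # chance of a background/dark count somewhere in the gate

signal_bin = round(2 * TRUE_RANGE_M / C / BIN_S)  # round-trip time converted to a bin index
hist = [0] * N_BINS

for _ in range(N_SHOTS):
    if random.random() < P_SIGNAL:
        hist[signal_bin] += 1                # true return lands in the same bin every shot
    if random.random() < P_NOISE:
        hist[random.randrange(N_BINS)] += 1  # spurious count lands in a random bin

best_bin = max(range(N_BINS), key=lambda i: hist[i])
est_range_m = best_bin * BIN_S * C / 2
print(f"estimated range: {est_range_m:.1f} m (true range: {TRUE_RANGE_M:.1f} m)")
```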
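The navigation-filter and optical-flow points can be sketched the same way. The snippet below is a minimal illustration, not a real navigation filter (which would typically be a Kalman filter with proper innovation gating): every reading, noise figure, and camera parameter is made up. It shows inverse-variance fusion that de-weights a measurement far from the median (here, a radar return corrupted by multipath) and the pinhole-camera relation that turns an optical-flow pixel shift into a lateral velocity estimate.

```python
from statistics import median

def fuse(measurements, sigmas, gate=3.0):
    """Inverse-variance fusion that de-weights measurements far from the median.

    A real navigation filter is far more involved; this only illustrates the
    de-weighting idea described in the comment above.
    """
    ref = median(measurements)
    weights = []
    for m, s in zip(measurements, sigmas):
        w = 1.0 / s ** 2
        if abs(m - ref) > gate * s:   # crude outlier test against the median
            w *= 1e-4                 # de-weight rather than discard outright
        weights.append(w)
    return sum(w * m for w, m in zip(weights, measurements)) / sum(weights)

def flow_velocity(pixel_shift, depth_m, focal_px, dt_s):
    """Pinhole-camera optical flow: lateral velocity ~ shift * depth / (f * dt)."""
    return pixel_shift * depth_m / (focal_px * dt_s)

# Invented readings of the same lateral velocity (m/s): lidar is precise,
# the radar return is a multipath ghost, and the camera value is indirect.
lidar_v  = 4.02
radar_v  = 6.50
camera_v = flow_velocity(pixel_shift=8, depth_m=20.0, focal_px=1200, dt_s=0.033)

fused = fuse([lidar_v, radar_v, camera_v], sigmas=[0.02, 0.18, 0.40])
print(f"camera-derived velocity: {camera_v:.2f} m/s, fused estimate: {fused:.2f} m/s")
```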
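On beam divergence, a one-line geometric relation makes the lidar-versus-radar contrast concrete: the footprint diameter at range R is roughly 2·R·tan(θ/2). The divergence values below are only illustrative orders of magnitude (around a milliradian for a lidar beam, several degrees for an automotive radar beam), not specifications for any particular sensor.

```python
import math

def footprint_m(range_m, divergence_rad):
    """Approximate beam footprint diameter at range (ignores transmit aperture size)."""
    return 2.0 * range_m * math.tan(divergence_rad / 2.0)

R = 100.0                           # range in meters
lidar_div = 1e-3                    # ~1 mrad: illustrative lidar beam divergence
radar_div = math.radians(10.0)      # ~10 degrees: illustrative radar beamwidth

print(f"lidar footprint at {R:.0f} m: {footprint_m(R, lidar_div):.2f} m")
print(f"radar footprint at {R:.0f} m: {footprint_m(R, radar_div):.1f} m")
```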
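Finally, the point about collecting lidar data even for a cameras-only product boils down to using the lidar ranges as a reference when post-processing the camera estimates. The numbers below are invented, and a real pipeline would first time-align and spatially associate the two sensors, but the error metrics (bias, RMSE, absolute relative error) are the kind of thing you would track while refining the camera algorithms.

```python
# Sketch of using lidar ranges as a reference to evaluate camera-derived depth.
lidar_depth_m  = [12.1, 25.4, 48.0, 73.5]   # treated as the reference measurement
camera_depth_m = [12.8, 24.1, 51.2, 80.9]   # e.g. stereo or learned monocular depth

errors  = [c - l for c, l in zip(camera_depth_m, lidar_depth_m)]
abs_rel = [abs(e) / l for e, l in zip(errors, lidar_depth_m)]

bias = sum(errors) / len(errors)
rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
mean_abs_rel = sum(abs_rel) / len(abs_rel)

print(f"bias: {bias:+.2f} m, RMSE: {rmse:.2f} m, mean abs-rel error: {mean_abs_rel:.1%}")
```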
