Researchers: Apple’s delayed backdoor surveillance scheme doesn’t even work

A team of researchers at Imperial College London has presented a simple method to evade detection by image content scanning mechanisms, such as Apple’s currently delayed backdoor surveillance scheme to scan users’ photo libraries, ostensibly for Child Sexual Abuse Material (CSAM), but which could easily be bastardized to scan for political images, words, etc. In other words, the scheme wouldn’t even work if Apple foolishly implements the travesty.


Bill Toulas for BleepingComputer:

A team of researchers at Imperial College London have presented a simple method to evade detection by image content scanning mechanisms, such as Apple’s CSAM detection system.

CSAM (Child Sexual Abuse Material) detection was a controversial proposal announced by Apple earlier this year. The proposal was put on hold in September, following strong backlash from customers, advocacy groups, and researchers.

The main idea is to compare image hashes (IDs) of pictures in users’ photo libraries against a database of hashes provided by NCMEC and other child safety organizations.

If a match is found, Apple’s reviewers will look into the content and alert the authorities to the distribution of child abuse and pornography, all without compromising the privacy of people who share legal images (non-matches).
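To make the mechanism concrete, here is a minimal sketch of how perceptual-hash matching against a database of known hashes could work. It uses the open-source imagehash library’s pHash purely as a stand-in for Apple’s NeuralHash, and the example database entry, file name, and Hamming-distance threshold are illustrative assumptions, not values from Apple’s proposal.

```python
# Illustrative sketch only -- NOT Apple's actual NeuralHash pipeline.
# The open-source imagehash pHash stands in for NeuralHash; the database
# contents and the distance threshold are made-up placeholders.
from PIL import Image
import imagehash

# Hypothetical database of known-bad perceptual hashes (64-bit, hex-encoded).
KNOWN_HASHES = {imagehash.hex_to_hash(h) for h in ["8f373714acfcf4d0"]}

# Allow a small Hamming distance so that re-encoding or resizing an image
# doesn't defeat detection. The value 10 is purely illustrative.
MATCH_THRESHOLD = 10

def is_flagged(path: str) -> bool:
    """Return True if the image's perceptual hash is near any known hash."""
    h = imagehash.phash(Image.open(path))
    # imagehash overloads `-` to return the Hamming distance between hashes.
    return any(h - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

print(is_flagged("photo.jpg"))  # hypothetical file name
```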

This theoretically sounds like a good system to prevent the dissemination of harmful material, but practically, it inevitably opens a “Pandora’s box” for mass surveillance.

However, the question that researchers at Imperial College London asked is: would such a detection system even work reliably in the first place?

The research presented at the recent USENIX Security Symposium by the British researchers shows that neither Apple’s CSAM detection system nor any system of this type would effectively detect illegal material.

As the researchers explain, it’s possible to fool content detection algorithms 99.9% of the time without visually changing the images.
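As a rough illustration of the evasion idea (not the researchers’ actual attack, which uses carefully optimized, visually imperceptible perturbations), the sketch below nudges an image’s pixels with small amounts of noise until its pHash drifts a chosen Hamming distance away from the original, so it would no longer match a database entry. The library, target distance, and noise step are assumptions for illustration.

```python
# Crude illustration of detection avoidance: add small pixel perturbations
# until the perceptual hash moves away from the original. The paper uses
# optimized adversarial perturbations that stay visually imperceptible;
# plain random noise here only demonstrates the principle.
import numpy as np
from PIL import Image
import imagehash

def evade(path: str, target_distance: int = 16, step: float = 2.0) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original_hash = imagehash.phash(original)
    pixels = np.array(original, dtype=np.float32)

    perturbed = original
    # Keep adding low-amplitude noise until the hash has drifted far enough.
    while imagehash.phash(perturbed) - original_hash < target_distance:
        pixels = pixels + np.random.uniform(-step, step, pixels.shape)
        perturbed = Image.fromarray(np.clip(pixels, 0, 255).astype(np.uint8))
    return perturbed

evade("photo.jpg").save("photo_modified.png")  # hypothetical file names
```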

MacDailyNews Take: Apple must abandon, not just delay, its ill-conceived, fundamentally flawed backdoor surveillance scheme.

“Our results shed strong doubt on the robustness to adversarial black-box attacks of perceptual hashing-based client-side scanning as currently proposed. The detection thresholds necessary to make the attack harder are likely to be very large, probably requiring more than one billion images to be wrongly flagged daily, raising strong privacy concerns.” — Department of Computing and Data Science Institute, Imperial College London

Just as treating every citizen as a criminal is bad governance, treating every customer as a criminal is bad business, Apple.
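For a sense of scale behind the researchers’ threshold point above, a quick back-of-envelope calculation: if resisting the attack required flagging more than one billion images per day, then at an assumed (illustrative, not from the paper) volume of four billion photos scanned daily, that would correspond to a false-positive rate on the order of 25%.

```python
# Back-of-envelope arithmetic behind the "more than one billion wrongly
# flagged daily" point. The daily photo volume is an illustrative
# assumption, not a figure from the paper.
daily_photos = 4_000_000_000      # assumed number of photos scanned per day
wrongly_flagged = 1_000_000_000   # the researchers' "more than one billion" figure

implied_fp_rate = wrongly_flagged / daily_photos
print(f"Implied false-positive rate: {implied_fp_rate:.0%}")  # -> 25%
```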

Read more: Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning


6 Comments

  1. Why was this crime made the focus? It would seem technically feasible to scan for any number of crimes. Maybe because, for most people, “for the children” is hard to NOT permit, while the drag-net could easily come across other/any incriminating info?

    Beyond the basic “backdoor” concern, something doesn’t smell good.
