MacDailyNews

Open letter against Apple’s privacy-invasive content scanning tech gathers thousands of signatures

An open letter dated August 6, 2021, opposing Apple’s privacy-invasive content scanning technology has already gathered thousands of signatures. In it, security and privacy experts, cryptographers, researchers, professors, legal experts, Apple customers, and others decry Apple’s planned move to undermine user privacy and end-to-end encryption.

Apple Privacy Letter:

Dear Apple,

On August 5th, 2021, Apple Inc. announced new technological measures meant to apply across virtually all of its devices under the umbrella of “Expanded Protections for Children”. While child exploitation is a serious problem, and while efforts to combat it are almost unquestionably well-intentioned, Apple’s proposal introduces a backdoor that threatens to undermine fundamental privacy protections for all users of Apple products.

Apple’s proposed technology works by continuously monitoring photos saved or shared on the user’s iPhone, iPad, or Mac. One system scans photos uploaded to iCloud storage and alerts the authorities if a certain number of objectionable photos is detected. Another notifies a child’s parents if iMessage is used to send or receive photos that a machine learning algorithm considers to contain nudity.

Because both checks are performed on the user’s device, they have the potential to bypass any end-to-end encryption that would otherwise safeguard the user’s privacy.
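To make the mechanism concrete, the following is a minimal, hypothetical sketch of the kind of on-device, threshold-based fingerprint matching described above. The fingerprint function, database contents, and threshold are invented placeholders; this is not a reproduction of Apple’s actual NeuralHash or private set intersection design.

```python
# Hypothetical sketch of threshold-based fingerprint matching.
# All names, values, and the hash function are placeholders for illustration;
# this is not Apple's NeuralHash or private set intersection protocol.
import hashlib

KNOWN_FINGERPRINTS = {"fingerprint-of-known-image-1", "fingerprint-of-known-image-2"}
REPORT_THRESHOLD = 30  # hypothetical number of matches before an account is flagged

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; a real system would use a
    # similarity-preserving hash rather than a cryptographic one.
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(photos: list[bytes]) -> int:
    # Count how many of the user's photos match the known-fingerprint database.
    return sum(1 for photo in photos if fingerprint(photo) in KNOWN_FINGERPRINTS)

def should_flag_account(photos: list[bytes]) -> bool:
    # The account is flagged only once the match count reaches the threshold.
    return count_matches(photos) >= REPORT_THRESHOLD
```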

Immediately after Apple’s announcement, experts around the world sounded the alarm on how Apple’s proposed measures could turn every iPhone into a device that continuously scans all photos and messages passing through it in order to report any objectionable content to law enforcement. This sets a precedent in which our personal devices become a radical new tool for invasive surveillance, with little oversight to prevent eventual abuse and unreasonable expansion of the scope of surveillance.

The Electronic Frontier Foundation has said that “Apple is opening the door to broader abuses”:

It’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses […] That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.

The Center for Democracy and Technology has said that it is “deeply concerned that Apple’s changes in fact create new risks to children and all users, and mark a significant departure from long-held privacy and security protocols”:

“Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not only in the U.S., but around the world,” says Greg Nojeim, Co-Director of CDT’s Security & Surveillance Project. “Apple should abandon these changes and restore its users’ faith in the security and integrity of their data on Apple devices and services.”

…The Electronic Frontier Foundation’s statement on the issue supports the above concern with additional examples on how Apple’s proposed technology could lead to global abuse:

Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of “misinformation” in 24 hours may apply to messaging services. And many other countries—often those with authoritarian governments—have passed similar laws. Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

Furthermore, the Electronic Frontier Foundation insists that it’s already seen this mission creep in action: “one of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of ‘terrorist’ content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society.”

Fundamental design flaws in Apple’s proposed approach have also been pointed out by experts, who have claimed that “Apple can trivially use different media fingerprinting datasets for each user. For one user it could be child abuse, for another it could be a much broader category”, thereby enabling selective content tracking for targeted users.
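As a purely hypothetical illustration of that flaw, nothing in a client-side scanning protocol of this shape prevents the server from handing different users different fingerprint databases. Every name and value below is invented for illustration and does not describe any real deployment.

```python
# Hypothetical illustration of per-user fingerprint-database selection.
# All database contents and identifiers are invented for illustration only.
HASH_DATABASES = {
    "default": {"fingerprint-of-known-abuse-image"},
    # A broader database that could, in principle, be served to targeted users:
    "targeted": {"fingerprint-of-protest-flyer", "fingerprint-of-satirical-image"},
}

def database_for_user(user_id: str, targeted_users: set[str]) -> set[str]:
    # The client has no way to verify which database it received,
    # so selective targeting like this would be invisible to the user.
    key = "targeted" if user_id in targeted_users else "default"
    return HASH_DATABASES[key]
```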

The type of technology that Apple is proposing for its child protection measures depends on an expandable infrastructure that can’t be monitored or technically limited. Experts have repeatedly warned that the problem isn’t just privacy, but also the lack of accountability, the absence of technical barriers to expansion, and the lack of analysis or even acknowledgement of the potential for errors and false positives…

Our Request

We, the undersigned, ask that:

1. Apple Inc.’s deployment of its proposed content monitoring technology be halted immediately.
2. Apple Inc. issue a statement reaffirming its commitment to end-to-end encryption and to user privacy.

Apple’s current path threatens to undermine decades of work by technologists, academics, and policy advocates toward making strong privacy-preserving measures the norm across the majority of consumer electronic devices and use cases. We ask that Apple reconsider its technology rollout, lest it undo that important work.

MacDailyNews Take: Read the full open letter, which we have signed, here.
