Apple Has Betrayed Its Privacy Legacy – and Will Undermine End-to-end Encryption Everywhere

Posted on Sep 3, 2021 by Glyn Moody

Apple is a company that has always made much of its commitment to privacy, and has succeeded in turning it into a unique selling point of its products. That proud history made a recent announcement all the more shocking. Nobody could deny that Apple’s Expanded Protections for Children are motivated by the best intentions, and are tackling a terrible problem. But as commentator after commentator pointed out, in this case, in its eagerness to come up with new ways of protecting children from harmful content and online predators, Apple seems to have missed the bigger picture.

There are three elements to Apple’s new initiative. One – “updates to Siri and Search provide parents and children expanded information and help if they encounter unsafe situations” – is unproblematic. The other two are not. Here’s what Apple intends to do to ensure “communication safety in Messages”:

The Messages app will add new tools to warn children and their parents when receiving or sending sexually explicit photos.

When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo. As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.

As over 90 organizations wrote in an open letter to Apple’s CEO, the problem here is that this assumes a benevolent relationship between parent and child. Clearly, that’s not always the case, and where it isn’t, Apple’s new alert system could enhance the abusive power of adults over a child. “LGBTQ+ youths on family accounts with unsympathetic parents are particularly at risk”, the letter pointed out. But it is the third element that has rightly caused the most concern in privacy and security circles. It tries to address what is undoubtedly one of the worst problems online today: the spread of Child Sexual Abuse Material (CSAM). Apple wants to detect CSAM images stored in iCloud Photos. Here’s how:

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC [National Center for Missing and Exploited Children] and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

To its credit, Apple has built in a number of features designed to ensure that the company learns nothing about the images that apparently match until a certain threshold number of matches is reached. Once that threshold is crossed, Apple manually reviews each match and, if the matches are confirmed, disables the user’s account and sends a report to NCMEC. The company has provided a more detailed technical summary of the process.
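
To make the mechanism more concrete, the following Swift sketch shows the general shape of threshold-based matching: hash each photo on the device, compare it against a set of known hashes, and surface nothing until a match threshold is crossed. It is purely illustrative and deliberately simplified. Apple’s actual design uses a perceptual NeuralHash, a blinded on-device database, private set intersection and threshold secret sharing, none of which is reproduced here; the type names, the SHA-256 stand-in and the threshold value below are all hypothetical.

```swift
import Foundation
import CryptoKit  // Apple-platform framework; used here only as a stand-in hash

// Illustrative only: a real system would use a perceptual hash and
// cryptographic matching so the device never sees the raw database.
struct MatchDetector {
    // Hex digests of known images (stand-in for the blinded hash database).
    let knownHashes: Set<String>
    // Number of matches required before anything is surfaced for review.
    let reportingThreshold: Int
    // Running count of matches seen so far on this account.
    var matchCount = 0

    // Hash the image bytes. SHA-256 only matches exact copies, unlike the
    // perceptual hash Apple describes, which tolerates resizing and re-encoding.
    func digest(of imageData: Data) -> String {
        SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
    }

    // Check one image before upload. Returns true only once the threshold
    // is reached, i.e. the point at which human review would begin.
    mutating func process(imageData: Data) -> Bool {
        if knownHashes.contains(digest(of: imageData)) {
            matchCount += 1
        }
        return matchCount >= reportingThreshold
    }
}

// Hypothetical usage: scan a batch of photos queued for cloud upload.
var detector = MatchDetector(knownHashes: ["<hex digest of a known image>"],
                             reportingThreshold: 30)
let queuedPhotos: [Data] = []   // photos about to be stored in iCloud Photos
for photo in queuedPhotos {
    if detector.process(imageData: photo) {
        print("Threshold reached: matches would now be surfaced for review.")
        break
    }
}
```

The key design point the sketch preserves is that a single match reveals nothing on its own; only an accumulation of matches above the threshold triggers any human review.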

However, an FAQ reveals some serious flaws in the approach. First, the new technology only applies to photos stored in iCloud Photos. This means people can avoid scrutiny quite easily: “When iCloud Photos is deactivated, no images are processed. CSAM detection is applied only as part of the process for storing images in iCloud Photos.” Another problem is the following: “The system uses image hashes that are based on images acquired and validated to be CSAM by at least two child safety organizations. It is not designed for images that contain child nudity that are not known CSAM images.” This could have the terrible effect of encouraging pedophiles to create new abusive images rather than sharing old ones. Apple’s approach to fighting CSAM might therefore actually make things worse.

The biggest problem concerns how Apple has implemented its idea. The approach discussed above involves client-side scanning of images to detect CSAM. This will happen whether or not the phone’s user wishes it. In other words, for the first time, Apple is explicitly taking control of people’s phones, which are therefore no longer truly “theirs”.

This is an incredibly shortsighted move, for reasons that top experts like Edward Snowden, Bruce Schneier, and the EFF have been quick to point out. One of the most important battles being fought in the world of privacy is the attempt by governments around the world to gain access to end-to-end encrypted communications using backdoors. As they have been repeatedly told, this is not possible without undermining the security of encryption. But there is a different way to gain access to the contents of encrypted communications – to spy on them before they are encrypted. That is precisely what Apple proposes with its new plans.

By presenting client-side surveillance in a positive light, Apple has just given every government permission to demand that the same approach be applied beyond CSAM. Apple tries to address that point in its FAQ: “Apple would refuse such demands and our system has been designed to prevent that from happening.” This is an extraordinarily naive statement. How will a company – even a trillion-dollar company – be able to refuse such a demand from repressive authoritarian states like China, or intrusive democratic ones like the UK? It will clearly be a matter of comply or stop selling products in that country.

Even if Apple backtracks on its plans, the first signs of which have already appeared, it may be too late. Politicians who understand little about the finer points of technology will simply say to every online service operating in their country: “See? Apple has found a way to scan for illegal material while preserving encrypted communications – just do the same for us voluntarily, or we will pass a law making it compulsory.”

Featured image by Sally V.