UK Leads the Charge Against End-to-End Encryption, Calls on Tech Companies to “Nerd Harder”

Posted on Sep 21, 2021 by Glyn Moody

As Privacy News Online has reported, governments around the world have kept up a constant assault on end-to-end encryption for years. One of the leaders of this attempt to demonize a technology that is crucial for preserving privacy is the UK. Wired reported back in April that the UK is trying to stop Facebook from adding end-to-end encryption to all its messaging platforms.

More generally, the UK is working on what was originally called the “Online Harms Bill”, now rebranded as the “Online Safety Bill”, which aims to regulate online content and speech, and to force digital platforms to police their users more stringently. A key element of this new Bill is strengthening child safety online. That’s obviously a laudable goal, but one of the main ideas for achieving it is weakening end-to-end encryption.

In this, the UK government has been aided by the National Society for the Prevention of Cruelty to Children (NSPCC), a charity that has been “looking out for children for 130 years”, according to its own description. Unfortunately, it shares the view of many governments that end-to-end encryption is an obstacle to achieving that goal. Recently, the NSPCC published not one but two documents that implicitly seek to undermine support for strong and effective end-to-end encryption. In its discussion paper on the topic, the NSPCC calls for “a balanced settlement that reflects the full complexity of the issues”:

Our polling data demonstrates there is strong public support for a balanced settlement that reflects the full complexity of the issues, and that doesn’t reduce the contours of decision-making to an unhelpful zero-sum game.

The public want tech firms to introduce end-to-end encryption in a way that maximises user privacy and the safety of vulnerable users. Indeed, if platforms can demonstrate that children’s safety will be protected, there is significant support for end-to-end encryption to go ahead – a clear incentive for tech firms to invest the necessary engineering resource to ensure child abuse threat responses can continue to work in end-to-end encrypted products.

That sounds reasonable at first glance. But closer inspection reveals that it is asking for the impossible: end-to-end encryption that somehow allows the companies, and thus the authorities, to inspect all messages sent using it. Similarly, it is impossible to “demonstrate that children’s safety will be protected” when doing so means undermining end-to-end encryption, a technology that itself protects children. There is even a call from the NSPCC to “nerd harder” – or, as it puts it, “for tech firms to invest the necessary engineering resource”. But as readers of this blog well know, it doesn’t work like that. Either you have genuine end-to-end encryption, in which case you can’t, by definition, inspect what is encrypted, or you don’t.
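
To make the point concrete, here is a minimal sketch of that definition, using the third-party PyNaCl library (an assumption for illustration; the keys and message are invented, and this is not any messenger’s actual protocol). The server relaying the ciphertext never holds a private key, so no amount of extra “engineering resource” would let it read the message:

```python
# Minimal sketch of the end-to-end property, assuming the PyNaCl library.
# Illustrative only -- not any real messenger's protocol.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# A relaying server sees only `ciphertext`. Holding no private key, it has
# nothing to decrypt with; the design gives it no way in.

# Only Bob, holding his private key, can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

Any scheme that does let the platform read the message – a master key, or scanning on the device before encryption – is, by definition, no longer end-to-end.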

Sadly, it’s not just the NSPCC that is fighting against end-to-end encryption in the UK. The National Crime Agency, the UK equivalent of the FBI, claimed recently that Facebook’s plans to bring in end-to-end encryption across its messaging platforms “could prevent the detection of up to 20m child abuse images every year”. It’s interesting to note the use of two misleading phrases there: “could” is not the same as “will”, and “up to 20 million” includes much smaller numbers. This is simply scaremongering.

Similarly, the head of London’s police force recently wrote: “The current focus on encryption by many big tech companies is only serving to make our job to identify and stop [sophisticated terrorist cells] even harder, if not impossible in some cases.” Another call to “nerd harder” came from the UK’s Home Secretary (interior minister), Priti Patel. She even offered organizations money in the form of a new Safety Tech Challenge Fund:

The Safety Tech Challenge Fund will drive the development of innovative technologies that help keep children safe in end-to-end encrypted environments, without compromising user privacy.

Through the Fund, the UK Government is awarding five organisations up to £85,000 [around $118,000] each to prototype and evaluate innovative ways in which sexually explicit images or videos of children can be detected and addressed within end-to-end encrypted environments, while ensuring user privacy is respected.

Given the high stakes of dealing with a problem that has been recognized for years, it seems unlikely that $118,000 is going to be enough to produce a workable breakthrough solution. Pretty much the only advanced technique that seems to fit the bill even vaguely is homomorphic encryption, which would allow image analysis without decrypting message streams. But as the NSPCC admits in its report:

Homomorphic encryption technology is one possible means of protecting data privacy while analysing its content, however there is debate about its ability to detect [child sexual abuse material], how robust its privacy measures are and the extent to which it slows down communications.
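
To see what is being described there, a toy sketch helps. The example below uses the third-party phe (python-paillier) library – an assumption chosen purely for illustration. Paillier is only additively homomorphic: a server can add encrypted numbers it cannot read, which is a very long way from running image analysis inside an encrypted message stream:

```python
# Toy homomorphic encryption demo, assuming the `phe` (python-paillier)
# library. Paillier supports addition of ciphertexts and multiplication
# by plaintext scalars -- nothing like full image analysis.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Encrypt two values; a server would see only the ciphertexts.
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# The server can compute on the ciphertexts without ever decrypting them...
enc_sum = enc_a + enc_b      # encrypted 17 + 25
enc_scaled = enc_a * 3       # encrypted 17 * 3

# ...but only the private-key holder can learn the results.
assert private_key.decrypt(enc_sum) == 42
assert private_key.decrypt(enc_scaled) == 51
```

Even this toy computation is vastly more expensive than working on plaintext, which is precisely the “slows down communications” concern the NSPCC’s report acknowledges.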

A post that appeared on the ProPublica site, provoking great interest and much outrage, provides some useful context. Initially, the story seemed to claim that WhatsApp broke its end-to-end encryption to allow its moderators to assess whether messages are abusive or illegal. In fact, WhatsApp’s 1,000 contract workers only have access to messages that users have reported to the company as possibly problematic. In other words, the end-to-end encryption is untouched, but those legitimately able to read the messages can report them if they seem against the law or the service’s terms and conditions.
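
The mechanics are worth spelling out, because they show the encryption itself is never touched. Here is a minimal sketch, again assuming PyNaCl, with report_to_moderation as a purely hypothetical stand-in – WhatsApp has not published its reporting pipeline at this level of detail:

```python
# Recipient-side reporting: the end-to-end channel stays sealed; the
# recipient, who can legitimately decrypt, chooses to forward the plaintext.
# `report_to_moderation` is a hypothetical helper, not a real WhatsApp API.
from nacl.public import PrivateKey, Box

def report_to_moderation(plaintext: bytes, reporter: str) -> None:
    # Stand-in for an HTTPS call to the platform's abuse-review queue.
    print(f"report from {reporter}: {plaintext!r}")

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice -> Bob over the end-to-end channel; any relay sees only ciphertext.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"abusive message")

# Bob decrypts as the legitimate endpoint...
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)

# ...and only then, by his own choice, passes the plaintext on for review.
report_to_moderation(plaintext, reporter="bob")
```

Nothing about the channel changes: the report originates from an endpoint that could already read the message.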

Although neither a perfect nor complete solution, that does at least reconcile strong encryption with the ability to identify many of those who send abusive or illegal messages. Governments and organizations like the NSPCC would do well to spend more time building on this kind of approach, rather than demanding impossible technical solutions that will probably never come.

Featured image by Gordon Leggett.