UK Government Halts Plans to Break End-to-End Encryption, but Privacy Risks Remain
A new law is about to be passed in the UK that will have a serious negative knock-on effect around the world. It’s part of a wider move by governments to force internet companies to break end-to-end encryption and undermine global privacy. We first reported on the UK’s Online Safety Bill two years ago. Since then, it’s been winding its way through the parliamentary process. It has grown into an unwieldy rag-bag of ideas that aims to tackle the vague concept of “harmful content.” As the Open Rights Group explains:
Under new age verification rules in the UK’s massive Online Safety Bill, all internet platforms with UK users will have to stop minors from accessing ‘harmful’ content, as defined by the UK Parliament.
This will affect adult websites, but also user-to-user services – basically any site, platform, or app that allows user-generated content that could be accessed by young people.
To prevent minors from accessing ‘harmful’ content, sites will have to verify the age of visitors, either by asking for government-issued documents or using biometric data, such as face scans, to estimate their age.
While the bill’s intentions may be good, online age verification systems are a danger to privacy. CNIL, the French data protection body, found in a study last year that the main types of age verification systems “are circumventable and intrusive,” and called for the implementation of more privacy-friendly models. Wikipedia has said that it will not age-verify its users, and will withdraw from the UK if necessary.
The worst element of the proposed Online Safety Act is a requirement for online platforms to use “accredited technology” to scan messages for Child Sexual Abuse Material (CSAM). Fighting CSAM is a noble and necessary cause, but the UK government’s approach will require mass surveillance of everyone using messaging services – a terrible move for privacy. It will also require basic end-to-end encryption to be broken in some way.
The UK government has insisted that message scanning can take place without harming end-to-end encryption, but has never said how that is possible. In effect, it has been left to the online companies to come up with a solution to the problem. The fact that the UK government would like to be able to reconcile privacy with the ability to scan encrypted messages doesn’t mean it’s possible. An open letter from 70 organizations, cyber security experts, and elected officials explained that message scanning “creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic.” In June of this year, the UK Home Secretary – that is, the Interior Minister – wrote an article in which she claimed:
The Safety Tech Challenge Fund is a UK Government-funded challenge programme that supported the development of proof-of-concept tools capable of detecting child sexual abuse material within end-to-end encrypted environments.
Through this, the Government, tech experts and wider industry partners have demonstrated that it would be technically feasible to detect child sexual abuse in environments which utilise encryption while still strongly maintaining user privacy.
The Safety Tech Challenge Fund details how its five projects aim to scan messages without breaking end-to-end encryption. In its press release, it claims that they include:
- a plug-in to be integrated within encrypted social platforms
- a suite of live video-moderation AI technologies that can run on any smart device
- software to detect CSAM before it reaches the end-to-end encryption
- an on-device nudity AI detection technology
- AI-based child sexual abuse detection technology on smartphones
In other words, its approach generally relies on client-side scanning of material before it is encrypted. Two years ago, this blog wrote about a proposal from Apple to introduce this kind of client-side scanning of material, and the various ways in which it would threaten privacy. The following month, some of the world’s top security experts explained why client-side scanning was not a magic solution to the problem of scanning end-to-end encrypted material. Widespread criticism from experts and privacy defenders led to Apple pausing its rollout. It has now gone further and canceled the entire project. Erik Neuenschwander, Apple’s director of user privacy and child safety, is quoted by Wired as follows:
Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit. It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.
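To make concrete why client-side scanning sits uneasily with end-to-end encryption, here is a minimal, purely illustrative sketch of the general technique: the user’s device fingerprints content and checks it against a blocklist of known prohibited material *before* encryption ever happens. All names here are hypothetical, and SHA-256 stands in for the perceptual hashes (such as PhotoDNA or NeuralHash) that real systems use to match visually similar images – this is a sketch of the idea, not any actual deployed system.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known prohibited content.
# Real systems use perceptual hashes that also match near-duplicates;
# plain SHA-256 is used here only to keep the sketch self-contained.
BLOCKLIST = {
    hashlib.sha256(b"known-prohibited-image-bytes").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    """Compute the fingerprint compared against the blocklist."""
    return hashlib.sha256(data).hexdigest()

def client_side_scan(content: bytes) -> bool:
    """Return True if the content may be sent (no blocklist match).

    The key privacy point: this check runs on the user's device while
    the content is still plaintext, before any encryption -- which is
    why critics argue it undermines end-to-end encryption even though
    the encryption algorithm itself is untouched.
    """
    return fingerprint(content) not in BLOCKLIST

def send(content: bytes) -> bool:
    if not client_side_scan(content):
        return False  # flagged: blocked or reported before encryption
    # In a real messenger, encrypt(content) and transmission follow here.
    return True
```

The sketch also shows why experts describe this as a new surveillance capability rather than a narrow safety tool: whoever controls `BLOCKLIST` controls what every device inspects, and nothing in the mechanism limits it to one category of content.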
Apple’s support for client-side scanning was crucial to the UK government’s case that solutions could be found to scan end-to-end encrypted messages. Apple’s latest statement confirms that there are simply no safe ways to do this without endangering privacy, and the company is now explicitly against trying to scan encrypted messages. An evaluation by security experts of the five Safety Tech Challenge Fund projects found “the solutions under consideration will compromise privacy at large and have no built-in safeguards to stop repurposing of such technologies for monitoring any personal communications.”
With an imminent requirement to break end-to-end encryption or leave the UK, the president of the encrypted messaging service Signal, Meredith Whittaker, told the BBC that the organization “would absolutely, 100% walk.” Wired reported that the head of Meta’s WhatsApp messaging service, Will Cathcart, said similarly that WhatsApp “would not comply with any efforts to undermine the company’s encryption.”
In the face of these statements, along with the repeated warnings by the world’s top security experts that there is no technology that allows end-to-end encrypted messages to be scanned without harming privacy, the UK government has made a last-minute concession. According to the Financial Times, “The UK government will concede it will not use controversial powers in the online safety bill to scan messaging apps for harmful content until it is ‘technically feasible’ to do so.” This finally recognizes the fact that there is no feasible way to scan messages without undermining people’s right to privacy. It does leave open the possibility that the UK government could demand that online services scan encrypted messaging at a later date. The Guardian quoted a UK government spokesperson as saying: “Our position on this matter has not changed.”
There is another problem with the UK government keeping this power in reserve. Back in 2020, the governments of the Five Eyes spy network – US, UK, Canada, Australia, and New Zealand – together with India and Japan called (again) for backdoors to be built into end-to-end encryption so that they could carry out surveillance of encrypted communications. Similarly, the EU’s Chat Control proposal aims to force online services to search all private chats, messages, and emails automatically for suspicious content. The UK’s Online Safety Act could set a dangerous precedent by giving the authorities a new legal power to demand surveillance of encrypted services. Even if that power cannot be used immediately, it may encourage other nations to bring in similar laws, creating a serious threat for online privacy in the future.
Featured image by Don S. Montgomery, USN (Ret.).