The EU Is Tackling a Serious Problem with the Wrong Approach: Real-Time Mass Surveillance
One of the most serious new threats to online privacy is currently working its way through the European Union’s legislative system. No decent person can be against a regulation “laying down rules to prevent and combat child sexual abuse” in principle, but the way the proposed legislation aims to do this is problematic.
The risk is that, if passed, the new law’s damaging ideas will be taken up around the world, following the EU’s example. A useful page on what has become known as “Chat Control”, put together by Patrick Breyer, a Member of the European Parliament for the German and European Pirate Party, lays out the law’s far-reaching impact:
The EU Commission proposes to oblige providers to search all private chats, messages, and emails automatically for suspicious content – generally and indiscriminately.
…
Other aspects of the proposal include ineffective network blocking, screening of personal cloud storage including private photos, mandatory age verification resulting in the end of anonymous communication, appstore censorship and excluding minors from the digital world.
Both Experts and the Public Have Doubts
Every one of the approaches suggested by Chat Control is a bad idea, and the public knows it. In 2021, the EU conducted a public consultation on the proposed law. As a detailed analysis revealed, only 34 of the 414 comments received were in favor, and 24 of those came from NGOs involved in child protection.
A larger sample size was used in a public opinion poll carried out on behalf of the Greens/EFA in the European Parliament. It found that 72% of people across the EU were against the legislation, while only 18% were in favor. German citizens in particular are concerned: an online petition has gathered over 160,000 signatures against the proposal.
Even the European Commission’s own experts have doubts. An internal document warns that Chat Control might not survive a legal challenge at the EU’s highest court, the Court of Justice of the European Union, which has previously prohibited “general monitoring” of the kind the new law clearly requires.
Similarly, the EU’s main data protection bodies, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), say that the proposal presents “serious risks for fundamental rights”. In particular, “The EDPB and EDPS consider that the Proposal, in its current form, may present more risks to individuals, and, by extension, to society at large, than to the criminals pursued for CSAM [child sexual abuse material]”.
Is Chat Control an Effective Solution?
Another analysis of the Chat Control proposal points out that criminals will be able to bypass the proposed measures by encrypting illegal material and uploading it to a standard filesharing service. Since the files are encrypted, they cannot be scanned, and criminals can simply share links to them in order to distribute the illegal material undetectably.
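To see why that bypass works, here is a minimal sketch of a hash-based blocklist scanner, the standard technique for spotting known material (real deployments use perceptual hashes such as PhotoDNA rather than SHA-256, but the failure mode is identical). Every blocklist entry, file, and key below is a hypothetical placeholder:

```python
import hashlib

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Hash blocklist of known illegal files, as a scanning service might hold it.
KNOWN_HASHES = {hashlib.sha256(b"known illegal file contents").hexdigest()}

def scan(blob: bytes) -> bool:
    """Flag a blob if it matches a known hash."""
    return hashlib.sha256(blob).hexdigest() in KNOWN_HASHES

plaintext = b"known illegal file contents"
print(scan(plaintext))   # True: the unencrypted file is caught

# The uploader encrypts first and shares the key only with link recipients.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(plaintext)
print(scan(ciphertext))  # False: ciphertext looks like random data
```

No scanner, however sophisticated, can recover a match from well-encrypted data: the ciphertext simply carries no usable signal about the plaintext.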
The same analysis notes that there are two kinds of detection involved:
- One is of known CSAM. The preferred method for doing so is client-side scanning (sketched after this list), whose problems were explored in a previous PIA blog post last year.
- Of even more concern is the idea of trying to detect hitherto unknown CSAM using artificial intelligence.
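For readers unfamiliar with the first of these, here is a deliberately simplified, hypothetical sketch of where client-side scanning sits in a messaging app. No real messenger exposes an API like this, and the names are invented; the point is only the architecture, namely that the check runs on the user’s own device, before end-to-end encryption is applied:

```python
import hashlib

# Opaque hash database pushed to every device by the provider (illustrative value).
BLOCKLIST = {hashlib.sha256(b"example known image bytes").hexdigest()}

def report_to_authorities(blob: bytes) -> None:
    """Stub for the mandated reporting channel."""
    print("match reported before the message ever left the device")

def send_attachment(attachment: bytes, encrypt_and_send) -> None:
    # The scan happens on the endpoint, upstream of encryption, so
    # end-to-end encryption no longer shields the content from inspection.
    if hashlib.sha256(attachment).hexdigest() in BLOCKLIST:
        report_to_authorities(attachment)
        return
    encrypt_and_send(attachment)

send_attachment(b"holiday photo", lambda blob: print("sent securely"))
```

The crucial detail is placement: because every item is inspected on the device itself, the privacy guarantee of end-to-end encryption is hollowed out for flagged and unflagged users alike.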
The European Commission has been making claims about the ability of AI to detect CSAM, but as Felix Reda, a former MEP for the Pirate Party, discovered, these were based entirely on industry claims, and not independently verified. Even then, the figures quoted – 90% accuracy and 99% precision – would see billions of false positives every day, because the volume of messaging traffic is so high.
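A back-of-the-envelope calculation shows why even impressive-sounding rates collapse at this scale. Every number below is an assumption chosen purely for illustration, not a measured figure:

```python
# Illustrative base-rate arithmetic; all inputs are assumptions.
messages_per_day = 10_000_000_000  # assumed EU-wide daily message volume
false_positive_rate = 0.10         # reading "90% accuracy" as 10% of innocent items flagged
true_positive_rate = 0.90          # the claimed detection rate
prevalence = 0.000001              # assumed fraction of traffic that is actually illegal

false_positives = messages_per_day * false_positive_rate
true_positives = messages_per_day * prevalence * true_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"{false_positives:,.0f} innocent messages flagged per day")  # 1,000,000,000
print(f"real-world precision: {precision:.4%}")                     # about 0.0009%
```

Because almost all traffic is innocent, even a small error rate means the flags are overwhelmingly false. This is the base-rate problem, and it cannot be fixed by tuning the model.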
The Swiss Federal Police have already observed this in practice: 90% of the material reported to them by AI was not illegal. If AI detection were made mandatory for all services across the EU, that number is likely to rise.
Unforeseen Consequences of Removing End-to-End Encryption
Being able to apply these flawed techniques assumes that end-to-end encryption is broken in some way, presumably by mandating the use of backdoors. As many previous posts have pointed out, this is an incredibly short-sighted idea that will simply reduce privacy and security for everyone.
Chat Control will also harm the very children it is supposed to be helping by denying them safe communication channels, and by discouraging them from reporting abuse when it occurs. As the UK child protection charity NSPCC notes, the “vast majority of children who experience sexual abuse were abused by someone they knew”. That makes providing private and secure communications for victims even more vital.
What Can Be Done?
That statistic also hints at why the Chat Control approach is wrong, and what to do about it. An analysis by the leading security expert Ross Anderson, a professor at Cambridge University, emphasizes that “we should view the child safety debate from the perspective of children at risk of violence, rather than from that of the security and intelligence agencies and the firms that sell surveillance software.” Instead of the naive techno-solutionism of the EU’s Chat Control proposal, Anderson suggests:
Effective policing, particularly of crimes embedded in wicked social problems, must be locally led and involve multiple stakeholders; the idea of using ‘artificial intelligence’ to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy. The debate must also be conducted within the boundary conditions set by human rights and privacy law, and to be pragmatic must also consider reasonable police priorities.
In other words, the EU’s proposed legislation is not only harmful – seeking to undermine, as it does, both online privacy and security – but deeply misguided. The supposed incompatibility between the need to protect children and the desire to preserve fundamental rights can be resolved not with technology, in the form of fashionable AI pixie dust, but by tackling the deeper roots of the problem, which are social.