Arsenic in the water of democracy: UK police, politicians and privacy activists clash over facial recognition deployments

Posted on Aug 14, 2019 by Glyn Moody

Last week’s post looked at the increasing number of moves to rein in, or even ban, the use of facial recognition technologies in the US. Another country at the forefront of exploring the legal, social and ethical issues raised by the technology is the UK. Privacy News Online discussed problems with the UK police’s use of facial recognition technologies last year, and there have been a number of developments since then. For example, in May this year, an office worker from Wales began a crowdfunded legal action against South Wales Police, claiming that being subjected to facial recognition scanning by the force was an unlawful violation of his privacy. He said that he was “distressed by the apparent use of the technology” and also argued that it breached data protection and equality laws.

According to a report in the Guardian, the lawyer representing the police said: “It’s difficult to say that an automated immediate computerised comparison is more intrusive than police officers sitting down looking at albums of photographs.” However, that overlooks the key difference between automated facial recognition and the traditional kind. With the former, there are almost no limits on how many people can be scanned and matched, while the latter is clearly limited by the human ability to look at and – crucially – remember faces. With automated systems, it will soon be possible to scan everyone in a crowd, no matter how big, and to store data about what they do indefinitely. That is clearly far more intrusive, and it changes the nature of a “public” place, which becomes one where you can be monitored without restriction. As human rights lawyer Martha Spurrier put it recently:

Once that is happening at scale, what you have is a mechanism of social control. When people lose faith that they can be in public space in that free way, you have put arsenic in the water of democracy and that’s not easy to come back from.

Support for caution when rolling out facial recognition systems for the police comes from the London Policing Ethics Panel. Back in May it produced its report on “live facial recognition” (LFR) for the Metropolitan Police Service (MPS), London’s police force. One of its main conclusions is as follows:

Marginal benefit would not be sufficient to justify LFR’s adoption in the face of the unease that it engenders in some, and hence the potential damage to policing by consent. Clearly there is no benefit to be gained from adopting an ineffective technology, and we assume the MPS would not wish to do so.

Precisely this view was echoed in a blog post by the UK’s Information Commissioner, who is responsible for overseeing the enforcement of privacy law in the country. Elizabeth Denham wrote: “I believe that there needs to be demonstrable evidence that the technology is necessary, proportionate and effective considering the invasiveness of LFR.” But current facial recognition technologies are far from effective: as this blog noted last December, a report last year showed that the Metropolitan Police’s facial recognition matches were 98% wrong. A new study from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, suggests that the situation hasn’t improved much since then: “Across the six trials that were evaluated, the LFR technology made 42 matches – in only eight of those matches can the report authors say with absolute confidence the technology got it right.” A short back-of-the-envelope calculation of what those figures imply appears after the next quotation. Moreover, the report raised an issue that is potentially even more problematic for police use of the technology:

It is highly possible that police deployment of LFR technology may be held unlawful if challenged before the [UK] courts. This is because there is no explicit legal authorisation for the use of LFR in domestic law, and the researchers argue that the implicit legal authorisation claimed by the Metropolitan Police – coupled with the absence of publicly available, clear online guidance – is unlikely to satisfy the ‘in accordance with the law’ requirement established by human rights law, if challenged in court.
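
To make the Essex figures concrete, here is the promised back-of-the-envelope calculation, written as a minimal Python sketch. It is purely illustrative: the variable names are my own, and the only inputs are the two numbers quoted above (42 matches, of which eight could be verified with absolute confidence).

    # Figures quoted from the University of Essex evaluation of six MPS trials
    total_matches = 42      # alerts generated by the live facial recognition system
    verified_correct = 8    # matches the researchers could confirm were right

    # Share of alerts known to be correct, and the remainder left unconfirmed or wrong
    verified_precision = verified_correct / total_matches
    unconfirmed_or_wrong = 1 - verified_precision

    print(f"Verified precision: {verified_precision:.0%}")      # about 19%
    print(f"Unconfirmed or wrong: {unconfirmed_or_wrong:.0%}")  # about 81%

In other words, only about one alert in five could be shown to be right, which is consistent with the suggestion that little has improved since the earlier trials in which 98% of matches were wrong.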

Despite these technical and legal issues, some senior police officers are keen to deploy facial recognition around the clock. As Sky News reported:

Facial recognition in China is “absolutely correct” and “spot on”, the head of the Metropolitan Police Federation has said, calling for it to be deployed in London “on a 24-hour basis”.

The UK’s previous Home Secretary, the minister in charge of internal affairs, including policing, also supported its use. But an influential UK parliamentary committee on science and technology has issued a report that comes down firmly against rolling it out now:

We reiterate our recommendation from our 2018 Report that automatic facial recognition should not be deployed until concerns over the technology’s effectiveness and potential bias have been fully resolved. We call on the Government to issue a moratorium on the current use of facial recognition technology and no further trials should take place until a legislative framework has been introduced and guidance on trial protocols, and an oversight and evaluation system, has been established.

As the above indicates, something of a struggle is currently underway in the UK. On the one hand, there are the authorities who wish to deploy facial recognition technologies as widely as possible. On the other, there are politicians, experts and human rights campaigners who call for a ban until the technology has improved enough to be trustworthy. Still lacking in the UK, as elsewhere, is a robust legal framework for handling the many privacy issues that facial recognition raises. As the technology races ahead, the pressure will be on lawmakers around the world to come up with one quickly. From this point of view, the next few years promise to be extremely important for facial recognition – and exciting.

Featured image by PC Matt Hone.