Police forces around the world continue to deploy facial recognition systems, despite a lack of evidence of their utility
Last month, this blog wrote about governments around the world continuing to trial facial recognition systems, and the growing concerns this is provoking. There’s one area in particular where facial recognition systems are deployed: law enforcement. That’s hardly a surprise, since the legal system can only operate if it identifies alleged criminals who need to be arrested, tried and punished. But it also emphasizes how facial recognition is seen by many as a natural tool for controlling populations. The most famous example is China, where facial recognition systems are widely deployed, especially in the Turkic-speaking region of Xinjiang. The situation is getting so bad that even Chinese citizens are becoming concerned.
But there is no room for complacency in the West. Police forces there are also rapidly embracing facial recognition as a tool. What’s particularly troubling is that law enforcement sometimes seeks to conceal that fact. For example, drawing on previously undisclosed emails, OneZero discovered the existence of a massive, secretive network of US police departments working together to share facial recognition tools. Moreover, the police consciously tried to keep these cross-department partnerships secret from the public. In the EU, police forces are somewhat more open about their activities. AlgorithmWatch looked at 25 member states and found that at least ten have a police force that uses facial recognition. Eight plan to introduce it in the coming years, while just two countries, Spain and Belgium, do not allow it yet.
The UK is in the vanguard of deployments. Privacy News Online wrote about the use of real-time facial recognition two and a half years ago. A 2018 report showed that 98% of the Metropolitan Police’s facial recognition matches were false positives, but the UK police are still keen to expand the technology’s use. Recently, real-time facial recognition surveillance was used at a soccer game in Wales. As the Assistant Chief Constable for South Wales explained:
We are deploying Automated Facial Recognition to prevent offences by identifying individuals who are wanted for questioning for football-related offences or who have been convicted of football-related criminality and are now subject to football banning orders that preclude them from attending.
Despite police attempts to frame this as a targeted operation that only involved those “convicted of football-related crimes”, the operation necessarily involved scanning every face in the crowd – including those of families and children. A privacy campaigner is currently appealing against a UK court decision that the police use of facial recognition in this way was not unlawful.
Germany has been experimenting with facial recognition systems for years, although not quite so enthusiastically as the UK police. The country’s Interior Minister, who is responsible for law enforcement, announced that the authorities would be using automatic facial recognition at 134 railway stations and 14 airports. This will be one of the largest roll-outs of the technology anywhere outside China. France too wants to join the facial recognition club, albeit on a smaller scale.
This widespread police enthusiasm for facial recognition comes against a background of continuing doubts about its usefulness. For example, a major seven-year trial of the technology in San Diego has just come to an end without any clear benefits emerging. Rather remarkably, the city’s law enforcement agencies didn’t track the results. A police spokesperson told Fast Company that they were unaware of any arrests or prosecutions tied to the use of facial recognition technology. We can be sure that if there had been any major successes they would have been trumpeted loudly. The fact that the city didn’t even track results suggests that they were unspectacular, to say the least. An equal unwillingness to track results in Oregon points to a comparable failure there. The best a police officer could come up with was a case where the police department used a screenshot from security video footage to search for someone accused of stealing from a local hardware store. Hardly a ringing endorsement.
Against those poor gains, there are serious concerns. One is that access to facial recognition data may be sold on the black market, as is already happening in Moscow. Another, subtler issue is that police forces are partially fabricating facial identity data in order to obtain matches, according to work from the Center on Privacy & Technology at Georgetown Law:
These techniques amount to the fabrication of facial identity points: at best an attempt to create information that isn’t there in the first place and at worst the introduction of evidence that matches someone other than the person being searched for. During a face recognition search on an edited photo, the algorithm doesn’t distinguish between the parts of the face that were in the original evidence – the probe photo – and the parts that were either computer generated or added in by a detective, often from photos of different people unrelated to the crime. This means that the original photo could represent 60 percent of a suspect’s face, and yet the algorithm could return a possible match assigned a 95 percent confidence rating, suggesting a high probability of a match to the detective running the search.
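The mechanism the quote describes can be illustrated with a toy sketch. This is not any real face recognition algorithm: the feature vectors, the matching function and all the numbers are purely illustrative. The point it demonstrates is the one in the quote: a matcher that only sees the final probe vector cannot distinguish genuine evidence points from points a detective edited in, so fabricated points can push the reported confidence far above what the real evidence supports.

```python
import random

def match_confidence(probe, gallery):
    """Toy matcher: fraction of feature points that agree between two
    face templates. Crucially, it sees only the final probe vector and
    has no notion of which points came from real evidence and which
    were fabricated during editing."""
    agree = sum(1 for p, g in zip(probe, gallery) if p == g)
    return agree / len(probe)

random.seed(0)

# Hypothetical 100-point gallery template for a person in the database.
suspect = [random.randint(0, 9) for _ in range(100)]

# Honest probe: only 60 of 100 points are recoverable from the actual
# evidence photo; the rest are unknown (modelled here as random noise).
honest_probe = suspect[:60] + [random.randint(0, 9) for _ in range(40)]

# Edited probe: a detective "repairs" the missing 40 points using the
# gallery photo itself (or a lookalike), so those points now agree by
# construction rather than by evidence.
edited_probe = suspect[:60] + suspect[60:]

print(match_confidence(honest_probe, suspect))  # near 0.6 plus chance agreement
print(match_confidence(edited_probe, suspect))  # 1.0, despite 60% real evidence
```

The edited probe scores as a perfect match even though only 60 percent of it comes from the evidence photo, mirroring the report’s example of a 95 percent confidence rating built on a 60 percent face.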
Despite the lack of evidence that facial recognition is worth the risks, the roll-outs continue. Police forces naturally like the idea of using new tools that might help them catch more criminals, while technology companies want to sell into this potentially lucrative market. Resisting both those powerful forces is going to be hard, but campaigns to get facial recognition banned are nonetheless gaining steam. One hopeful sign is that the European Commission is already discussing a five-year ban on using the technology in public places.
Featured image by dave conner.