Police forces around the world continue to push for routine – and real-time – facial recognition capabilities
Facial recognition crops up on this blog more than most technologies. That’s in part because the underlying AI is advancing rapidly, boosting the ability of low-cost systems to match faces against those held in databases. The Clearview saga is a good example: an unheard-of startup has put together what is claimed to be an extremely powerful system. More details are emerging about Clearview’s client list, thanks to a leak, reported here by BuzzFeed:
The United States’ main immigration enforcement agency, the Department of Justice, retailers including Best Buy and Macy’s, and a sovereign wealth fund in the United Arab Emirates are among the thousands of government entities and private businesses around the world listed as clients
And it seems that investors in the company, as well as its clients and friends, had access to the tool while it was still being developed, and sometimes used it for questionable purposes:
Those with Clearview logins used facial recognition at parties, on dates and at business gatherings, giving demonstrations of its power for fun or using it to identify people whose names they didn’t know or couldn’t recall.
If nothing else, that’s a chilling reminder of how those who already wield power in society could use this technology to bolster their position, secretly spying on rivals and potential threats. The more powerful the technology, the more attractive it becomes to such people, and the greater the harm it can cause.
According to one report, Clearview aims to build on its database – and perhaps on its current notoriety – by developing surveillance cameras and augmented reality glasses that draw on its software. At least Californians can now use the state’s Consumer Privacy Act to see what information Clearview holds on them. The EU’s GDPR should give EU citizens the same right of access, although no one has yet reported successfully exercising it. In another interesting development, the state of Vermont has sued Clearview, accusing the company of illegally collecting photos of the state’s residents in order to build a “dystopian surveillance database”.
Unfortunately, it’s not just Clearview we need to worry about. OneZero has a report about a company called Wolfcom, which is developing live facial recognition for US police body cameras. It’s a significant move, because Axon, the largest maker of body cameras in the US, stated last year that it wouldn’t add facial recognition to its systems. Wolfcom’s announcement may push Axon to reverse that position, making facial recognition the norm for police body cameras in the US. That’s a classic ratchet effect, whereby a groundbreaking move by one player can pull the whole industry after it. In the context of the erosion of privacy, that’s bad news.
In the EU, a leaked report showed that police forces there are keen to create a pan-European network of facial recognition databases. This might not match the scale of Clearview’s alleged 3 billion faces, but it would nonetheless be a large and powerful system. If created, it would also be likely to encourage police forces across the EU to use it routinely, since it would be a major official EU resource. That, in turn, would normalize the use of facial recognition by police in Europe.
A few weeks ago, there was a tantalizing report that the EU would go the other way and ban facial recognition in public spaces. But when the region’s AI strategy was published, it omitted any such commitment. More generally, the EU paper “On Artificial Intelligence – A European approach to excellence and trust” avoids discussing facial recognition in any depth, even though it is arguably the most problematic application of AI today. Taken together, these omissions suggest that the European Commission is planning to allow facial recognition deployment in public spaces. There will presumably be “safeguards”, but once permission has been granted, abuse becomes more likely.
The UK government has no qualms about openly supporting facial recognition projects, as Privacy International noted. For example:
FACER2VM is a five-year research programme aimed at making face recognition ubiquitous by 2020.
The project will develop unconstrained face recognition technology for a broad spectrum of applications. The approach adopted will endeavour to devise novel machine learning solutions, which combine the technique of deep learning with sophisticated prior information conveyed by 3D face models.
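Neither FACER2VM nor Clearview publishes its code, but the deep-learning half of that description works in broadly the same way across modern systems: a network maps each face image to a fixed-length embedding vector, and recognition reduces to a nearest-neighbour search over a database of stored embeddings. Here is a minimal sketch of that matching step; the embedding function, the watchlist and the 0.6 similarity threshold are all illustrative assumptions, not details of any real deployment.

```python
import numpy as np

# Illustrative sketch of embedding-based face matching (not any vendor's code).
rng = np.random.default_rng(0)

def embed(face_image):
    """Stand-in for a real face-embedding network; returns a unit-length
    512-dimensional vector. Here we fake it with random data."""
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

# A "watchlist" of known identities, each stored as an embedding.
watchlist = {name: embed(None) for name in ["person_a", "person_b", "person_c"]}

def identify(face_image, threshold=0.6):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    probe = embed(face_image)
    # Cosine similarity; embeddings are unit length, so a dot product suffices.
    scores = {name: float(probe @ ref) for name, ref in watchlist.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

match, score = identify(None)
print(match, f"{score:.2f}")  # With random vectors this almost never clears 0.6.
```

The sketch also shows why such systems scale so easily: adding another camera feed or another million faces changes nothing in the logic, only the size of the search.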
London’s Metropolitan Police is already rolling out real-time facial recognition, despite the dismal results of its latest test deployments. At one site, the system scanned 8,600 faces and checked them against a watchlist of 7,292 people. It flagged eight alerts: seven were false positives, and only one was a true positive.
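To put those figures in perspective, here is the back-of-the-envelope arithmetic, using only the numbers reported above:

```python
# Accuracy arithmetic for the Met trial reported above:
# 8,600 faces scanned against a 7,292-person watchlist, producing 8 alerts,
# of which 7 were false positives and 1 was a true positive.
faces_scanned = 8_600
alerts = 8
true_positives = 1
false_positives = alerts - true_positives

precision = true_positives / alerts       # fraction of alerts that were correct
print(f"Precision: {precision:.1%}")      # 12.5% -- 7 in 8 alerts were wrong
print(f"False alerts per 1,000 faces scanned: "
      f"{1000 * false_positives / faces_scanned:.2f}")   # ~0.81
```

In other words, seven out of every eight alerts pointed officers at the wrong person.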
As a reminder of the potential pitfalls, here’s a story from Argentina, where a man was wrongly detained in July 2019 for a robbery committed three years earlier in a city about 400 miles away. It turned out the robber was someone with the same name, but the facial recognition system had flagged the wrong man, and the police acted on that information. Instead of admitting their mistake immediately, the police held him for six days before releasing him. This highlights the danger of using “intelligent” systems incorporating facial recognition: they can encourage law enforcement to act in a less-than-intelligent fashion, simply because of an unjustified faith in the abilities of AI-based systems. We can expect to see more of these problems as police forces around the world embrace real-time facial recognition as an apparently “easy” way to boost their ability to spot people of interest.
Featured image by vero66braud.