Top suppliers halt sales of facial recognition technology to the police – how much of a win is that really?

Posted on Jun 25, 2020 by Glyn Moody

As this blog has noted, police forces around the world have been pushing for the routine deployment of real-time facial recognition technologies. It’s an attractive option for politicians: it offers the hope that more criminals will be arrested and convicted, at a price that is constantly falling. As a result, it’s hard to win the argument that privacy concerns are so great that the technology should not be rolled out.

Against that background, it’s rather remarkable that in the last couple of weeks, major suppliers of facial recognition technology to police forces have voluntarily halted sales. First to move was IBM: its CEO, Arvind Krishna, sent a letter to the US Congress on racial justice reform, in which he wrote the following:

IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.

Two days later, Amazon announced that it was implementing a one-year moratorium on the police use of Rekognition, its facial recognition technology. The next day, Microsoft joined the club, reported here by TechCrunch: “we’ve decided that we will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”

Back in 2018, Microsoft had already called for facial recognition technologies to be regulated by the government. In its own announcement, Amazon too noted that “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology”. Similarly, one of the first companies to limit the availability of its technology on ethical grounds, Google, wrote back in 2018: “Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions.”

As for IBM’s apparently stronger statement, an article in Fast Company points out that it isn’t quite what it seems. Deb Raji noted in a tweet that IBM had already removed face analysis and detection capabilities back in September last year. However, she speculates that the company was perhaps still selling systems privately, and that the latest announcement is about stopping that too. In any case, IBM’s statement has loopholes. For example, it talks about “general purpose” software, leaving open the possibility that it might still provide specialized systems. It also says it is fine with uses that are “consistent with our values and Principles of Trust and Transparency”, which is quite vague.

An important question is whether IBM, Amazon, Microsoft or Google will sell facial recognition systems to US intelligence agencies. Their use of these technologies is arguably just as problematic for privacy as police use: it’s just that the latter is more visible, because it is generally less covert. That’s why calls for bans on facial recognition technologies need to be much broader. Here, for example, is what Amnesty International wants to see:

We are proud to stand with organizations like the Algorithmic Justice League, the ACLU, the Electronic Frontier Foundation and others who have highlighted the dangers of [facial recognition technology]. Amnesty calls for a ban on the use, development, production, sale and export of facial recognition technology for mass surveillance purposes by the police and other state agencies.

The recent moves by US tech giants at least offer the hope that such a ban might be considered. Unfortunately, even if it comes into operation, it will have only a limited, local impact. The problem is that facial recognition technology is seen as a key strategic sector in China. Chinese AI startups have flourished thanks to generous surveillance contracts from the Chinese government. As the Los Angeles Times reported at the end of last year, these companies are now doing good business overseas:

Chinese facial recognition companies have taken the lead in serving this growing international market not least because of the advantage they have over peers in other countries: a massive domestic market and an authoritarian system where privacy often takes a back seat. According to IHS Markit, China accounted for nearly half of the global facial recognition business in 2018.

Chinese companies offering facial recognition systems are well-placed to export their systems around the world, and are able to count on Chinese government support during trade deal negotiations. Authoritarian regimes in particular are more than happy to adopt the latest Chinese technology, which is extremely effective for surveillance, albeit deeply harmful to privacy.

The situation could become even worse. According to a report in the Financial Times in December 2019, Chinese technology companies are trying to shape facial recognition standards at the UN’s International Telecommunication Union. This would not only legitimize the use of intrusive facial recognition techniques for video monitoring and for city and vehicle surveillance. It would also provide Chinese companies with a competitive edge in this key sector, since their products would already be well aligned with the proposed rules, if adopted.

The recent announcements by big US tech companies in this area are welcome, but they represent only a minor skirmish in the privacy wars. The real battles are taking place in anonymous conference rooms where decisions about global facial recognition standards are being discussed and set. The outcome there is unlikely to be so favorable for strong data protection, or for human rights.

Featured image by Laitr Keiows.