European Parliament Calls for Bans on AI-based Biometric Recognition in Public Spaces, Predictive Policing, and Social Scoring

Posted on Oct 25, 2021 by Glyn Moody

Back in April, Privacy News Online reported on an important set of proposals from the European Commission to regulate the use of artificial intelligence within the EU. It contained some good ideas, and classified AI-based facial recognition systems as “high risk”. But that didn’t go far enough for the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), two important offices tasked with monitoring the protection of personal data and privacy within the EU. Responding to the European Commission’s proposals on AI, the EDPB and EDPS called for a ban on the “use of AI for automated recognition of human features in publicly accessible spaces”.

Although a useful statement, the joint opinion doesn’t put any direct pressure on the European Commission. However, that’s not true of a resolution passed by the European Parliament demanding strong safeguards when AI tools are used in law enforcement. Even though the vote is advisory, it is nonetheless a clear signal of what the European Parliament expects to see in the new EU AI law, and of the changes it will want from the European Commission when it comes to drawing up and voting on a final text. Here are the main demands:

permanent prohibition of the use of automated analysis and/or recognition in publicly accessible spaces of other human features, such as gait, fingerprints, DNA, voice, and other biometric and behavioural signals;

a moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification, unless strictly used for the purpose of identification of victims of crime, until the technical standards can be considered fully fundamental rights compliant, results derived are non-biased and non-discriminatory, the legal framework provides strict safeguards against misuse and strict democratic control and oversight, and there is empirical evidence of the necessity and proportionality for the deployment of such technologies; notes that where the above criteria are not fulfilled, the systems should not be used or deployed

But the resolution is not just about biometric recognition, even though that is the main concern. Other areas where the MEPs want to see the use of AI banned include predictive policing, social scoring, and the use of software to propose judicial decisions. The report also lays down important general principles that it wants applied in this area, including algorithmic explainability, transparency, traceability and verification. According to the European Parliament, these are a necessary part of oversight, in order to ensure that the development, deployment and use of AI systems for the judiciary and law enforcement agencies comply with fundamental rights, and are trusted by EU citizens. Doing so will also help to ensure that results generated by AI algorithms can be made intelligible to users and to those who may find themselves subject to these systems, and that there is transparency on the source data and on how the system arrived at a certain conclusion. Interestingly, the resolution calls for the software used in such systems to be open source “where possible”, in order to offer the maximum transparency at all levels:

in order to ensure technical transparency, robustness, and accuracy, only such tools and systems should be allowed to be purchased by law enforcement or judiciary authorities in the Union whose algorithms and logic is auditable and accessible at least to the police and the judiciary as well as the independent auditors, to allow for their evaluation, auditing and vetting, and that they must not be closed or labelled as proprietary by the vendors; points out, furthermore, that documentation should be provided in clear, intelligible language about the nature of the service, the tools developed, the performance and conditions under which they can be expected to function and the risks that they might cause; calls therefore on judicial and law enforcement authorities to provide for proactive and full transparency on private companies providing them with AI systems for the purposes of law enforcement and the judiciary; recommends therefore the use of open source software where possible;

The report names two specific AI-based facial recognition systems that it regards as examples of what not to do. One is the iBorderCtrl system, discussed on this blog back in 2018. The other is the rather better-known Clearview AI. When Privacy News Online last wrote about Clearview AI, the system was believed to have scraped some 3 billion photos from the Internet. In an interview with Wired, the company’s CEO Hoan Ton-That says that figure has increased to an astonishing 10 billion images. That would represent multiple photos of many people. Moreover, he says that Clearview AI has added more advanced features:

Ton-That says it is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images.
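Clearview AI’s actual deblurring and mask-removal models are proprietary, but the underlying idea of reconstructing missing or degraded pixels from the surrounding image is well established. Below is a minimal sketch of that general technique using OpenCV’s classical (non-ML) inpainting algorithm; the file names are hypothetical, and deep-learning inpainting models do the same job with learned statistical priors rather than local pixel propagation.

```python
# Minimal sketch of "fill in the missing region" image restoration using
# OpenCV's classical inpainting. Clearview's actual tools are proprietary
# deep-learning models; the file names here are hypothetical.
import cv2

img = cv2.imread("masked_face.jpg")  # hypothetical input photo
# White pixels in the mask mark the region to reconstruct (e.g. a face mask).
mask = cv2.imread("mask_region.png", cv2.IMREAD_GRAYSCALE)

# Telea's algorithm propagates surrounding pixel information into the hole;
# ML inpainting replaces this local heuristic with learned priors.
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("restored.jpg", restored)
```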

It is clear that we are fast approaching the point where a significant proportion of the world’s population can be recognized in seconds using facial recognition software and a massive database like that of Clearview AI. That makes the strong EU resolution timely, and the need for something similar in the US a pressing one.
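To see why recognition at that scale can be so fast, it helps to know that modern systems reduce each face to a fixed-length embedding vector and then run a nearest-neighbour search over the database. The sketch below illustrates the general technique with the open-source face_recognition library; the database files and the 0.6 distance threshold (the library’s documented default) are illustrative, and nothing here reflects Clearview AI’s proprietary internals.

```python
# Minimal sketch of embedding-based face search, the general technique behind
# large-scale recognition systems. Uses the open-source face_recognition
# library; file names and the pre-built database are hypothetical.
import numpy as np
import face_recognition

# Pretend database: one 128-dimensional embedding per enrolled photo.
db_encodings = np.load("face_db.npy")              # shape (N, 128)
db_labels = open("labels.txt").read().splitlines()

# Embed the query face (assumes exactly one face in the photo).
query_img = face_recognition.load_image_file("query.jpg")
query_enc = face_recognition.face_encodings(query_img)[0]

# Nearest-neighbour search: Euclidean distance to every enrolled embedding.
# Real deployments use an approximate index to stay fast at billions of
# entries, but the principle is the same.
distances = np.linalg.norm(db_encodings - query_enc, axis=1)
best = int(np.argmin(distances))
if distances[best] < 0.6:  # the library's documented default tolerance
    print(f"Match: {db_labels[best]} (distance {distances[best]:.2f})")
else:
    print("No match in database")
```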

Featured image by Hann Lans.