EU unveils proposals for wide-ranging AI regulation with a global reach, and facial recognition systems flagged up as “high risk”
The European Commission has unveiled proposals for what it terms “new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence”. Evidently hoping that the new regulation will set standards for AI as the GDPR set them for privacy, the Commission says: “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”
At the heart of the new proposals lies a risk-based approach. AI systems considered a “clear threat” to safety, livelihoods and people’s rights will be banned. These include AI systems that “manipulate human behaviour to circumvent users’ free will” and “systems that allow ‘social scoring’ by governments”, of the kind pioneered in China. So-called “high-risk” AI systems will be subject to a variety of obligations before they can be put on the market, including risk assessment, high-quality datasets, logging of activity, detailed documentation, human oversight, and a high level of robustness, security and accuracy. Systems considered high risk include those used in critical infrastructure; educational applications that may determine access; employment applications, such as software that sorts through job applications; essential private and public services, such as credit scoring; law enforcement; border control; and the administration of justice and democratic processes. Of particular interest to readers of this blog will be the following comment in the Questions and Answers document:
The use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes poses particular risks for fundamental rights, notably human dignity, respect for private and family life, protection of personal data and non-discrimination. It is therefore prohibited in principle with a few, narrow exceptions that are strictly defined, limited and regulated. They include the use for law enforcement purposes for the targeted search for specific potential victims of crime, including missing children; the response to the imminent threat of a terror attack; or the detection and identification of perpetrators of serious crimes.
It is welcome that biometric identification systems such as facial recognition are singled out for special mention and, in principle, an outright prohibition. The European Commission’s Executive Vice-President Margrethe Vestager even went so far as to say: “There is no room for mass surveillance in our society. That’s why in our proposal, the use of biometric identification in public places is prohibited by principle.” However, she went on to note: “We propose very narrow exceptions that are strictly defined, limited and regulated.” It remains to be seen just how big those loopholes turn out to be.
In terms of enforcing the new rules, the legal framework will apply to both public and private actors inside and outside the EU if the AI system operates in the EU or affects people located there. In this respect, the AI regulation will have an extra-territorial reach just like the EU’s GDPR does. EU member states will create national authorities to supervise the application and implementation of the new rules. There will also be a new European Artificial Intelligence Board:
The Board will issue recommendations and opinions to the Commission regarding high-risk AI systems and on other aspects relevant for the effective and uniform implementation of the new rules. It will also help building up expertise and act as a competence centre that national authorities can consult. Finally, it will also support standardisation activities in the area.
As Vestager explained in her speech, the new rules will have some serious teeth:
Sanctions will apply in case of persistent non-compliance. As such, an AI provider that would not comply with the prohibition of an artificial intelligence practices [sic] can be fined up to 6 per cent of its yearly global turnover.
That’s even higher than the GDPR’s maximum fines, which are up to 4 per cent of a company’s annual global turnover – a measure of how seriously the EU wants companies to take this new framework. It’s clear that the European Commission hopes its proposals will form the basis of international norms in this area: “AI regulation is only emerging and the EU will take actions to foster the setting of global AI standards in close collaboration with international partners in line with the rules-based multilateral system and the values it upholds.”
It’s important to note that these are just the first steps towards passing legislation regulating AI in the EU and beyond. As is usual, the European Parliament and EU member states will need to adopt their own positions on the Commission’s draft. These will ultimately be reconciled in a final consolidated text that becomes law. This process typically takes years – longer in the case of contentious areas, as is likely to be the case here. AI is widely expected to be the next big leap forward in the digital sphere, with huge new markets opening up and major opportunities for new leaders to emerge, both companies and countries. Since the EU hopes to set the ground rules for that market, lobbying over the details of the new regulation will be intense, as existing players and new entrants try to tilt the playing field in their favour. Expect to hear much more about the battles that ensue, and about their impact on privacy.
Featured image by Cryteria.