What’s in the New EU Artificial Intelligence Act, and What Will It Mean for Global Privacy?

Posted on Dec 15, 2023 by Glyn Moody

The EU has reached a political agreement on its groundbreaking Artificial Intelligence Act, first proposed back in 2021. All the main elements have been agreed, but some technical details still need to be filled in over the next few weeks. There’s no final text yet, but press releases from the European Commission, European Parliament, and Council of the European Union provide a good guide to the law’s final form.

The new rules are risk-based, as planned from the start: the greater the risk, the more stringent the regulation. For example, there is an explicit list of banned AI applications, most of which are harmful to privacy. They include:

  • Biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • Emotion recognition in the workplace and educational institutions
  • Social scoring based on social behavior or personal characteristics
  • AI systems that manipulate human behavior to circumvent people’s free will
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)

However, these bans are not absolute. For example, there will be some exemptions for law enforcement allowing the use of real-time remote biometric identification systems in publicly accessible spaces:

the provisional agreement clarifies the objectives where such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore be exceptionally allowed to use such systems. The compromise agreement provides for additional safeguards and limits these exceptions to cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.

In addition, the AI Act will not apply to systems used exclusively for military or defense purposes, since these are matters of national competence that fall outside EU law. There are also exemptions for AI systems used solely for research purposes, and for people using AI for non-professional reasons.

Systems that are classified as “high risk” because of their potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law are subject to a number of obligations. These include risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and accuracy and security requirements. Examples of high-risk AI systems are:

certain critical infrastructures for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk.

Minimal-risk systems, such as AI-enabled recommender systems or spam filters, will not be subject to any obligations, although companies may voluntarily commit to additional codes of conduct.

When the EU first proposed the AI Act back in 2021, large language models and generative AI systems were largely unknown outside academia. Their rise over the last 12 months has meant that EU legislators have had to come up with an entirely new framework to regulate them. Last week’s provisional agreement requires general-purpose AI (GPAI) systems to adhere to transparency requirements: drawing up technical documentation, complying with EU copyright law, and providing detailed summaries of the material used for training.

For “high-impact” GPAI models, the requirements are even more stringent. Subject to certain criteria, developers will have to “conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents, ensure security and report on energy efficiency.” It seems that open-source AI models will be excluded from some of these requirements.

In order to supervise the implementation and enforcement of the rules on GPAI systems, a new European AI Office will be created within the European Commission. There will also be national market surveillance authorities in each EU member state. For GPAI models, a scientific panel of experts will issue alerts on systemic risks, and help classify and test the models. Companies that break the new rules face fines ranging from €35 million (around $38 million) or 7% of global turnover down to €7.5 million ($8 million) or 1.5% of turnover, depending on the infringement and the size of the company. The top level is even higher than the fines for infractions of the GDPR, which can reach a maximum of 4% of global turnover.
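To put those ceilings in perspective, here is a minimal Python sketch of how the two fine caps described above could be computed for a given company. The euro amounts and turnover percentages come from the press releases; the rule that the applicable cap is the higher of the fixed amount and the percentage is an assumption borrowed from how the GDPR’s fine ceilings work, since the final text is not yet available.

```python
# Illustrative sketch of the AI Act's fine ceilings as described in the
# press releases. The "whichever is higher" rule is an assumption modeled
# on the GDPR; the final text may differ.

# Tiers: (fixed cap in euros, share of global annual turnover)
FINE_TIERS = {
    "most_serious": (35_000_000, 0.07),   # e.g. banned AI practices
    "least_serious": (7_500_000, 0.015),  # lowest tier mentioned
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Maximum possible fine: the higher of the fixed cap and the
    turnover-based cap (assumed, mirroring the GDPR's approach)."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# For a hypothetical company with €2 billion in global annual turnover:
print(max_fine("most_serious", 2e9))   # 140000000.0 (7% exceeds the €35M floor)
print(max_fine("least_serious", 2e9))  # 30000000.0 (1.5% exceeds the €7.5M floor)
```

On this assumption, the turnover-based cap dominates for any large company, which is what makes the 7% ceiling notably tougher than the GDPR’s 4% maximum.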

In terms of next steps, the final text must be agreed and approved by the European Parliament and the Council of the European Union. Although last-minute problems could still arise, last week’s tough negotiations should have resolved most major issues. The AI Act will become generally applicable two years after its entry into force, which is likely to happen early next year. The prohibitions will apply after just six months, and the rules on GPAI after 12 months. To bridge the transitional period before the AI Act becomes generally applicable, the European Commission will launch an AI Pact, designed to encourage and support companies in planning their compliance with the AI Act.
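To make that staggered timeline concrete, here is a small Python sketch that computes the application dates from an assumed entry-into-force date. Only the six-, 12-, and 24-month offsets come from the text; the entry-into-force date itself (“early next year”) is a placeholder.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Placeholder: the article only says entry into force is likely early next year.
entry_into_force = date(2024, 3, 1)

milestones = {
    "prohibitions apply": add_months(entry_into_force, 6),
    "GPAI rules apply": add_months(entry_into_force, 12),
    "AI Act generally applicable": add_months(entry_into_force, 24),
}
for label, when in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{when:%Y-%m-%d}  {label}")
# 2024-09-01  prohibitions apply
# 2025-03-01  GPAI rules apply
# 2026-03-01  AI Act generally applicable
```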

The AI Act is not without its critics. For example, the Computer & Communications Industry Association (CCIA Europe) called it a “half-baked” deal, writing that the AI Act “imposes stringent obligations on developers of cutting-edge technologies that underpin many downstream systems, and is therefore likely to slow down innovation in Europe.” Digital rights groups were unhappy about:

  • Exceptions to a ban on live facial recognition
  • Limited restrictions on predictive policing
  • The use of emotion recognition systems in policing and border control
  • Broad loopholes in the overall level of protection, because AI developers have wide discretion to decide that their systems are not “high risk”

Similarly, the European Consumer Organisation (BEUC) felt that too many AI systems, such as AI-embedded toys and virtual assistants, will remain unregulated. It also said that the requirements for GPAI models are too weak.

Despite these criticisms, and the continuing uncertainty about some details, the EU’s AI Act is undoubtedly a major milestone. As the EU’s GDPR did previously for privacy law, the AI Act creates a global benchmark for AI legislation, even though it only applies directly within the EU. In an interview with Tech Policy Press about the new law, the journalist Luca Bertuzzi said: “there are governments across the world getting in touch with the Commission, discussing this law and how they can replicate it in their jurisdiction.”

It remains to be seen how the law will be implemented in practice, and to what extent it will be enforced and respected – even now, that is still a problem for the GDPR. But there is no doubt that the new AI Act marks a new stage in the regulation and protection of privacy online.

Featured image by Hann Lans.