The EU Artificial Intelligence Act: our 16 key takeaways

The Council of the EU recently unanimously approved the Artificial Intelligence Act (AI Act). This first comprehensive AI regulation in the world sets out harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, following a risk-based approach.

The AI Act was originally proposed by the European Commission in April 2021 to address the challenges posed by rapid technological advancements and the potential risks associated with artificial intelligence (see our earlier blog posts on the European Commission’s proposal here). The legislative process faced a notable disruption in December 2022 with the arrival of ChatGPT, which necessitated changes to the draft text to create specific rules for generative AI (as outlined in our blog posts on the European Parliament’s amendments here and here). The European Parliament’s Committee on the Internal Market and Consumer Protection adopted the AI Act today, after which it will be submitted for a plenary vote provisionally scheduled for 10-11 April 2024.

This post is the first in a series we will publish on AI and the AI Act. In subsequent posts in this series, we will delve deeper into specific topics and aspects of the AI Act and their interaction with other rules, sectors and practices. In this first post, we set out our initial key takeaways on the AI Act, based on the text as currently approved by the Council of the EU.

Takeaway 1: AI systems are broadly defined, with a focus on autonomy

The AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”, which follows the OECD’s latest definition. 

The key elements in this definition are ‘infers’ and ‘autonomy’, which clearly differentiate an AI system from other software whose output is pre-determined (if x then y) by a strict algorithm. This definition is intentionally broad to ensure that the AI Act does not become outdated in the near future. It clearly moves away from the original definition of AI systems, which linked the concept to a pre-defined list of technologies and methods, and adopts a technology-neutral and uniform approach.
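To illustrate this distinction in a simplified way (our own sketch, not language from the AI Act, and not a view on how any specific system would be classified): software whose output follows entirely from hard-coded rules does not ‘infer’ anything, whereas a model trained on data infers how to generate its output.

```python
# Simplified illustration of 'pre-determined output' versus 'inference'.
# The names, thresholds and data below are invented for this sketch;
# legal classification under the AI Act does not depend on the library used.
from sklearn.linear_model import LogisticRegression

def rule_based_credit_check(income: float, debt: float) -> bool:
    # "If x then y": the output follows directly from rules written by a human,
    # so nothing is inferred from data.
    return income > 30_000 and debt / max(income, 1) < 0.4

# A trained model, by contrast, infers from its training data how to map
# inputs to outputs; its behaviour is not fully spelled out by its author.
X = [[25_000, 10_000], [60_000, 5_000], [40_000, 30_000], [80_000, 20_000]]
y = [0, 1, 0, 1]  # toy labels: 1 = creditworthy
model = LogisticRegression(max_iter=1_000).fit(X, y)

print(rule_based_credit_check(50_000, 12_000))   # fixed outcome
print(model.predict([[50_000, 12_000]])[0])      # inferred outcome
```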

Takeaway 2: a closed list of prohibited AI systems, with a nuanced mechanism for real-time remote biometric identification

The AI Act contains a closed list of prohibited AI practices:

  • using subliminal techniques or purposefully manipulative or deceptive techniques to materially distort behaviour, leading to significant harm; 
  • exploiting vulnerabilities of a person or group due to specific characteristics, leading to significant harm;
  • biometric categorisation systems that individually categorise persons on the basis of their biometric data to deduce or infer sensitive information, except for labelling or filtering lawfully acquired biometric datasets in the area of law enforcement;
  • social scoring systems;
  • real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes;
  • predictive policing based solely on profiling or personality traits, except when supporting human assessments based on objective, verifiable facts linked to criminality;
  • facial recognition databases based on untargeted scraping; and
  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons.

The ban on real-time remote biometric identification for law enforcement purposes was the topic of much debate in the European institutions. The prohibition does not apply where these systems are used for one of the listed specific purposes, such as searching for victims of human trafficking or sexual exploitation, or preventing terrorist attacks. Relying on such an exception will, in principle, require thorough assessments, technical and organisational measures, notifications and prior authorisation.

Takeaway 3: dual definition of high-risk AI systems

A major part of the AI Act entails the strict and extensive regulation of high-risk AI systems. It will therefore be of the utmost importance in practice for a company engaged in AI to determine whether the AI system it develops, imports, distributes or deploys constitutes a high-risk AI system.

The AI Act distinguishes two categories of AI systems that are regarded as high-risk:

  1. AI systems intended to be used as a product, or as the safety component of a product, covered by specific EU harmonisation legislation, such as the legislation on civil aviation, vehicle safety, marine equipment, toys, lifts, pressure equipment and personal protective equipment.
  2. AI systems listed in Annex III, such as remote biometric identification systems, AI used as a safety component in critical infrastructure, and AI used in education, employment, credit scoring, law enforcement, migration and the democratic process.

We can expect guidelines from the European Commission specifying the practical implementation of this classification, accompanied by a comprehensive list of practical examples of high-risk and non-high-risk use cases, no later than 18 months after the entry into force of the AI Act.

Takeaway 4: important exception to the qualification of high-risk AI system

The AI Act provides an exception to this qualification: an AI system falling under the second category (Annex III) will not constitute a high-risk AI system if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. This is the case if the system is intended to:

  • perform a narrow procedural task;
  • improve the result of a previously completed human activity;
  • detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence a previously completed human assessment without proper human review; or
  • perform a preparatory task to an assessment.

If, however, the AI system performs profiling of natural persons, it is always considered a high-risk AI system.

This exception will become very important in practice, as many AI system providers will try to argue that their system does not pose such risks in order to avoid the high regulatory burden and cost that come with the qualification as high-risk AI. A provider wishing to rely on this exception will nevertheless have to document its assessment. Even if it can successfully rely on the exception, the AI system will still need to be registered in the EU database for high-risk AI systems before it is placed on the market or put into service.

Takeaway 5: broad, burdensome and extensive obligations for high-risk AI systems

Providers of high-risk AI systems must meet strict requirements to ensure that their AI systems are trustworthy, transparent and accountable. Among other obligations, they must conduct risk assessments, use high-quality data, document their technical and ethical choices, keep records of their system’s performance, inform users about the nature and purpose of their systems, enable human oversight and intervention, and ensure accuracy, robustness, and cybersecurity. They must also test their systems for conformity with the rules before placing them on the market or putting them into service, and register their systems in an EU database that will be accessible to the public.

Takeaway 6: obligations across the value chain

The AI Act imposes strict obligations not only on the ‘provider’ of a high-risk AI system, but also on the ‘importer’, ‘distributor’ and ‘deployer’ of such systems.

The importer’s and distributor’s obligations mainly concern verifying that the high-risk AI system they import or distribute is compliant. Broadly speaking, the importer must verify the system’s conformity by checking various documentation, whereas the distributor must verify that the system bears the required CE (conformité européenne) marking.

Takeaway 7: obligations for deployers (users) of high-risk AI systems

The deployer, formerly known as the user of the AI system, is also subject to a set of obligations when it deploys a high-risk AI system. One key obligation, with important consequences for any potential liability discussion with the provider, is that the deployer must use the high-risk AI system in accordance with the provider’s instructions of use. When a company or its customers suffer damage following the use of a high-risk AI system, the provider’s main argument will most likely be that the deployer did not use the AI system in accordance with those instructions.

The deployer must also ensure human oversight to the extent possible, and monitor the input data and the operation of the system. It must keep the automatically generated logs for at least six months.

Takeaway 8: fundamental rights impact assessment for banks, insurers and governments

Public sector bodies, private entities providing public services (such as education, healthcare, housing and social services), and entities engaged in credit scoring or life and health insurance are required to carry out a fundamental rights impact assessment (FRIA) before deploying a high-risk AI system. In this assessment, these entities must describe the deployer’s processes in which the system will be used, the intended frequency of use, the categories of affected natural persons, the risks, the human oversight measures and the risk mitigation measures.

Takeaway 9: shifting the responsibilities along the value chain for high-risk AI systems

The AI Act contains a mechanism, similar to the rules on product liability, whereby an entity other than the original provider may be considered the provider. An importer, distributor, deployer or other third party will be considered a provider of the high-risk AI system, and will therefore be subject to the extensive list of obligations under the AI Act, if one of three conditions is met:

  • they have put their name or trademark on the system after it has already been placed on the market or put into service;
  • they have made a substantial modification to the system after it has been placed on the market or put into service, provided that it remains high-risk; or
  • they have modified the intended purpose of the AI system, which renders the system high-risk.

Takeaway 10: right to an explanation and battle for trade secrets

For a number of years, there has been much speculation about whether the GDPR entitles a data subject to an explanation when a controller engages in automated individual decision-making, including profiling, which produces legal or similarly significant effects for that data subject (see our previous blog post on this topic here). The AI Act now explicitly confirms this right, but only for the high-risk AI systems listed in Annex III: an affected person has a right to meaningful explanations of the role of the AI system in the decision-making procedure and of the main elements of the decision taken.

In practice, there will be a battle between persons requesting explanations and providers blocking or limiting such requests on the basis of trade secrets. A good example is credit-scoring algorithms, which constitute high-risk AI systems. The business model and unique selling point of a credit scoring agency lie in the exact weights and parameters used in the model, which can to a large extent be protected as trade secrets. It is likely that, in practice, a balancing exercise will have to be made, in line with the opinion of Advocate General Pikamäe in a recent case before the Court of Justice of the European Union (case C-634/21): while the protection of trade secrets or intellectual property constitutes, in principle, a legitimate reason for a credit information agency to refuse to disclose the algorithm used to calculate a data subject’s score, it cannot under any circumstances justify an absolute refusal to provide information, all the more so where appropriate means of communication exist that aid understanding while guaranteeing a degree of confidentiality.

Takeaway 11: broad right to complain

The AI Act grants any natural or legal person having grounds to consider that the AI Act has been infringed the right to lodge a complaint with a market surveillance authority. This is an unusually broad personal scope, as there is practically no standing requirement. It clearly differs from other instruments, such as the GDPR, under which data subjects may submit a complaint only if the processing of personal data relates to them.

Takeaway 12: general purpose AI models are not systems

General purpose AI (GPAI) models are specifically regulated and classified under the AI Act. The AI Act distinguishes between obligations that apply to all GPAI models and additional obligations for GPAI models with systemic risks. As models are regulated separately from AI systems, a GPAI model as such will never constitute a high-risk AI system. A GPAI system built on top of a GPAI model, on the other hand, may constitute a high-risk AI system (see Takeaway 3).

Providers of GPAI models are subject to separate obligations that can be considered a light version of the obligations for AI systems. Among other things, they must create and maintain technical documentation, draw up a policy on how to respect copyright law, and create a detailed summary of the content used for training the GPAI model.

Providers of GPAI models with systemic risks have additional obligations, including performing model evaluations, assessing and mitigating systemic risks, documenting and reporting serious incidents to the AI Office and national competent authorities, and ensuring adequate cybersecurity protection. 

Takeaway 13: special transparency obligations for AI systems and GPAI models

As a third category of regulated AI systems (besides prohibited AI practices and high-risk AI), the AI Act imposes transparency obligations for four categories of AI systems and GPAI models:

  • AI systems intended to directly interact with natural persons (e.g. AI companions);
  • AI systems, including GPAI systems, generating synthetic audio, image, video or text content (e.g. Midjourney, DALL-E); 
  • emotion recognition systems or biometric categorisation systems (e.g. ShareArt); and
  • deep fakes.

In these cases, the persons concerned will have to be informed about the AI system. In some cases, the content will also have to be labelled in a machine-readable way so that it can be detected as artificially generated or manipulated. The AI Act provides for exceptions to these obligations in certain circumstances, for example for law enforcement purposes or when the AI system is used for artistic, satirical, creative or similar purposes.
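The AI Act does not prescribe a particular labelling technique; emerging provenance standards such as C2PA content credentials are one likely route. Purely as an illustration (the field names below are our own invention, not a prescribed format), a generated image could carry an embedded machine-readable tag:

```python
# Illustrative sketch only: embeds a simple provenance tag in a PNG's metadata.
# The AI Act does not mandate this format; real deployments would follow an
# agreed standard (e.g. C2PA content credentials) rather than an ad-hoc tag.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str) -> None:
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")        # hypothetical field name
    metadata.add_text("generator", "example-model")  # hypothetical field name
    image.save(path, pnginfo=metadata)

generated = Image.new("RGB", (256, 256), "white")  # stand-in for generated content
save_with_ai_label(generated, "output.png")

# A downstream tool can read the label back from the file:
print(Image.open("output.png").text.get("ai_generated"))
```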

Takeaway 14: complex and layered compliance and enforcement structure

The AI Act will go hand in hand with a complex and layered governance structure involving multiple entities, such as notifying and notified bodies, conformity assessment bodies, an AI Board, an AI Office, national competent authorities, and market surveillance authorities. Bodies such as the AI Office will also support entities in scope through the development of codes of practice based on stakeholder dialogue. Moreover, these entities will also play a role in the various measures in support of innovation such as AI regulatory sandboxes and measures for SMEs and start-ups.

Takeaway 15: enforcement and next steps

The AI Act gives market surveillance authorities the power to enforce the rules, investigate complaints and impose sanctions for non-compliance. The penalties can be very high. Engaging in a prohibited AI practice can lead to a fine of up to EUR 35 million or, for companies, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. For infringements of the obligations for high-risk AI systems, the fine may be as high as EUR 15 million or 3% of turnover, whichever is higher.
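To illustrate how the ‘whichever is higher’ cap works in practice (the turnover figures below are invented for the example):

```python
# The maximum fine for a company is the higher of the fixed amount and the
# percentage of its total worldwide annual turnover (figures are illustrative).
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, pct * turnover_eur)

# Prohibited AI practice: up to EUR 35 million or 7% of turnover.
print(fine_cap(200_000_000, 35_000_000, 0.07))    # 35,000,000 (7% = 14M is lower)
print(fine_cap(1_000_000_000, 35_000_000, 0.07))  # 70,000,000 (7% exceeds 35M)

# High-risk AI system obligations: up to EUR 15 million or 3% of turnover.
print(fine_cap(1_000_000_000, 15_000_000, 0.03))  # 30,000,000
```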

We expect the AI Act to be published mid-2024. The AI Act will enter into force 20 days after its publication in the Official Journal of the EU. Most of its provisions will apply after 24 months. The rules on prohibited AI practices will apply after 6 months, the rules on GPAI after 12 months, and the rules on high-risk AI systems covered by the EU legislation listed in Annex II after 36 months.

Takeaway 16: beyond the AI Act

One would almost forget that the AI Act is only one piece in the puzzle of laws and regulations that apply to AI systems. Several other areas of law will play a major role in how an organisation designs, tests, trains and provides its AI system. Notable examples include:

  • intellectual property law: patentability of AI, copyright protection of software and licences to use content for training purposes;
  • data protection law: transparency, the prohibition in principle of automated individual decision-making (including profiling) and the requirement of a lawful basis;
  • contracts and liability: building sufficient control and oversight as a user of an AI system through contractual clauses, the AI Liability Directive; and
  • cybersecurity: both general cybersecurity (NIS II) and sector-specific (DORA) requirements.

If you or your organisation has any questions regarding AI and the AI Act, please do not hesitate to contact the AI experts in our Brussels, Amsterdam or Luxembourg offices. Our cross-practice teams of specialists provide the broad and comprehensive legal expertise needed when dealing with the multidisciplinary concept of AI and the law.

Critical deadlines

The AI Act will enter into force 20 days after publication in the Official Journal of the EU. After entry into force, the following compliance timelines will apply:

  • 6 months: enforcement of prohibited AI practices will commence.
  • 12 months: GPAI obligations will take effect, except for GPAI models placed on the market before that date; for these models, the obligations will apply after an additional 24 months (i.e. 36 months after entry into force).
  • 24 months: the AI Act will apply and most other obligations will take effect from this date. 
  • 36 months: obligations for high-risk AI systems covered by the EU legislation listed in Annex II will take effect.
  • 48 months: obligations for high-risk AI systems intended for use by public authorities that were on the market before the entry into force of the AI Act will take effect.

Introducing a series on Artificial Intelligence

AI has the potential to transform various domains as it seeps into all industries and sectors. The AI Act is a landmark initiative that aims to make the EU a global leader in ethical and human-centric AI, while fostering innovation and competitiveness. The AI Act provides guidance with its broad framework, but questions remain about its practical implementation. A French National Assembly commission has already raised issues about generative AI models that are trained on copyrighted material, and suggested revising the EU Copyright Directive to reflect the technological advancement of generative AI and the way in which such models affect intellectual and industrial property.

In our new Artificial Intelligence series, we will explore how the AI Act affects various legal aspects and sectors, such as personal data protection, employment, and intellectual property rights.