Back in 2014, Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.” Although the use of artificial intelligence is nothing new and dates back to Alan Turing (the godfather of computational theory), prominent researchers – along with Stephen Hawking – have expressed their concerns about the unregulated use of AI systems and their impact on society as we know it.
Although this concern has been raised since the dawn of computing, AI now powers so many real-world applications, ranging from facial recognition to fraud prevention, that requests to regulate these systems have finally led to a proposal for a European framework introducing new obligations for providers, importers, distributors, and users of artificial intelligence.
Proposed AI Regulation
On 21 April 2021, the European Commission presented a proposal for a regulation (the "Regulation"), providing a legal framework for the development and use of artificial intelligence systems ("AI systems"). Though many organisations and institutions have published guidelines for AI, the EU is the first in the world to present a proposal for such far-reaching regulation. Regulating AI has been high on the EU's agenda for some time; the Regulation is part of the EU's broad strategy of 'shaping the digital future'. The aim of this Regulation is to create an internal market in which safe and reliable AI systems are facilitated and market fragmentation is avoided in order to ensure legal certainty with regard to AI systems. The idea is that a well-functioning Regulation will increase public trust in the safety of AI, which in turn will increase development and use of AI.
Who will the Regulation apply to?
The Regulation is intended to apply to providers, both public and private, placing AI systems on the EU market; to users of AI systems located in the EU; and to providers and users of AI systems located outside the EU where the output produced by the AI system is used in the EU. The Regulation introduces requirements that may apply to providers, importers, distributors and users with respect to the development, marketing and commissioning of high-risk AI systems.
What systems are regulated?
The Regulation adopts a broad definition of AI that covers not only ‘machine learning’ techniques but also other algorithmic approaches (e.g. decision trees, search methods). The Regulation implements a risk-based approach, distinguishing between systems posing an unacceptable risk, a high risk, a limited risk and a low risk:
- Unacceptable risk: a limited set of AI applications that pose a clear threat to the safety, livelihoods and rights of EU citizens, for example systems that enable social scoring by governments, and real-time biometric identification systems used in public spaces for law enforcement purposes.
  - The use of these systems is prohibited.
- High risk: i) AI systems that are included as (part of) a product or as a safety component of a product, including medical devices, toys or cars, and ii) AI systems with potential fundamental rights implications, relating to (for example) recruitment, credit scoring or critical infrastructure.
  - The bulk of the Regulation addresses strict obligations for high-risk AI systems, including ensuring transparency, establishing risk management systems, using high-quality datasets and monitoring continuous compliance.
- Limited risk: AI systems such as human interaction systems (chatbots), emotion recognition systems and ‘deepfakes’.
  - For these systems, specific transparency obligations apply: users must be made aware that they are interacting with an AI system.
- Low risk: applications that represent minimal or no risk for users.
  - These applications remain unregulated. However, the Regulation encourages providers to voluntarily apply codes of conduct to their AI systems, thereby effectively complying with the obligations for high-risk systems even though this is not strictly required.
Each Member State will appoint national authorities to supervise the application and implementation of the Regulation. Each Member State will thereby designate an authority as the national supervisory authority. This authority will also be represented in the European Artificial Intelligence Board, which will be set up under the Regulation. This Board will advise the European Commission in order to assist with implementing and executing the Regulation.
The Regulation leaves enforcement to the Member States, but requires them to take all measures necessary to ensure that the Regulation is properly implemented, including effective and dissuasive sanctions. To that end, the Regulation specifically provides for penalties and sets the maximum amount of a penalty for certain categories of infringement of its provisions.
Enforcement in the Netherlands?
The Regulation leaves the national authorities a degree of freedom in how they enforce it, and it is not yet clear which Dutch supervisory authority will be charged with enforcement. Research commissioned by the Ministry of the Interior shows that generic, cross-sector supervision of both government and the private sector can be further strengthened. In addition to ongoing actions, continued attention is required to strengthen the capacity of the supervisory authorities and the cooperation between them.
As mentioned in the introduction, one of the goals of the Regulation is to increase trust in AI in order to stimulate AI innovation, and the Regulation creates possibilities for businesses specifically aimed at fostering such innovation. One is the possibility to experiment with AI in ‘sandboxes’: controlled, regulated test environments where an AI system can run and be tested in isolation, without interfering with other systems. The other opportunity the Regulation facilitates concerns digital innovation hubs, where companies can share information and experiences relating to AI.
Position of the Netherlands
The Dutch position on the proposal was announced on 31 May 2021. The government is positive overall, but has posed several questions and raised objections regarding feasibility, the definitions used, and the lack of room for evaluation. The European Data Protection Board – of which the Dutch Data Protection Authority is a member – and the European Data Protection Supervisor also expressed their opinion on the Regulation on 18 June 2021, outlining inter alia the risks of the use of remote biometric identification of individuals in publicly accessible spaces and the risks of AI systems using biometrics to categorise individuals into clusters based on ethnicity, gender, political or sexual orientation, or other potential grounds of discrimination. In June 2021, the Dutch government indicated in a letter to Parliament that it would create a legal basis under the Dutch GDPR Implementation Act for processing such special categories of personal data, in order to prevent discrimination in algorithmic systems.
The European Parliament and the Member States will now examine the proposal. This is expected to take some time, given the significant impact of the Regulation. Once adopted, the final Regulation will become directly applicable throughout the EU. The legislation is expected to enter into force in about two to three years and will apply from 24 months after its entry into force.
With special thanks to Jolijn Gijsen.