Law and AI (part 2): towards a European framework in line with the ethical values of the EU?


On 20 October 2020, the European Parliament, in plenary session, adopted, on the basis of three reports, three resolutions on AI from three different perspectives. These resolutions were published in the Official Journal on 6 October 2021. The objective of the three reports and the three resolutions was to influence the Commission's legislative proposal on AI. In three different blogs, we outline the key suggestions made by the European Parliament. In this one, we discuss the report and the resolution on AI and a framework of ethical aspects.

Introduction

The European Commission published on 21 April 2021 a proposal for a regulation “laying down harmonised rules on artificial intelligence”.

However, the process began several years earlier.

On 10 April 2018, during the Digital Day, 25 European countries signed a joint Declaration of cooperation on Artificial Intelligence, committing to work together on this matter. On 25 April 2018, the Commission issued a Communication entitled “Artificial Intelligence for Europe”. In June 2018, the Commission set up the High-Level Expert Group on Artificial Intelligence, a group of 52 experts tasked with supporting the implementation of the European strategy on AI, which subsequently presented its Policy and Investment Recommendations for Trustworthy AI. On 8 April 2019, the Commission issued a Communication on building trust in human-centric AI and, on 19 February 2020, it published the “White Paper on Artificial Intelligence: a European approach to excellence and trust” together with a report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. At the beginning of October 2020, the Committee on Legal Affairs of the European Parliament adopted three different reports on AI, each from a different perspective. Then, on 20 October 2020, the European Parliament, in plenary session, adopted, on the basis of these three reports, three resolutions on AI from these same perspectives. These resolutions were recently (on 6 October 2021) published in the Official Journal.

The objective of the three reports and the three resolutions was to influence the Commission's legislative proposal on AI. In three different blogs, we review the key suggestions made by the European Parliament. In this one, we discuss the report and the resolution on AI and a framework of ethical aspects (see here). The two other blogs are available here and here.

What is the content of these two texts on AI and ethical issues?

The resolution on a framework of ethical aspects of AI, robotics and related technologies is the longest of the three resolutions. Together with the report on the same matter, it underlines the need to develop a human-centric and human-made AI. Human beings must remain at the heart of the development of AI: AI has to increase the well-being of human beings. This implies in particular that AI technologies and creations respect the fundamental rights enshrined in the EU Charter of Fundamental Rights and, more generally, the principles of necessity and proportionality. Ethical issues and implications must be taken into account: AI cannot be deployed blindly and without a clear framework. AI must play a genuine role in guaranteeing, promoting and enforcing these EU values. To achieve this aim, the various AI actors bear a true social responsibility in this respect.

The European Parliament stresses that any future regulation should follow a differentiated and future-oriented, risk-based approach to regulating AI, robotics and related technologies. It is essential to determine when AI should be considered high-risk in relation to the general principles mentioned above. The Parliament is, however, aware that this risk-based approach should be developed in a way that limits the administrative burden for companies (for example, by using the impact assessment provided for in the GDPR).

To respond to (and minimise) these risks and to increase public (and consumer) trust in AI, it is necessary to implement security, transparency and accountability features. The way AI is developed and used must also not lead to direct or indirect discrimination: the future regulation has to provide effective remedies against all forms of inequality.

The European texts cover other essential subjects that the development of AI, as well as the implementation of its legal framework, must take into account: 

  1. AI must have a positive impact on the environment and sustainability by contributing to the achievement of sustainable development, the preservation of the environment, climate neutrality and circular economy goals;
  2. Questions of privacy and biometric recognition are essential. AI functions by using, among other things, (personal) data. In this regard, the GDPR and all other regulations on privacy and personal or non-personal data have to be respected. This is essential for consumer trust in AI technologies;
  3. AI has to be developed in accordance with the principles of good governance;
  4. The technologies and products generated by AI have to respect the EU internal market rules. The protection of consumers is essential, in particular through the creation or application of liability regimes (on this subject, see the other blog, here). AI can be positive for consumers only if its development is genuinely integrated into the EU legal framework, which provides a high degree of protection;
  5. The EU security and defence policy respects the EU values. Consequently, if AI is used in this policy, it has to respect the same values. The European Parliament also calls for increased investment in European AI for defence and in the critical infrastructure that sustains it;
  6. AI already has (and will continue to have) an impact on the development of autonomous transport. It is necessary to adapt the EU legal framework so as not to hinder these technologies;
  7. AI could have a substantial impact on employment, workers' rights, digital skills and the workplace. For this impact to be positive, however, the workforce needs to be trained in these technologies, the gains generated by AI need to be shared, a system of accountability needs to be put in place, and so on. The transition to the digital economy must be a just one;
  8. AI is an opportunity to strengthen education and access to culture, with impacts on the media, youth, research, sport and the cultural and creative sectors.

The European Parliament “notes the added value of having designated national supervisory authorities in each Member State, responsible for ensuring, assessing and monitoring compliance with legal obligations and ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, thus contributing to the legal and ethical compliance of these technologies”. Of course, these national authorities should not stand in the way of genuine coordination at EU level, which would include, among other things, the creation of a European certificate of ethical compliance. The granting of this certificate would attest compliance with the ethical requirements summarised in the European Parliament's report and resolution discussed here.

Conclusion

The report and resolution cover many different fields: what they have in common is the requirement that AI and robotics respect an ethical dimension. It is clear that these texts raise many questions. For example, who is responsible for ensuring compliance with legislation (the AI developer and/or the AI operator)? At GDPR level, who is the controller? Are the data processed by an AI personal data? When is AI technology potentially discriminatory? Which legislation should be applied when AI affects several different areas? Our specialists in many legal fields can work hand in hand with you from the development of the AI system through to its operation and use, to help you translate all legal rules and requirements into the design of the AI system.

This article was co-authored by Edouard Cruysmans in his capacity as Professional Support Lawyer at Stibbe.