Law and AI (part 1): towards a European civil liability regime?

On 20 October 2020, the European Parliament adopted, on the basis of three reports, three resolutions on AI from three different perspectives. These resolutions were published in the Official Journal on 6 October 2021. The objective of the three reports and resolutions was to influence the Commission's legislative proposal on AI. In three different blogs, we outline the key suggestions made by the European Parliament. In this one, we discuss the report and the resolution on a civil liability regime for AI.

Introduction

The European Commission published on 21 April 2021 a proposal for a regulation “laying down harmonised rules on artificial intelligence”.

However, the process began several years earlier.

On 10 April 2018, during the Digital Day, 25 European countries signed a common Declaration of cooperation on Artificial Intelligence. On 25 April 2018, the Commission issued a Communication entitled “Artificial Intelligence for Europe”. In June 2018, the Commission set up a High-Level Expert Group on Artificial Intelligence, a group of 52 experts tasked with supporting the implementation of the European strategy on AI, which later presented its Policy and Investment Recommendations for Trustworthy AI. On 8 April 2019, the Commission issued a Communication on building trust in human-centric AI and, on 19 February 2020, it published the “White Paper on Artificial Intelligence: a European approach to excellence and trust” together with a report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. At the beginning of October 2020, the Committee on Legal Affairs of the European Parliament adopted three reports on AI from three different perspectives. Then, on 20 October 2020, the European Parliament adopted, on the basis of these three reports, three resolutions on AI from the same perspectives. These resolutions were recently (on 6 October 2021) published in the Official Journal.

The objective of the three reports and the three resolutions was to influence the Commission's legislative proposal on AI. In three different blogs, we outline the key suggestions made by the European Parliament. In this one, we discuss the report and the resolution on a civil liability regime for AI (see here). The two other blogs are available here and here.

What is the content of these two texts on an AI liability regime?

The report and the resolution do not propose a legal revolution: the European Parliament underlines that there is no need for a complete revision of the existing EU liability regimes. It considers that the Product Liability Directive, adopted in 1985, remains relevant and should continue to apply to civil liability claims against producers of defective AI-systems (where those systems qualify as products under the Directive). However, the Directive needs to be revised to better cover digital technology products, in particular those driven by AI. In this respect, the Parliament urges the Commission to clarify certain concepts of the Directive (e.g., “product”, “damage” or “producer”) and to consider reversing the rules governing the burden of proof for harm caused by digital technologies. It also asks the Commission to assess whether the Directive should be transformed into a regulation. This recommendation clearly reflects the European Parliament's will to strengthen the EU liability framework by moving from harmonisation (a directive, transposed by each Member State) to uniform, directly applicable rules (a regulation).

However, the European Parliament is aware that the 1985 text is not (and will not be) sufficient. It considers “that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third party like a hacker or for persons whose property is damaged by such a third party, as the interference regularly constitutes a fault-based action”. Additional liability rules therefore seem necessary where claims are directed against operators of an AI-system.

In the European Parliament’s view, this new liability regime should cover both: 

  1. the front-end operator, i.e. a natural or legal person who (i) exercises a degree of control over a risk connected with the operation and functioning of the AI-system and (ii) benefits from its operation; and
  2. the back-end operator, i.e. a natural or legal person who defines the features of the technology, provides data and essential back-end support services, and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system.

The report and the resolution nevertheless recognise that not all AI-systems present the same risks or pose the same threats to the general public. High-risk autonomous AI-systems should be subject to a new common strict liability regime. This risk-based approach should encompass several levels of risk, based on clear criteria. The European Parliament recommends that all high-risk AI-systems be exhaustively listed in an annex to the proposed regulation, which should be easy to amend in order to keep pace with rapid technological developments. AI-systems not listed in the annex should remain subject to the Member States' fault-based liability rules, with a presumption of fault on the part of the AI operator, who should be able to exculpate itself by proving that it abided by its duty of care.

The proposed regulation should cover both economic and non-economic damage caused by AI-systems and implement a compensation scheme. The European Parliament also recommends that the Commission evaluate the need to include in EU contract law a principle preventing contractual non-liability clauses, including in B2B relations.

Finally, the European Parliament states that all operators of high-risk AI-systems listed in the annex should be required to hold liability insurance covering that risk.

Conclusion

In practice, developing AI entails risks. The European Union wishes to regulate them in order to prevent users of AI from being deprived of any remedy or means of obtaining compensation for damage.

From the point of view of companies active in the AI field, it is obviously essential that they develop their technologies with full knowledge of these risks and these potential liability issues. It seems clear that it will be difficult to navigate the three parallel liability regimes that the proposed regulation seems to imply: first, the classical fault-based liability of the Member States (which may differ from one Member State to another); second, the Product Liability Directive regime; and third, the new regime for operators of AI-systems. Companies will have to juggle these regimes, with complex practical implications. Not to mention the many areas of uncertainty that remain: will the recommendations be followed? Will the 1985 Directive be amended? Will the new regime come into being? It is still too early to give clear answers. A matter to follow... but let's be ready!

This article was co-authored by Edouard Cruysmans in his capacity as Professional Support Lawyer at Stibbe.