Please find our earlier blogposts on the AI Act here and here.
Generative AI from the original proposal to today
The European Commission's original proposal for the AI Act did not contain any references to general purpose AI or generative AI. The proposal was purpose-driven: certain AI systems were classified based on the purposes for which they were built, such as employment, migration, law enforcement and democratic processes. These purposes determined whether an AI system should be classified as high-risk AI, or perhaps even prohibited AI. As a result, there was little to no certainty regarding the rules applicable to cross-purpose or general purpose AI.
When OpenAI took the world by storm in late November 2022 with its LLM-based chatbot ChatGPT, the Council of the EU approved last-minute amendments to its 6 December 2022 version of the AI Act to include a title on general purpose AI systems. That version created a separate regime for general purpose AI, exempting such systems from the majority of the rules in the AI Act while relying on implementing acts of the European Commission to subject certain general purpose AI systems to the rules on high-risk AI after consultations and a detailed impact assessment.
In May 2023, two parliamentary committees voted on and adopted a draft negotiating mandate, which has been approved today by the plenary. This latest version of the AI Act uses a tiered approach with three key terms: “general purpose AI”, “foundation models” and “generative AI”. General purpose AI is the broadest of the three terms, but not all foundation models and generative AI systems fall under the term general purpose AI.
Generative AI is a subset of foundation models: the generative AI system is the application (such as ChatGPT) built on top of a foundation model (such as GPT-3.5). The obligations on generative AI mainly concern transparency, so that users always know when the content they see or hear is AI-generated. The rules for foundation models are wider in scope, as is clear from this provision of the AI Act: providers of foundation models have to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development with appropriate methods such as with the involvement of independent experts, as well as the documentation of remaining non-mitigable risks after development;”. Obligations such as these will put a considerable burden on development teams, which will have to translate these broad goals into concrete guardrails.
As the AI Act continues to evolve throughout the inter-institutional negotiations, we may see the EU institutions adopting further changes in response to new technologies, as they did with ChatGPT. This raises the question of how technology-neutral this legislation can remain. Other hot topics will be the exemptions (or loopholes) for research purposes, the rules on biometric surveillance and the degree of redress available to users and consumers. It will also be interesting to see how the AI Act interacts with other EU legislation such as the Digital Services Act, the Digital Markets Act and the Copyright in the Digital Single Market Directive.
Both the development and the use of an AI system in your business require legal assistance from many perspectives. As an independent full-service law firm, Stibbe's specialists from a broad range of practice groups such as TMT, IP, finance, M&A, employment, environment, ESG, competition and public law are perfectly placed to assist you and your business in complying with the Artificial Intelligence Act.