The Guidelines for providers of General Purpose AI Models are here: the 10^23 FLOPS question?
On Friday 18 July, shortly after the publication of the GPAI Code of Practice, the European Commission (the “Commission”) published its guidelines for providers of General Purpose AI or “GPAI” models (the “Guidelines”). The Guidelines will help providers of GPAI models determine how to comply with the next set of AI Act rules, which become applicable from 2 August 2025.
In this blog, we discuss a few interesting points of the Guidelines, such as the criteria for qualifying as a GPAI model and as a GPAI model with systemic risk, the exemption for open-source GPAI models, and the repercussions for GPAI model providers. Finally, we discuss the Guidelines’ guidance on the GPAI Code of Practice.
For more details on the GPAI Code of Practice, see our blog: ‘‘EU’s GPAI Code of Practice: the world’s first guidance for General Purpose AI model compliance”.
What is a general purpose AI model?
A GPAI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market” (emphasis added). We distinguish three main criteria:
(i) the model is trained on a large amount of data;
(ii) it displays significant generality; and
(iii) it is capable of competently performing a wide range of distinct tasks.
While the Guidelines provide more insight into criterion (i), they unfortunately do not bring much clarity regarding criteria (ii) and (iii), i.e., what qualifies as “significant generality” or “being able to competently perform a wide range of distinct tasks”.
The Commission strives to provide a straightforward, easily verifiable criterion rather than attempt to enumerate every capability and task a model might possess – especially given the wide variety of capabilities and uses of GPAI models. The Guidelines therefore provide an objective threshold for criterion (i), i.e. when a model has been trained on a “large amount of data”, and provide non-exhaustive examples for criterion (iii), i.e. what sort of AI models are capable of performing a wide range of distinct tasks.
For the first criterion, the Commission has chosen an objective (albeit, in our opinion, somewhat arbitrary) yardstick: the amount of computational resources used to train the GPAI model, measured in floating-point operations (“FLOPS”). A floating-point operation is a single calculation on numbers stored in floating-point format, a simplified way of representing very large or very small numbers as approximations; the threshold counts the total number of such operations performed during training. To classify as a GPAI model, the compute used to train the AI model must exceed 10^23 FLOPS. This is in the range of the estimated training compute of well-known GPAI models currently on the market, such as GPT-3, GPT-4, Gemini Ultra, and Llama 3. The latter three are estimated to reach the higher threshold of 10^25 FLOPS, placing them in the “GPAI models with systemic risk” category.
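For illustration only: a commonly used rule of thumb estimates total training compute as roughly 6 × (number of parameters) × (number of training tokens). The short Python sketch below uses that approximation, with purely hypothetical figures, to check a model against the 10^23 and 10^25 FLOPS thresholds; neither the rule of thumb nor the figures come from the Guidelines themselves.

    # Illustrative sketch (not from the Guidelines): estimating training compute
    # with the common "6 x parameters x training tokens" rule of thumb and
    # comparing it against the AI Act thresholds. All figures are hypothetical.

    GPAI_THRESHOLD = 1e23           # presumption of a general purpose AI model
    SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption of systemic risk

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Rough estimate of the total floating-point operations used in training."""
        return 6 * parameters * training_tokens

    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPS")            # ~6.3e+24
    print("Presumed GPAI model:", flops > GPAI_THRESHOLD)              # True
    print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD)  # False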
For the second and third criteria, the Guidelines provide a general indication of when an AI model qualifies as a GPAI model, namely whether it can generate language (as text or audio), text-to-image, or text-to-video. Models with these modalities are typically capable of generating variable output and of performing a wide range of distinct tasks. However, the model must still display significant generality and must therefore not be limited to a narrow set of tasks.
10^23 FLOPS and generating variable output: simple, right?
As we have concluded above, an AI model can be considered a GPAI model if (i) the compute used to train it exceeds 10^23 FLOPS; and (ii) it is capable of generating variable output, be it text, audio, text-to-image, or text-to-video. Simple, right?
Unfortunately, it is not as simple as it sounds. If an AI model can generate, for example, text or speech, but only performs a limited number of tasks in which it specializes, the Commission does not consider it a GPAI model. This opens up considerable room for discussion about what qualifies as “significant generality” or “being able to competently perform a wide range of distinct tasks”. Say, for example, that an AI model is capable of transcribing an audio file to text. Under the Guidelines, this would not in and of itself make it a model displaying significant generality. At what point does the model display sufficient generality?
A. If the model is capable of transcribing an audio file into text, and then translating it?
B. If the model is capable of transcribing an audio file into text, translating it from Dutch into English, and then transforming it into an English audio file? Or
C. If the model is capable of transcribing an audio file into text, translating it from Dutch into English, and then transforming it into an English video file, generating a clip of a famous British TV presenter narrating the original Dutch speech in English?
Rather than resting solely on the objective criterion of the amount of computing power used to train a specific AI model, the question of whether that model qualifies as a GPAI model comes down to an assessment of whether it is sufficiently general and versatile. A large language model (“LLM”) will typically fall within the scope of a “general purpose” AI model, but it may escape applicability if, for example, it is trained on very specific use cases rather than being a multi-purpose model. Ultimately, we expect this to boil down to a case-by-case assessment.
From the big to the even bigger: 10^25 FLOPS General Purpose AI Models with systemic risk
The Guidelines also deal with GPAI models with systemic risk. A GPAI model is classified as having systemic risk if it has high-impact capabilities, i.e., “capabilities that match or exceed those recorded in the most advanced models”. A GPAI model can also be designated as having systemic risk by a decision of the Commission, including following a qualified alert from the scientific panel, as will be discussed further below. The high-impact capabilities test again aims to be an objective criterion, but may miss the mark given its ambiguity. In the Guidelines, the Commission has indicated that this criterion may be further developed through delegated acts.
In the absence of another objective criterion, the AI Act further specifies that a GPAI model is presumed to have high-impact capabilities if the cumulative computational resources used to train it exceed 10^25 FLOPS. This threshold may be updated over time, as it will inevitably become easier to reach, akin to Moore’s law (the observation that the number of transistors on a computer chip doubles roughly every two years). Given the ongoing advances in computing power, a threshold that today looks like a mountain may therefore turn into a molehill. It is difficult to draft legislation that is both technology agnostic and able to evolve with the innovation it governs; in the case of the AI Act, this may require indexing the FLOPS threshold over time.
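To illustrate the “mountain into a molehill” point, the sketch below assumes, purely for the sake of argument, that the compute used for frontier training runs doubles every one to two years, and projects how far such runs would exceed a fixed 10^25 FLOPS threshold. The doubling periods and starting point are our own assumptions, not figures from the AI Act or the Guidelines.

    # Illustrative sketch: how quickly a fixed 10^25 FLOPS threshold is overtaken
    # if frontier training compute keeps doubling. The doubling periods and the
    # starting point are assumptions made purely for illustration.

    THRESHOLD = 1e25
    frontier_today = 1e25  # assume today's largest training runs sit near the threshold

    for doubling_period_years in (1, 2):
        for years_ahead in (2, 5, 10):
            frontier = frontier_today * 2 ** (years_ahead / doubling_period_years)
            print(f"doubling every {doubling_period_years}y, +{years_ahead}y: "
                  f"frontier ~ {frontier:.1e} FLOPS ({frontier / THRESHOLD:.0f}x the threshold)")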
Designation as a GPAI model with systemic risk and reversal of the burden of proof
There are three routes through which a GPAI model can be designated as having systemic risk: through the provider’s own notification to the Commission; through an ex officio decision of the Commission; or following a qualified alert by the scientific panel of the Commission’s AI Office, for which the Commission recently started recruiting.
The AI Act mandates that providers of GPAI models which qualify as having systemic risk notify the Commission without undue delay and in any event within two weeks of the qualifying event, e.g. when the training compute threshold of 10^25 FLOPS is reached or it becomes known that it will be reached shortly.
Along with this notification, the GPAI model provider may also submit arguments as to why its model should nevertheless not qualify as having systemic risk, despite exceeding the threshold above which the model is presumed to present such a risk. The onus of contesting that presumption therefore lies with the GPAI model provider, rather than with the Commission. The Guidelines further provide that submitting such arguments does not suspend the requirements that the AI Act imposes on GPAI model providers, and providers will therefore need to comply with the AI Act while the review process is pending.
If a GPAI model provider does not take the self-notification route, it runs the risk of the Commission issuing an ex officio decision designating the GPAI model as having systemic risk. From the moment of designation onwards, the provider will need to comply. It may therefore be wise to prepare for the systemic risk requirements even if you are not entirely sure whether you will exceed the threshold of 10^25 FLOPS, as a sudden designation entails stringent obligations.
The open-source escape valve: exemptions for true open-source GPAI models
Is there any way to escape the regulatory restrictions imposed by the AI Act? Yes, there is one: the exemption for “true” open-source GPAI models. To encourage the free and unconditional distribution and use of GPAI models, the AI Act entitles providers of GPAI models who release their models on an open-source basis to exemptions from certain compliance requirements.
However, there is an exception to the exemption: GPAI models with systemic risk cannot benefit from the open-source exemptions. The Guidelines further set strict requirements for a GPAI model to be considered open source: users of the GPAI model must be able to freely use, modify, and distribute it without payment and subject only to limited conditions, such as crediting the original author(s). Examples of conditions which are not permitted include limitations to non-commercial or research use; prohibitions on further distribution of the GPAI model; usage restrictions such as caps on the number of users; and requirements to obtain a separate commercial license for specific use cases. The Guidelines therefore restrict copyleft-like license structures for GPAI models.
The GPAI Code of Practice: voluntary, yet strongly encouraged
The Guidelines also deal with the legal effect of the GPAI Code of Practice and its approval by the Commission. Even though the GPAI Code of Practice is a voluntary instrument that may be used by GPAI model providers to demonstrate compliance, the Commission strongly encourages its unconditional and full uptake in practice. In the Guidelines, the Commission has stated that it is not permitted to opt out of specific aspects of the Code of Practice. While providers of GPAI models may also prove their compliance with the AI Act in other ways, the Commission considers adherence to a code of practice that has been assessed as adequate to be “a straightforward way of demonstrating compliance”, which will streamline its enforcement activities and enable “increased trust from the Commission and other stakeholders”.
Parties that do not sign the Code of Practice, on the other hand, will face increased regulatory inquiries and may be required to provide gap analyses comparing their compliance framework with the measures included in the Code of Practice. The Commission has therefore indicated in no uncertain terms that it considers it to be in everyone’s best interest, including that of GPAI model providers, to sign the GPAI Code of Practice.
Today, 1 August 2025, the Commission will publish on its website the signatories to the Code of Practice to date. The score so far, since the Code of Practice was published? While some big AI companies have already indicated that they will sign (including Google, OpenAI, Mistral AI, and Anthropic) and others have indicated that they are still considering it (including Microsoft), at least one very important AI model developer will not sign. On 18 July 2025, Meta’s Chief Global Affairs Officer Joel Kaplan announced that Meta will decline to sign the Code of Practice, as Meta considers it to overreach the scope of the AI Act. Whether the GPAI Code of Practice will become widely adopted by GPAI model providers therefore remains to be seen.