Legal Considerations for Artificial Intelligence in the Life Sciences Sector

This article is the second part in our series on AI and the AI Act. In this series, we delve deeper into specific aspects of the AI Act and their interaction with other rules, sectors and practices.

As with many sectors, AI has profound implications for the life sciences sector, with applications ranging from drug discovery and diagnostics to personalized medicine and clinical trials. While AI promises efficiency, innovation, and improved patient outcomes, it also raises complex legal questions given the specific nature of the sector. 

Following our first blog regarding the current status of the AI Act, this second blog further explores the evolving legal landscape governing AI in life sciences, with a focus on medical device regulation, data protection and intellectual property.

Life sciences, the AI Act and Medical Device Regulation

As discussed in the first part of our series, the EU Artificial Intelligence Act (Regulation 2024/1689) (AI Act) establishes a risk-based framework for AI. The AI Act applies horizontally across all sectors and therefore equally applies to AI systems used in the life sciences sector.

The AI Act generally distinguishes four categories of AI systems and models:

  • AI systems used as part of prohibited AI practices;
  • High-risk AI systems;
  • Limited-risk AI systems which interact directly with individuals or are capable of generating realistic content;
  • General-purpose AI models.

Under the AI Act, various AI systems used in the life sciences sector will be classified as “high-risk”. These include AI systems that are, or are safety components of, medical devices already subject to the Medical Device Regulation (Regulation 2017/745) (“MDR”) and the In Vitro Diagnostic Medical Devices Regulation (Regulation 2017/746) (“IVDR”). The MDR applies to devices (including stand-alone software) that are intended for medical purposes such as diagnosis, prevention or treatment. An AI system qualifies as a medical device, or as a part thereof, if it serves a medical purpose on its own and is intended for use with individual patients. The MDR and the IVDR distinguish various classes of devices, some of which are subject to a third-party conformity assessment. Medical devices that are required to undergo such a third-party conformity assessment fall within the scope of the AI Act if they incorporate an AI system. For example, external hearing aids or remote monitoring devices for active implantable devices which include an AI system will be classified as “high-risk” and will therefore be subject to the stringent obligations under the AI Act.

The AI Act does not duplicate the obligations already included in the MDR and the IVDR, but rather complements them. For example, the AI Act does not introduce a separate or parallel conformity assessment process for such systems. Instead, it mandates that AI-specific requirements, such as those related to data governance, transparency and human oversight, be addressed within the framework of existing MDR/IVDR procedures. This ensures that manufacturers are not subject to conflicting or redundant obligations.

Apart from AI systems as medical devices, other AI systems used in the life sciences sector may also fall within the scope of the AI Act. AI systems intended to be used for biometric categorisation could equally qualify as “high-risk”; this may, for example, include access systems in hospitals based on facial recognition. In addition, as in other sectors, the use of AI-based chatbots in the life sciences sector will be subject to the transparency rules for AI systems which interact directly with individuals or are capable of generating realistic content.

Under the AI Act, the primary responsibility for compliance lies with the provider of the AI system. The AI Act specifies that, in the case of high-risk AI systems that are safety components of medical devices, the product manufacturer shall be considered the provider of the high-risk AI system. However, the AI Act also imposes certain obligations on users, referred to as “deployers”. In the case of high-risk AI systems in the life sciences sector, users, such as hospitals, healthcare professionals or pharmaceutical researchers, must use the system in accordance with the provider’s instructions, monitor its operation, and report any serious incidents or malfunctions. These users must therefore remain vigilant when deploying and using the system and ensure compliance with their respective operational and reporting duties under the AI Act.

Data Protection Considerations

AI systems in life sciences typically rely on vast datasets, including health and genetic data. This may include: 

  • Direct sensitive personal data (e.g. medical history, test results, treatments, disabilities, etc.);
  • Indirect sensitive personal data, i.e. personal data that appear a priori non-sensitive but that may reveal sensitive personal data (e.g. location data relating to hospital visits or certain specific dietary restrictions).

Under the General Data Protection Regulation (“GDPR”), such types of personal data qualify as special categories of personal data, the processing of which is in principle prohibited unless an exception under Article 9(2) of the GDPR applies.

In many cases, health data are processed for a primary purpose, such as the provision of medical treatment, and organisations often seek to repurpose such data for a secondary purpose, e.g. the training or use of an AI model for pharmaceutical research. When data are processed for purposes other than those for which they were originally collected, the GDPR requires a compatibility assessment. Only if the new purpose is deemed ‘compatible’ with the original purpose is no separate legal basis required. The closer the new purpose approximates the initial purpose, the more likely it is to be deemed compatible. The reasonable expectations of the data subjects concerned should also be considered as part of this assessment.

“Scientific research”, however, is granted a special status: as a general rule, the GDPR provides that further processing for scientific research purposes is not deemed incompatible with the original purpose. In this regard, the European Data Protection Supervisor (EDPS) distinguishes ‘genuine research’, which aims to expand society’s collective knowledge and wellbeing, from research that primarily serves private or commercial ends. According to the EDPS, only genuine research benefits from this special status. Where to draw that line remains subject to debate, particularly for medical and pharmaceutical research serving commercial ends. The training of an AI model for a number of purposes, some of which are commercially driven, may therefore not benefit from the compatibility exception for research. In any event, data controllers may also rely on a separate legal basis for a new purpose, such as the data subject’s consent, the necessity for the purposes of a legitimate interest of the data controller or a third party, or the necessity for the performance of an agreement with the data subject.

Data controllers should also ensure compliance with the other general data protection principles when processing (sensitive) personal data as part of the use or training of AI systems. This includes implementing privacy-enhancing techniques, such as pseudonymizing – or even anonymizing – personal data before a dataset is used to train AI models, as illustrated in the sketch below. In addition, appropriate technical and organisational security measures should be taken to adequately protect personal data.
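By way of illustration, the minimal Python sketch below shows one common pseudonymization technique: replacing a direct identifier with a keyed hash (HMAC) before a dataset is handed over for model training. The key, record fields and helper function are hypothetical examples rather than a prescribed implementation; a real deployment would add proper key management and governance controls.

```python
import hashlib
import hmac

# Hypothetical secret key: in practice it would be generated securely and
# stored separately from the dataset (e.g. in a key management system),
# so that re-identification remains possible only for authorised parties.
PSEUDONYMIZATION_KEY = b"replace-with-a-securely-generated-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. a patient ID) with a keyed hash.

    The same key always yields the same pseudonym, so records belonging
    to one patient remain linkable for research purposes without
    exposing the underlying identity.
    """
    return hmac.new(
        PSEUDONYMIZATION_KEY, identifier.encode("utf-8"), hashlib.sha256
    ).hexdigest()

# Strip the direct identifier from each record before the dataset is
# used for model training; only the keyed pseudonym is retained.
records = [
    {"patient_id": "BE-1234", "age": 54, "diagnosis": "T2D"},
    {"patient_id": "BE-5678", "age": 61, "diagnosis": "T2D"},
]
training_set = [
    {"pseudonym": pseudonymize(r["patient_id"]),
     "age": r["age"], "diagnosis": r["diagnosis"]}
    for r in records
]
```

Note that pseudonymized data still qualify as personal data under the GDPR, since re-identification remains possible for whoever holds the key; only effectively anonymized data fall outside the GDPR’s scope.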

Intellectual Property Challenges

Developing AI applications in life sciences often requires substantial investment. Inventors may seek to secure that investment by applying for a patent, which grants an exclusive right to commercialize the invention.

European patent law, generally governed by the European Patent Convention (EPC), allows for the protection of (AI-related) inventions provided they meet the criteria of novelty, inventive step and industrial applicability. However, the EPC excludes mathematical methods and computer programs “as such” from patentability, unless they contribute a “technical effect”. The European Patent Office (EPO) examines this criterion on a case-by-case basis. A technical effect is established, for example, where an AI system improves the control of industrial hardware or enables a more secure way of handling data. Where the AI system provides a technical solution to a technical problem, it may therefore be patentable; one example is a system for measuring blood glucose variability based on an AI application. By contrast, AI systems performing purely non-technical tasks, such as merely improving aesthetics, are not patentable. Nonetheless, the line between patentable subject matter and unpatentable abstract algorithms remains blurred, particularly in AI-driven drug discovery.

In addition, questions arise regarding the patentability of substances or treatments developed by AI without direct human input. The EPO currently requires a human inventor to be designated, complicating the protection of AI-generated inventions.

Another complexity relates to the disclosure requirement when applying for a patent. The scope of the required disclosure is determined by the fictional notion of the person skilled in the art (the “PSA”), described as “a skilled practitioner in the relevant field of technology who is possessed of average knowledge and ability and is aware of what was common general knowledge in the art at the relevant date”. Disclosure of the invention is sufficient when it enables the PSA to carry out the invention. The EPO has indicated on several occasions that, for AI-based inventions, this may also require disclosure of the training data, which is often highly sensitive or commercially valuable. For example, in a case concerning a method for determining the volume of blood pumped by the heart per unit of time, the Technical Board of Appeal of the EPO ruled that the training data set used to develop the neural network had to be disclosed.

Entities in the life sciences sector should therefore carefully consider the optimal strategy to protect their investments when developing or using AI. 

Conclusion and Future Outlook

As AI continues to reshape the life sciences sector, the interplay between innovation and regulation becomes increasingly complex. The AI Act introduces a sector-agnostic, risk-based approach that directly affects life sciences applications, particularly in the realm of medical devices. At the same time, organisations must navigate overlapping legal frameworks such as the GDPR and intellectual property law, each presenting its own set of compliance challenges and interpretative uncertainties.

In our upcoming articles, we will further explore the intersections between the AI Act and other sectors and regulations.