Stakeholders point to a number of challenges associated with the use of AI in clinical trials, amongst others the need to ensure transparency by tracing the logic of AI systems, which often appear as a ‘black box’, and the need to reduce bias and ensure fairness of the output of AI processes. The quality of input and training data is another major challenge: real-world data is often not validated and is difficult to access due to a lack of interoperability and standardization of databases.
As yet, specific legislation or regulatory guidance for the use of AI in clinical trials is very limited. Instead, the general legal framework and guidance for software, AI, data protection, data security, Good Clinical Practice (GCP) and, where applicable, medical devices set the rules for such use.

Lack of specific legislation and guidance
AI systems used for conducting clinical trials of medicinal products are subject to the provisions of Regulation (EU) 2024/1689 on artificial intelligence (‘AI Act’), unless they are ‘specifically developed and put into service for the sole purpose of scientific research and development’ (Art. 2(6) AI Act). Under the AI Act, AI systems are allocated to four risk classes, ranging from ‘no or minimal risk’ and ‘limited risk’ to ‘high risk’ and ‘unacceptable risk’, with different rules and requirements associated with each risk class. AI systems used for medical purposes such as patient selection, dosing of the investigational medicinal product or clinical monitoring during the trial are considered ‘high risk’ and are generally subject to the obligation to affix a CE marking based on a full conformity assessment. At the same time, AI systems used for medical purposes are governed by Regulation (EU) 2017/745 on medical devices (‘MDR’) or Regulation (EU) 2017/746 on in-vitro diagnostic medical devices (‘IVDR’), which apply alongside the AI Act.
Further, the ICH E6 GCP guideline applies, amongst others chapter 3.16 on data and records and chapter 4.3 on computerized systems. This means that the handling of data must, at each stage of data processing, conform to the principles of data integrity (including physical integrity and coherence), quality control, and validation to ensure completeness, accuracy, and reliability of the clinical data generated in the trial. The draft version of ICH E6(R3), Annex 2, published on 20 November 2024, contains GCP principles for specific aspects of clinical trials relevant to the use of AI models, such as the handling of real-world data relating to patient health status collected from a variety of sources outside of clinical trials.
EMA reflection paper
In its ‘Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle’ of 9 September 2024, the EMA confirms that the principles of GCP and statistical evaluation apply, and defines additional requirements for data and information to be collected and included in the clinical trial protocol in relation to AI used in the trial:
- The level of GCP requirements applicable to the use of AI depends on the risk involved in such use. If an error can have an impact on patient safety (‘high patient risk’) or substantial regulatory consequences, e.g. by affecting the primary endpoint of a late-stage clinical trial (‘high regulatory impact’), the EMA requires a detailed description of the model architecture, the logs from model development, the training data, and a description of the pre-processing pipeline as part of the clinical trial data.
- AI models used for transformation, analysis, or interpretation of clinical data should comply with ICH E9 and other relevant guidelines on statistical principles for clinical trials. Any models or estimates based on early-phase clinical trial data that are used for planning subsequent clinical trials must be statistically robust.
- While EMA considers AI models used in early clinical phases as ‘often low risk’, this is not always the case, e.g. if AI models are used for the assignment of patients to a specific treatment or for decisions on dosing.
- AI models used in late-stage (pivotal) clinical trials are typically considered ‘high risk’ due to their immediate regulatory impact. For those AI models, the EMA requires that the algorithm version be locked before use in the trial, ruling out incremental learning and any modifications during the trial.
Interplay with MDR/IVDR and AI Act
If AI software is used, for example, for diagnostic purposes to determine whether a patient can benefit from treatment with the investigational medicinal product, the AI is a high-risk AI system under Art. 6(1) AI Act. At the same time, the software is, by definition, an IVD falling within the scope of the IVDR. Very often, the AI-IVD has been developed alongside the medicinal product and is clinically tested simultaneously with that product. Although the testing may be carried out in one single study, three different sets of rules apply:
- the provisions of Clinical Trials Regulation (EU) 536/2014 for the medicinal product,
- the provisions on performance testing of IVD under the IVDR, and
- Art. 60 et seqq. AI Act on testing in real-world conditions.
We expect further clarification by the EU legislator and relevant guidance to simplify regulatory procedures and to help pharmaceutical companies as well as AI-IVD manufacturers navigate the complex regulatory landscape while safeguarding patient safety, patients’ rights, and the reliability of data from clinical trials and real-world condition (RWC) testing.
About the authors:
Dr Manja Epping, partner at Taylor Wessing, advises companies in the life sciences sector on intellectual property and regulatory issues, in particular on the drafting and negotiation of research and development collaborations, manufacturing and distribution agreements and licence agreements, as well as on transactions.
Dr Stefanie Greifeneder, partner at Taylor Wessing, is an expert in regulatory, commercial, and contract law issues in the life sciences sector. She advises on all regulatory and contractual issues in M&A and private equity transactions.