Review Article

Ethical and Regulatory Frameworks for Artificial Intelligence in Clinical Research: A European Perspective on the Artificial Intelligence Act for Ethics Committees and Researchers


Abstract

The rapid integration of artificial intelligence (AI) into clinical research is transforming the landscape of biomedical innovation, influencing numerous phases of research with critical ethical and legal implications. Regulation (EU) 2024/1689, commonly referred to as the AI Act and issued in 2024, introduced a new regulatory framework that classifies AI systems used in clinical settings as ‘high risk’, requiring increased scrutiny by ethics committees and national authorities. This review addresses ethical and regulatory challenges and discusses the application of the AI Act within real-world clinical research. We propose a three-phase lifecycle (training, real-world testing and post-marketing monitoring) to align regulatory burdens with AI maturity. Our recommendations include transparent protocol design with explicit data-use declarations and the complementary application of the Medical Device Regulation and the AI Act, with particular attention to the early research phases. This approach provides practical indications for researchers and operational evaluation criteria for ethics committees, ensuring patient safety while fostering trustworthy AI deployment in clinical trials.


Disclosure: The authors have no conflicts of interest to declare.

Correspondence: Roberto Pini, CNR-IFAC, Via Madonna del Piano 10, 50019 Sesto Fiorentino, Florence, Italy. E: r.pini@ifac.cnr.it

Copyright:

© The Author(s). This work is open access and is licensed under CC-BY-NC 4.0. Users may copy, redistribute and make derivative works for non-commercial purposes, provided the original work is cited correctly.

The integration of artificial intelligence (AI) technologies into clinical research is proceeding at an unprecedented pace. As reported in the literature, the application of AI in the medical field is becoming increasingly broad, starting with oncology, which represents the most prominent therapeutic area, followed by neurology and the cardiovascular field.1

From predictive analytics to diagnostic support, AI is increasingly being used at various stages of the clinical research process, including patient stratification, outcome prediction, image analysis and even protocol optimisation, therapy and surgery.2–5 This widespread adoption is evidenced by a rapidly growing scientific literature, reflecting a shift towards data-driven approaches in medical science.

Despite its potential, the use of AI in clinical research raises significant ethical and regulatory issues. The European regulatory system is very different from that of the US, where the Food and Drug Administration (FDA) is the central authority for the approval of medical devices that integrate AI and machine learning (ML) in different medical disciplines. The FDA has cleared around 100 AI- and ML-enabled software products in the cardiovascular field.6

In Europe, there is no central authority for these AI products as a decision-support system; however, AI-/ML-enabled devices can also be used in the context of clinical studies to generate evidence to support a marketing authorisation application. The European Medicines Agency, together with the European Commission and the national competent authorities of the Member States of the European Economic Area, coordinates the regulatory system for medicinal products and published a reflection paper on the use of AI in the lifecycle of medicinal products in September 2024.7

Ethics committees can play a central role in the regulatory process for the approval of clinical studies. However, a joint evaluation of clinical studies focusing on AI between ethics committees and competent authorities is currently missing. Furthermore, it should be noted that studies involving technologies that do not fall within the scope of medical devices or drugs do not have a clearly defined, uniform regulatory framework across countries, and the main requirements are related to good ethical practices and good clinical practice. This type of study often involves preliminary research into AI technologies that use human data. The lack of a clear regulatory framework has notable consequences: for researchers, it results in protocols that lack transparent information, such as model properties and the data used for training, testing and validation; for ethics committees, there are heterogeneous guidelines for research and evaluation criteria.

In this regard, ethics committees – traditionally responsible for protecting the rights and well-being of patients and clinical study participants in the evaluation process of clinical research using new drugs and medical devices, including software used as medical devices (SaMDs) – have recently had to face the additional complexity of evaluating studies involving AI systems under very different application conditions. Given their important role in advancing clinical research, which involves providing binding opinions on the conduct of such studies, they have often had to deal with issues such as opaque algorithms, large-scale data processing and automated decision-making systems.8–11 These developments are reshaping the landscape of ethical oversight.

In the case of ethics committees operating in the EU, this framework has recently been the subject of regulatory developments, with the issuance in August 2024 of Regulation (EU) 2024/1689, the so-called AI Act.12 This regulation classifies AI systems as high risk when used in the clinic as medical devices or as software integrated into existing medical devices. Therefore, in addition to the rules established for the medical device market in Europe (Regulation (EU) 2017/745 for medical devices13), there may be some additional provisions in the AI Act that need to be observed for AI systems. Moreover, the recent document of the Joint Artificial Intelligence Board and Medical Device Coordination Group provides useful guidance on the simultaneous and complementary application of the Medical Device Regulation (MDR)/In Vitro Diagnostic Medical Devices Regulation and AI Act to medical devices containing high-risk AI systems.14

This article aims to highlight and discuss some of the emerging critical issues in the design of AI-based clinical trials subject to ethical review. Moreover, it suggests recommendations regarding the main aspects that should be considered in the study protocol to counteract the opacity of AI models, particularly in the preliminary research phase of development of AI technologies that use human data. Drawing on recent literature, current regulations and practical experience, we explore the complex ethical challenges and considerations that arise when AI enters the field of clinical research, with particular attention to the rapidly evolving ethical and regulatory environment in which both clinical researchers and ethics committees must operate. Then, we discuss the application of the European AI Act within clinical studies. We propose a structured method for the classification of the development stage of AI systems, divided into training phases, real-world testing and post-marketing monitoring. We draw particular attention to the early training phase where the use of AI systems represents a preliminary phase of research, when they do not yet have a direct impact on patient safety or interfere with medical decisions. As this early research stage is not yet clearly regulated, we provide a list of key recommendations that can help researchers to better describe the framework of their studies and provide ethics committees with practical criteria for evaluation.

The Challenges Posed by Opaque Algorithms and Data Governance

One of the main challenges facing medical researchers preparing clinical trial protocols – and then ethics committees evaluating them – is the opacity of AI methods. This opacity is inherent in some aspects of ML, but in many cases also depends on factors that are generally not adequately considered in the study design. Added to this is the rapid pace at which new methods and algorithms are developed, which further complicates the picture. It may be helpful to summarise the main issues that contribute to this opacity, followed by some practical examples from real-world scenarios. According to our experience and the recent literature (e.g. Fernández15), three main opacity factors can be recognised (Table 1). These include:

  • the technical complexity of the AI model, with too many variables and parameters to be meaningfully interpreted by humans;
  • a lack of transparency on the part of developers or users regarding the functioning and design choices of the algorithms; and
  • reliance on historical data and implicit biases, which are difficult to detect without dedicated interpretability tools.

Table 1: The Main Causes of Algorithmic Opacity in Artificial Intelligence Models


In practical cases of clinical study protocols, this picture is much more complicated, for example when the aim of the clinical protocol is to validate a medical device incorporating an AI system in a real-world setting. In our experience, one of the crucial issues often not adequately considered concerns data governance regarding the origin, collection, testing and intended use of data. In this case, the manufacturer or promoter must ensure that the datasets are representative of the target population and, as far as possible, free of errors and complete. Healthcare data can originate from a multitude of sources: electronic health records, medical imaging systems (such as CT, MRI or ultrasound), laboratory tests, genomic and proteomic analyses, digital pathology, data from wearable sensors and even patient-reported outcomes collected through mobile apps or questionnaires. These datasets are often heterogeneous, spanning structured fields (e.g. vital signs, drug prescriptions), unstructured text (e.g. clinical notes) and high-dimensional data (e.g. imaging or multi-omics profiles).

The data collection methods vary depending on the context: retrospective mining of existing databases, prospective acquisition within clinical trials or passive collection via digital devices. Each method raises distinct challenges in terms of data quality, completeness, standardisation and bias. In this regard, researchers should always state clearly in the study protocol what the data will be used for. Whether the goal is to train a new predictive model, to validate and test an existing algorithm on a new population, to identify previously unknown disease subtypes or to support clinical decision-making, the quantity, characteristics and quality of the data must be appropriately aligned with the intended use of the AI systems, which must be explicitly stated. Additionally, researchers must describe the data according to the aim of the study, clarifying whether it focuses on improving diagnostic accuracy, enhancing prognostic stratification, optimising treatment pathways or generating real-world evidence.

These requirements respond not only to a methodological necessity but also to an ethical imperative: they ensure that data are used responsibly, that patients’ contributions are respected and that the risks associated with secondary data use (e.g. privacy breaches and misuse of sensitive information) are minimised through appropriate governance frameworks. As outlined in the “Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles” (August 2025), data collection protocols must ensure adequate representation of the target population (e.g. age, sex, race and ethnicity) in clinical studies and datasets.16 This promotes generalisable results, mitigates bias and helps evaluate model performance across diverse conditions, identifying potential limitations and ensuring safe, effective use in real-world settings.

EU Ethical and Regulatory Aspects for AI in Clinical Research

In general, EU ethical principles include dignity, self-determination, solidarity and the precautionary principle, as well as necessity and proportionality. Specific principles for AI can also be identified, as outlined in 2019 in the “Ethics Guidelines for Trustworthy AI” drafted by the independent High-Level Expert Group on AI established by the EU Commission.17 They include human agency and control, technical robustness and safety, transparency, respect for fundamental rights and protection of personal data, social and environmental welfare and accountability.18

These principles represent the basis of the AI Act issued in 2024, the first comprehensive regulatory framework for AI within the EU, which classifies AI systems according to their potential risks to users, leading to varying levels of regulatory oversight depending on the assessed risk. The AI Act has marked a groundbreaking advancement within the EU legal framework, as it has established rules for developing AI-based products through a risk-based approach.

Ethical Principles under the EU Legal System

EU law seeks “to strengthen the protection of fundamental rights in the light of changes in society, social progress and scientific and technological developments by making those rights more visible in a Charter.”19 The EU ethical principles that can be applied to the use of AI are briefly described below.20,21

Human rights protection and safeguarding personal data:

  • AI must respect human rights and fundamental interests under EU law, according to national constitutions, the 1950 Rome Convention and EU Charter.22,23
  • AI must uphold human dignity, which respects the core of human rights.24
  • AI must protect personal data and the rights of minors under 18 years of age. These systems must not undermine equality or allow discrimination.24,25

Human agency, empowerment and transparency:

  • AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy and functions in a way that can be appropriately controlled and overseen by humans.12
  • AI systems must allow self-determination, where individuals have the right to be informed and consent to the processing of their personal data.19
  • Human agency is also connected to ‘transparency’, which means: “AI systems are developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.”12

Technical robustness and the precautionary principle:

  • EU sources indicate that AI use poses risks related to cyber threats, personal safety (e.g. home appliances), and the dignity and rights of individuals. Thus, AI systems must be developed and deployed to be robust against faults and resilient to unlawful alterations by third parties, minimising unintended harm.12

Social welfare, solidarity and proportionality:

  • AI systems are developed and used in a sustainable and environmentally friendly manner as well as to benefit all human beings while monitoring and assessing the long-term impacts on the individual, society and democracy.12
  • AI technologies must be used in a socially responsible way to seek solutions while promoting fundamental values and the rule of law.26
  • When using AI, vulnerable individuals should be protected.12
  • The social and sustainable function of AI also implies proportionate use.

Accountability:

  • All actors are responsible for AI use from a civil, administrative, and criminal law perspective in the event of non-compliance with the regulations.
  • Communication mechanisms must ensure accountability for AI systems and their results, both pre- and post-deployment.27
  • It is essential to establish internal and external auditors; evaluation reports are also required to significantly enhance the technology’s trustworthiness.

The AI Act: A Comprehensive Regulatory Framework in a Diverse International Context

The international landscape of AI regulation in which the European AI Act was developed, discussed and issued has seen many other countries engaged in similar efforts, resulting in diverse approaches to governance. In the US, the federal structure, under which each state retains its own legislative powers, has made it challenging to implement a unified AI policy, and there is currently no comprehensive federal AI law. Nevertheless, the US has adopted an industry-specific regulatory framework through executive orders and agency guidance, with the FDA leading the oversight of AI in the medical field through its SaMD framework and AI-/ML-based medical device guidelines. China has implemented comprehensive AI regulations with strict requirements for medical AI approval through the National Medical Products Administration. In July 2023, China issued the “Interim Measures for the Management of Generative Artificial Intelligence Services.”28 This measure defines safety and intellectual property protection standards for generative AI services and emphasises adherence to the country’s socialist principles. Japan has established AI governance principles emphasising a human-centric AI society while maintaining flexible regulatory approaches, and the Pharmaceuticals and Medical Devices Agency is developing specific pathways for AI medical devices. Moreover, in June 2025, Japan approved a law to promote AI research and development and proposed the establishment of a new international dialogue framework on AI regulation, highlighting the importance of global cooperation to address common challenges. Other nations are moving in the same direction of seeking regulatory harmonisation. South Korea is preparing an AI framework (expected in January 2026) that will consolidate seven existing AI laws and introduce ethical guidelines and the “AI Basic Act” enforcement decree.29

This international picture reflects common themes: risk-based approaches, sector-specific adaptations, an emphasis on transparency and explainability and the challenge of balancing innovation and safety, in the case of direct implications to patients. However, the diverse international situation has not yet provided common specialised regulatory pathways. In this context, the EU has focused its efforts to provide a comprehensive approach through its AI Act, which presently represents one of the most ambitious attempts at AI regulation worldwide.

The AI Act took effect on 1 August 2024 and will become fully applicable according to the following expected timeline:

  • prohibitions on AI systems posing unacceptable risks came into effect on 2 February 2025;
  • codes of practice apply 9 months after the regulation came into effect;
  • rules on general-purpose AI systems, which must meet transparency standards, apply 12 months after the regulation came into effect;
  • on 2 August 2026, all remaining provisions become applicable, except those specifically related to high-risk systems; and
  • obligations concerning high-risk AI systems apply 36 months after the regulation came into effect, providing a longer compliance period.

Along with the implementation of these mandatory provisions and obligations on high-risk AI systems, we reasonably expect the issuance of specific guidelines for their application in various fields, such as medicine.

At the time of writing, the first provisions of the AI Act (Chapters I and II) entered into force on 2 February 2025, in accordance with Article 113. Chapter I covers the general subject matter of the regulation, namely the improvement of the internal market and the promotion of trustworthy, human-centric AI while ensuring a high level of protection of health, safety and fundamental rights (Article 1); the scope of application (Article 2); the applicable definitions (Article 3); and the general AI literacy obligation of providers and deployers towards their own personnel and anyone else involved in the operation of AI systems on their behalf (Article 4). The European Commission has provided guidelines specifying the practical implementation of Article 6 (i.e. the classification rules for high-risk AI systems), with a list of practical examples of high-risk use cases. Chapter II sets out prohibited AI practices (Article 5), banning the marketing, use and deployment of specific AI systems deemed excessively hazardous, such as real-time biometric recognition. As a result, this regulation must be heeded not only by manufacturers and vendors of the AI systems mentioned in Article 5, but also by their users.

Key Considerations in the Application of AI Systems within Clinical Studies

Defining the Applicable Regulatory Framework

Before delving into the regulatory landscape of clinical investigations, it is worth introducing some general considerations on how to define the applicable regulatory requirements. To accurately define the regulatory context in which a clinical study is conducted (regardless of whether it involves an AI component), it is essential, alongside the objectives of the study, to determine the product’s intended purpose, its ability to fulfil that purpose and whether the product is in a pre- or post-marketing phase.

The intended purpose (i.e. the use for which a device is intended by the manufacturer) is fundamental in identifying the applicable regulatory framework. A product must comply with any applicable EU regulation, directive, or local law aligned with this purpose.

However, if a product is not yet capable of fulfilling its intended purpose, the corresponding regulatory framework does not apply, because the goal of the investigation cannot be the evaluation of the safety and/or performance of the investigated product. This is especially relevant for AI systems, which, in early development, are typically still undergoing training using data collected through a clinical study. At this stage, regardless of the system’s future intended use, it does not yet meet its declared purpose. Therefore, as discussed in the previous sections, only general provisions (such as those related to data protection, ethical standards and good clinical practice) are applicable. However, even in the early stages of development, it would be useful for researchers to consider the mandatory aspects of later stages. In particular, they should highlight key determinants to ensure the system’s transparency, drawing inspiration from the requirements set out in the general-purpose AI code of practice.30

Another important distinction is whether the study occurs in the pre- or post-marketing phase. A post-marketing study gathers real-world data to support continued compliance and performance monitoring of a CE-marked product used according to its approved purpose. If used outside that purpose, it is considered pre-marketing. Conversely, a pre-marketing clinical study aims to generate evidence required to demonstrate conformity with applicable regulatory requirements before the product is placed on the market. In such cases, the investigational device must comply with all applicable requirements except those specifically addressed by the clinical study itself. Furthermore, all necessary precautions must be taken to ensure the health and safety of study participants.

Scope of the AI Act in Clinical Research Contexts

The regulatory system described above underpins the proposed assessment method, which is based on the classification of an AI system’s development stage, as outlined in the next section ‘Proposed Evaluation Method Based on the Development Stage of AI Systems’. From the legal point of view, according to Chapter I of the AI Act, it is possible to identify the point at which a transition occurs between research activities excluded from the application of the AI Act and those for which the regulation becomes operational, in the context of clinical studies (general research without medical devices or medicinal products), clinical investigations (focused on medical devices) and clinical trials (focused on medicinal products).

The AI Act does not apply to the research, testing or development of AI systems before they are placed on the market or put into service. This exclusion covers research conducted not only by nonprofit organisations but also by private companies. In any case, such activities must still comply with existing EU law. The Regulation also does not apply to AI systems or AI models, including their outputs, specifically developed and commissioned for the sole purpose of scientific research and development.12

However, testing in real-world situations is not included in this exception. The specific moment in which the research activity falls under the AI Act is when it involves “testing in real world conditions,” which “means the temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it does not qualify as placing the AI system on the market or putting it into service.”12 This means temporarily testing an AI system for its planned use outside a laboratory or simulated environment to gather reliable data and check that the system meets Regulation requirements.

Thus, the AI Act applies when the evaluation of clinical trials or clinical investigations is considered part of the pre-marketing phase and the medical device is classified as a high-risk AI system.

In the case of a study of a medical device aimed at evaluating the performance of a specific AI system and validating an AI model for diagnostic and/or prognostic purposes, the applicable regulatory framework necessarily includes at least the MDR and the AI Act.19 The provisions imposed by both must be considered by the sponsor when setting up the investigation, in addition to medical device development, as well as by the ethics committee and the competent authority for study assessment. Of note, a SaMD is a specific type of medical device, but this does not alter the approach. Furthermore, the applicable regulatory framework may include additional regulations or directives, depending on the specific device and its intended purpose. Evaluations of clinical trials or clinical investigations that involve only research, testing and development activities related to AI systems or AI models must always adhere to the general principles of trustworthy AI, even if the AI Act is not applicable. In addition, it is essential to respect the fundamental rights of research participants before these systems or models are placed on the market or put into service.27 This means that, according to the European legal sources mentioned above, to avoid the risks of AI and increase its benefits, AI must be lawful: it must comply with the applicable laws and regulations, and it must also be ‘ethical’, in the sense that it must not undermine interests and values protected by the legal system and must therefore respect an ‘ethical framework’, as the European Parliament refers to it.31,32 This emphasis on ethics is important because the technology naturally raises new legal and ethical questions.25 The ethical dimension of AI is not a luxury feature or an add-on: it needs to be an integral part of AI development. For a summary of AI Act application fields, see Table 2.

Table 2: Summary of the Application Fields of the Artificial Intelligence Act in Clinical Research


Proposed Evaluation Method Based on the Development Stage of AI Systems

Ethics committees are currently evaluating the experimental use of AI systems according to general criteria such as scientific validity and the proposed methodology, among others. Currently, there are no universally recognised guidelines or standards for the medical use of AI systems. Hence, researchers proposing clinical studies should justify their approach by providing a clear objective, a solid scientific basis, background and documentation on how AI can promote research and clinical benefits, along with why it is appropriate for the specific purposes of the research. A particularly critical situation often occurs in clinical studies where researchers are using AI for the first time and fail to clarify that the experimental use of AI represents only a preliminary phase of research, with the sole objective being the training and validation of the AI system. Specifically, as discussed in the previous section ‘Defining the Applicable Regulatory Framework’, researchers must clearly state that the object under study is not yet capable of meeting its intended purpose, and that the use of patient data neither directly impacts their care nor interferes with medical decisions. In this context, the clinical study cannot be classified as a clinical investigation (i.e. the MDR does not apply). This preliminary phase of research should be considered favourably by ethics committees (in line with Recital 25 of the AI Act), because it would allow testing new algorithms and methodologies for analysing clinical data without introducing unacceptable risks to patients.12 However, researchers must provide adequate information to enable ethics committees to determine whether the general criteria of ethicality and scientific validity are met.

In general, three successive design phases can be recognised when evaluating the stage of development of an AI system that uses health data from healthy subjects or those with disease. They may be the objective of different clinical studies, with distinct regulatory burdens (Figure 1):

  • Phase 1 is training of the AI system.
  • Phase 2 is testing in real-world conditions for the evaluation of performance and conformity (pre-marketing).
  • Phase 3 is monitoring after placing the AI system on the market (post-marketing).

Figure 1: Successive Design Phases of an Artificial Intelligence System (or of a Medical Device Including an Artificial Intelligence System) with Distinct Regulatory Burdens


Before detailing each phase, it is worth outlining how data are used in supervised ML. In this context, datasets are typically divided into three distinct subsets, each of which plays a specific role in the model development process (Table 3):

  • The training set is used to teach the algorithm how to make predictions. It consists of labelled data (inputs and known outputs) that allow the model to learn the underlying patterns and relationships.
  • The validation set is used during the training phase to fine-tune the model’s hyperparameters and to monitor its performance. Although the model ‘sees’ these data during training, this dataset is not used to update the model weights directly.
  • The test set is a completely unseen dataset used after training and validation are complete. It should provide an unbiased evaluation of the model’s generalisation ability and real-world performance.

Table 3: Overview of Dataset Roles in Supervised Learning

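To make the three-way partition above concrete, the following minimal Python sketch shows a reproducible split routine. It is purely illustrative: the record names, 70/15/15 proportions and fixed seed are our assumptions, not requirements drawn from any guidance cited in this review.

```python
import random

def split_dataset(records, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle and partition records into training, validation and test sets.

    The training set teaches the model, the validation set guides
    hyperparameter tuning during development, and the held-out test set
    is used only once, for an unbiased estimate of generalisation.
    """
    rng = random.Random(seed)          # fixed seed -> reproducible, auditable split
    shuffled = records[:]              # copy, so the original ordering is preserved
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # everything remaining is held out
    return train, val, test

# Example: 100 pseudonymised patient identifiers split 70/15/15
patients = [f"patient_{i:03d}" for i in range(100)]
train, val, test = split_dataset(patients)
print(len(train), len(val), len(test))  # 70 15 15
```

Recording the seed and the split proportions in the protocol, as in this sketch, is one simple way to make the dataset-division strategy traceable for reviewers.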

Phase 1: Training

The purpose of training the AI system must be clearly identified as the objective of the research protocol, making clear that, even if its intended purpose would qualify the AI system as a medical device, that purpose cannot yet be fulfilled at this stage. The research team must also demonstrate that, at this stage, AI is used only for research and will not interfere with medical decisions.

If the intended purpose of the AI system does not fall within the definition of a medical device (e.g. when it is used solely for statistical, administrative or organisational tasks, such as patient engagement or trial management), the considerations outlined above do not apply, because the system does not qualify as a medical device.

To be confident that AI training procedures are correctly designed and carried out in this preliminary phase, we propose that regulatory assessment by ethics committees should also be based on the following criteria:

  • The rationale and criteria used for selecting the dataset populations must be clearly described. It is essential to specify how and why a particular population has been chosen for the algorithm’s development and evaluation phases.
  • Whether and how pre-trained systems or tools provided by third parties are used, integrated or modified for the purposes of the project should be mentioned.
  • The characteristics of the input data, how they are divided into training, validation and test sets, as well as the related procedures used should be described.
  • Whether the data are pre-processed (e.g. through normalisation and/or harmonisation procedures33) before being given as input to the algorithm, and which procedures have been used, should be indicated.
  • Whether the output data undergo post-processing before being displayed as a result to the user should be indicated.
  • Whether the input data are generated synthetically should be specified and, if so, all the necessary information must be provided, along with a justification of the choice for their use.
  • The entire pipeline followed by the data must be explained and discussed in all its parts: data collection, possible pre-processing, dataset division strategies (e.g. cross-validation), training/validation/testing of the algorithm, algorithm output and possible post-processing.
  • The access privileges to the data and to the AI system being trained, including any restrictions, must be described. The computational environment used for training, validation, and testing of the algorithm must be clearly specified. It is important to indicate whether these activities are carried out on local dedicated infrastructures or if they make use of cloud-based platforms or third-party resources.
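To make the pipeline-documentation criteria above concrete, the following Python sketch (assuming scikit-learn; the data, the choice of normalisation, the model and the cross-validation scheme are all illustrative assumptions, not a prescription) shows how pre-processing and the dataset division strategy can be declared explicitly and reproducibly:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical input data: 120 samples, 4 features, binary labels.
rng = np.random.default_rng(seed=1)
X = rng.normal(size=(120, 4))
y = rng.integers(0, 2, size=120)

# Declaring the pre-processing step (here, normalisation) and the model
# inside one pipeline makes the full data flow explicit, and ensures the
# normalisation statistics are fitted only on the training portion of
# each fold rather than leaking from the held-out data.
pipeline = Pipeline([
    ("normalise", StandardScaler()),
    ("model", LogisticRegression()),
])

# Dataset division strategy: 5-fold cross-validation with a fixed seed,
# so the exact splits can be reported and reproduced.
cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(pipeline, X, y, cv=cv)
print(scores.shape)  # one score per fold: (5,)
```

A protocol that reports the pipeline definition, the cross-validation object and the seeds used answers several of the criteria listed above (pre-processing, division strategy and reproducibility) in a single, auditable artefact.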

In Phase 1, ethical considerations must highlight the absence of direct or indirect benefit for the subjects from whom the health data originate, and this must be stated in the informed consent form. The informed consent form must, in any case, inform patients that their data will be used to train AI technologies. It should also clarify that, at this development stage, the system will neither provide diagnostic or therapeutic benefits nor pose any risk to the patient, and that it will have no impact on the responsibility for clinical decision-making.

Phase 2: Testing in Real-world Conditions

This phase refers to the verification of the system’s performance on a database different from the one used in Phase 1.

The objective of the clinical study is to evaluate the safety and performance (i.e. the ability to achieve its intended purpose) of a specific AI system, that is, to validate an AI model under real-world conditions. Because at this stage the AI system can fulfil its intended purpose, the first step is to identify the applicable regulatory framework, including whether the MDR applies. To this end, it is necessary to assess the intended purpose of the product (regardless of whether the AI system is itself the product or a component of the product) and verify which legal definitions it meets under the relevant Union legislation.

Regarding a high-risk AI system, Article 8 of the AI Act states: “Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation”.12

This approach reflects the general regulatory approach already in place for all products intended to be placed on the EU market: compliance with all applicable EU harmonisation legislation is required, and the AI Act makes no exception in this regard. When an AI system is integrated into a product or is a product itself, it inherits all relevant obligations arising from existing sectoral regulations, in addition to the specific provisions laid down in the AI Act, where applicable.

In the context of clinical research, the object of a study may or may not qualify as a medical device. If it does, then full compliance with the MDR is mandatory. In this case, in addition to having an intended purpose that meets the definition of a medical device under Article 2 of the MDR, a study must also qualify as a clinical investigation for the MDR to apply. This means that the research objective must fulfil the requirements set out in Articles 62–82 of the MDR.13

It should be noted that the AI Act does not specifically refer to medical devices based on AI; rather, it is limited to generically regulating high-risk AI systems. Despite this generality, Regulation (EU) 2024/1689 introduces some requirements that are new with respect to Regulation (EU) 2017/745 on medical devices whenever AI systems are used for medical purposes or are included in medical devices for medical purposes. Briefly, with reference to the articles of the AI Act, the additional specifications required include:

  • data governance to train AI systems, based on a proper choice of data sets for training, validation and testing, which must be subjected to adequate governance and management procedures (Article 10);
  • specific technical documentation for high-risk AI systems, to be provided before the operative use of the system and that shall be kept up to date (Article 11);
  • record-keeping over the entire lifetime of the system (Article 12);
  • transparency and human oversight during the period in which AI systems are in use, carried out by people who should be able to properly understand the capacities and limitations of the AI system and to decide, in any particular situation, not to use the AI system or interrupt it if necessary for safety reasons (Articles 13 and 14); and
  • accuracy, robustness and cybersecurity, which must be maintained throughout the lifecycle of the AI system (Article 15).

It is clear that further effort is needed to better focus these requirements on the medical field, for example, to better define the skilled personnel who will provide human oversight during AI use in medical care, and whether these experts will need to be trained for such a task.

In this phase, ethics committees should ensure that, as part of the informed consent process, patients are guaranteed the right to be informed about the use of AI technologies, about the advantages in diagnostic and therapeutic terms, about the risks from the use of the technology and about the responsibility of the decision-making process.

Phase 3: Post-marketing Monitoring

This phase refers to the monitoring of the performance of a high-risk AI system when placed on the market. The investigation involves the evaluation of AI systems that have already obtained the CE marking for the intended use investigated in the study protocol. The aim is to evaluate the safety and performance under real-world conditions, thereby gathering evidence to continuously support the declaration of conformity. Moreover, in this phase, where relevant, the AI Act introduces the need to address interactions with other AI systems, including other devices or software.

Conclusion

The integration of AI into clinical research marks a fundamental transformation in the methodologies, regulatory frameworks and ethical paradigms governing biomedical innovation. With the increasing integration of AI systems in medical devices, diagnostics and decision support applications, the challenges facing key stakeholders (researchers, ethics committees, regulators, developers and patients) are becoming increasingly complex and urgent.

The EU has responded to these emerging needs through comprehensive legislative instruments, in particular Regulation (EU) 2024/1689 (the AI Act), which has established a structured and risk-based framework for the governance of AI systems. The classification of medical AI applications as high-risk entails stringent obligations regarding data quality, technical documentation, human oversight, transparency and post-marketing surveillance. The convergence of these provisions with the current MDR (EU) 2017/745 reinforces the multidimensional nature of compliance in AI-based clinical trials.

Crucially, ethics committees play a central and evolving role in this landscape. No longer limited to the assessment of traditional biomedical protocols, these bodies must now tackle algorithmic opacity, computational interpretability and the implications of data-driven automation in clinical environments. Their responsibilities include not only participant protection and informed consent, but also broader considerations such as fairness, accountability and protection of fundamental rights. This requires enhanced interdisciplinary expertise, the integration of technical evaluation criteria and the consistent application of ethical principles across Member States, as well as the promotion of training courses, refresher seminars and topical meetings for ethics committee members and researchers. Indeed, Article 4 of the AI Act requires AI literacy on the part of providers and deployers of AI systems. More generally, the Ethical Guidelines for Trustworthy AI call for the promotion of training and education so that all stakeholders are informed about trustworthy AI as an ethical requirement. In this context, it is also important to emphasise the involvement of consumers (and patients) in the design and conduct of clinical trials using AI, as explicitly highlighted in Recital 73 of the AI Act. That recital makes it clear that, for high-risk AI systems such as those used in clinical research, the design, development and ongoing oversight should involve all relevant stakeholders, explicitly naming consumer associations and patient organisations among them. This ensures that clinical protocols and conformity assessments are co-designed with the interests and perspectives of those ultimately affected.

We also suggest that ethics committees adopt internal documents, such as guidelines, explanatory notes and/or checklists, to help researchers define more clearly the AI system they want to develop and test in the clinical studies they submit for evaluation. This would ensure that the relevant information is available to support transparency and explainability. In this regard, we have proposed a framework structured into three distinct development phases (training, real-world testing and post-marketing monitoring), which offers a pragmatic and scalable model to manage the lifecycle of clinical AI systems. It distinguishes between exploratory algorithm development and regulated validation, ensuring that regulatory burdens adequately match the maturity and clinical impact of the technology. At the same time, it offers criteria that may be useful to ensure that studies on early-stage AI technology have been properly planned. Moreover, as many AI applications in medicine are classified as high-risk systems, the internal document should invite researchers to conduct and submit to the ethics committee an assessment that can help to identify and reduce potential risks to patients’ fundamental rights, in line with the Fundamental Rights Impact Assessment in Article 27 of the AI Act.

In summary, the implementation of AI in healthcare must be driven not only by innovation and utility, but also by an ongoing commitment to ethical integrity, transparency and social responsibility. Preserving individual autonomy, protecting vulnerable populations, and ensuring fair representation of diverse demographics in training datasets are not add-ons, but indispensable pillars of trustworthy AI. Finally, a dialogue among all stakeholders – scientific, legal, ethical and technological – is essential and must be maintained continuously because developments in research and the use of AI systems are proceeding at an accelerated pace. Only through such collaborative and thoughtful approaches can AI technologies fulfil their promise to advance medical knowledge and improve patient care, while remaining aligned with fundamental values of human dignity and the public interest.

References

  1. Askin S, Burkhalter D, Calado G, El Dakrouni S. Artificial intelligence applied to clinical trials: opportunities and challenges. Health Technol (Berl) 2023;13:203–13. 
    Crossref | PubMed
  2. Karalis VD. The integration of artificial intelligence into clinical practice. Appl Biosci 2024;3:14–44. 
    Crossref
  3. Scapicchio C, Gabelloni M, Barucci A, et al. A deep look into radiomics. Radiol Med 2021;126:1296–311. 
    Crossref | PubMed
  4. Borgheresi R, Barucci A, Colantonio S, et al. NAVIGATOR: an Italian regional imaging biobank to promote precision medicine for oncologic patients. Eur Radiol Exp 2022;6:53. 
    Crossref | PubMed
  5. Berti A, Carloni G, Colantonio S, et al. Data models for an imaging bio-bank for colorectal, prostate and gastric cancer: the navigator project. Presented at: IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Piscataway, NJ, 27–30 September 2024.
  6. US Food & Drug Administration. Artificial Intelligence-Enabled Medical Devices. 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices (accessed 12 October 2025).
  7. European Medicines Agency. Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle. 2024. https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf (accessed 12 October 2025).
  8. Hillis JM, Visser JJ, Cliff ERS, et al. The lucent yet opaque challenge of regulating artificial intelligence in radiology. NPJ Digit Med 2024;7:69. 
    Crossref | PubMed
  9. Khalili M. Against the opacity, and for a qualitative understanding, of artificially intelligent technologies. AI Ethics 2024;4:1013–21. 
    Crossref
  10. Barucci A, Neri E. Adversarial radiomics: the rising of potential risks in medical imaging from adversarial learning. Eur J Nucl Med Mol Imaging 2020;47:2941–3. 
    Crossref | PubMed
  11. Barucci A, Colcelli V, Gottard A. Imaging biobank: what are the areas of the GDPR bearing on an image biobank? In: Colcelli V, Cippitani R, Brochhausen-Delius C, Arnold R, eds. GDPR requirements for biobanking activities across Europe. Cham, Switzerland: Springer, 2023;241–51. 
    Crossref
  12. EUR-Lex. Regulation (EU) 2024/1689 of the European Parliament and of the Council. 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689 (accessed 12 October 2025).
  13. EUR-Lex. Regulation (EU) 2017/745 of the European Parliament and of the Council. 2017. http://data.europa.eu/eli/reg/2017/745/oj (accessed 12 October 2025).
  14. Joint Artificial Intelligence Board and Medical Device Coordination Group. Interplay between the Medical Devices Regulation (MDR) & In Vitro Diagnostic Medical Devices Regulation (IVDR) and the Artificial Intelligence Act (AIA). 2025. https://health.ec.europa.eu/document/download/b78a17d7-e3cd-4943-851d-e02a2f22bbb4_en?filename=mdcg_2025-6_en.pdf (accessed 12 October 2025).
  15. Fernández A. Opacity, machine learning and explainable AI. In: Lara F, Deckers J, eds. Ethics of artificial intelligence. Cham, Switzerland: Springer, 2023;39–58. 
    Crossref
  16. US Food & Drug Administration. Predetermined change control plans for machine learning-enabled medical devices: guiding principles. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles (accessed 12 October 2025).
  17. Publications Office of the European Union. Ethical guidelines for trustworthy AI. 2019. https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1 (accessed 12 October 2025).
  18. EUR-Lex. Regulation (EU) 2024/1689 of the European Parliament and of the Council https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689 (accessed 12 October 2025).
  19. Conference of INGOs of the Council of Europe. Preamble of the Charter Of Fundamental Rights Of The European Union. 2008. https://rm.coe.int/16802f5eb7 (accessed 12 October 2025).
  20. Colcelli V, Burzagli L. Elements for a European culture of AI tool development: the white paper on artificial intelligence and ethical guidelines for trustworthy AI. Rev Justicia Derecho 2021;4:1–12 [in Spanish]. 
    Crossref
  21. Cornejo-Plaza I, Cippitani R. Ethical and legal considerations of artificial intelligence in higher education: challenges and perspectives [in Spanish]. Rev Educ Derecho 2023;28. 
    Crossref
  22. EUR-Lex. Regulation (EU) 2024/1689 of the European Parliament and of the Council. 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689 (accessed 12 October 2025).
  23. European Commission. Explanatory memorandum of the proposal for an AI regulation. 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206 (accessed 12 October 2025).
  24. Conference of INGOs of the Council of Europe. The Charter Of Fundamental Rights Of The European Union. https://rm.coe.int/16802f5eb7 (accessed 12 October 2025).
  25. EUR-Lex. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: building trust in human-centric artificial intelligence. 2019. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52019DC0168 (accessed 12 October 2025).
  26. European Parliament. Report on artificial intelligence in education, culture and the audiovisual sector. 2021. https://www.europarl.europa.eu/doceo/document/A-9-2021-0127_EN.html (accessed 12 October 2025).
  27. European Parliament. European framework on ethical aspects of artificial intelligence, robotics and related technologies. 2020. https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2020)654179 (accessed 12 October 2025).
  28. National Development and Reform Commission of the People’s Republic of China. Interim Measures for the Management of Generative Artificial Intelligence Services. Promulgated on 13 July 2023 (translated from Chinese). https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (accessed 28 November 2025).
  29. Ministry of Science and ICT of Korea: Press Releases. MSIT Announces Legislative Notice for the Enforcement Decree of the AI Basic Act to Foster the AI Industry and Build a Foundation for Safety and Trust. Issued on 13 November 2025. https://www.msit.go.kr/eng/bbs/view.do?sCode=eng&mPid=2&mId=4&bbsSeqNo=42&nttSeqNo=1191 (accessed 28 November 2025).
  30. European Commission. The general-purpose AI code of practice. 2025. https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai (accessed 12 October 2025).
  31. EUR-Lex. Communication from the Commission to The European Parliament, the European Council, the Council, the European Economic and Social Committee and The Committee of the Regions: artificial intelligence for Europe. 2018. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0237 (accessed 12 October 2025).
  32. European Parliament. European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies. 2020. https://www.europarl.europa.eu/doceo/document/TA-9-2020-0275_EN.html (accessed 12 October 2025).
  33. Marzi C, Giannelli M, Barucci A, et al. Efficacy of MRI data harmonization in the age of machine learning: a multicenter study across 36 datasets. Sci Data 2024;11:115. 
    Crossref | PubMed