• U.S. Department of Health & Human Services

National Institutes of Health (NIH) - Turning Discovery into Health


NIH Clinical Research Trials and You

The NIH Clinical Trials and You website is a resource for people who want to learn more about clinical trials. By expanding the questions below, you can read answers to common questions about taking part in a clinical trial.

What are clinical trials and why do people participate?

Clinical research is medical research that involves people like you. When you volunteer to take part in clinical research, you help doctors and researchers learn more about disease and improve health care for people in the future. Clinical research includes all research that involves people.  Types of clinical research include:


  • Epidemiology, which improves the understanding of a disease by studying patterns, causes, and effects of health and disease in specific groups.
  • Behavioral, which improves the understanding of human behavior and how it relates to health and disease.
  • Health services, which looks at how people access health care providers and health care services, how much care costs, and what happens to patients as a result of this care.
  • Clinical trials, which evaluate the effects of an intervention on health outcomes.

What are clinical trials and why would I want to take part?

Clinical trials are part of clinical research and at the heart of all medical advances. Clinical trials look at new ways to prevent, detect, or treat disease. Clinical trials can study:

  • New drugs or new combinations of drugs
  • New ways of doing surgery
  • New medical devices
  • New ways to use existing treatments
  • New ways to change behaviors to improve health
  • New ways to improve the quality of life for people with acute or chronic illnesses.

The goal of clinical trials is to determine if these treatment, prevention, and behavior approaches are safe and effective. People take part in clinical trials for many reasons. Healthy volunteers say they take part to help others and to contribute to moving science forward. People with an illness or disease also take part to help others, but also to possibly receive the newest treatment and to have added (or extra) care and attention from the clinical trial staff. Clinical trials offer hope for many people and a chance to help researchers find better treatments for others in the future.

Why are diversity and inclusion important in clinical trials?

People may experience the same disease differently. It’s essential that clinical trials include people with a variety of lived experiences and living conditions, as well as characteristics like race and ethnicity, age, sex, and sexual orientation, so that all communities benefit from scientific advances.

See Diversity & Inclusion in Clinical Trials for more information.

How does the research process work?

The idea for a clinical trial often starts in the lab. After researchers test new treatments or procedures in the lab and in animals, the most promising treatments are moved into clinical trials. As new treatments move through a series of steps called phases, more information is gained about the treatment, its risks, and its effectiveness.

What are clinical trial protocols?

Clinical trials follow a plan known as a protocol. The protocol is carefully designed to balance the potential benefits and risks to participants, and answer specific research questions. A protocol describes the following:

  • The goal of the study
  • Who is eligible to take part in the trial
  • Protections against risks to participants
  • Details about tests, procedures, and treatments
  • How long the trial is expected to last
  • What information will be gathered

A clinical trial is led by a principal investigator (PI). Members of the research team regularly monitor the participants’ health to determine the study’s safety and effectiveness.

What is an Institutional Review Board?

Most, but not all, clinical trials in the United States are approved and monitored by an Institutional Review Board (IRB) to ensure that the risks are reduced and are outweighed by potential benefits. IRBs are committees that are responsible for reviewing research in order to protect the rights and safety of people who take part in research, both before the research starts and as it proceeds. You should ask the sponsor or research coordinator whether the research you are thinking about joining was reviewed by an IRB.

What is a clinical trial sponsor?

Clinical trial sponsors may be people, institutions, companies, government agencies, or other organizations that are responsible for initiating, managing, or financing the clinical trial, but do not conduct the research.

What is informed consent?

Informed consent is the process of providing you with key information about a research study before you decide whether to accept the offer to take part. The process of informed consent continues throughout the study. To help you decide whether to take part, members of the research team explain the details of the study. If you do not understand English, a translator or interpreter may be provided. The research team provides an informed consent document that includes details about the study, such as its purpose, how long it’s expected to last, tests or procedures that will be done as part of the research, and who to contact for further information. The informed consent document also explains risks and potential benefits. You can then decide whether to sign the document. Taking part in a clinical trial is voluntary and you can leave the study at any time.

What are the types of clinical trials?

There are different types of clinical trials.

Why do researchers do different kinds of clinical studies?

  • Prevention trials look for better ways to prevent a disease in people who have never had the disease or to prevent the disease from returning. Approaches may include medicines, vaccines, or lifestyle changes.
  • Screening trials test new ways for detecting diseases or health conditions.
  • Diagnostic trials study or compare tests or procedures for diagnosing a particular disease or condition.
  • Treatment trials test new treatments, new combinations of drugs, or new approaches to surgery or radiation therapy.
  • Behavioral trials evaluate or compare ways to promote behavioral changes designed to improve health.
  • Quality of life trials (or supportive care trials) explore and measure ways to improve the comfort and quality of life of people with conditions or illnesses.

What are the phases of clinical trials?

Clinical trials are conducted in a series of steps called “phases.” Each phase has a different purpose and helps researchers answer different questions.

  • Phase I trials: Researchers test a drug or treatment in a small group of people (20–80) for the first time. The purpose is to study the drug or treatment to learn about safety and identify side effects.
  • Phase II trials: The new drug or treatment is given to a larger group of people (100–300) to determine its effectiveness and to further study its safety.
  • Phase III trials: The new drug or treatment is given to large groups of people (1,000–3,000) to confirm its effectiveness, monitor side effects, compare it with standard or similar treatments, and collect information that will allow the new drug or treatment to be used safely.
  • Phase IV trials: After a drug is approved by the FDA and made available to the public, researchers track its safety in the general population, seeking more information about a drug or treatment’s benefits and optimal use.

What do the terms placebo, randomization, and blinded mean in clinical trials?

In clinical trials that compare a new product or therapy with another that already exists, researchers try to determine if the new one is as good, or better than, the existing one. In some studies, you may be assigned to receive a placebo (an inactive product that resembles the test product, but without its treatment value).

Comparing a new product with a placebo can be the fastest and most reliable way to show the new product’s effectiveness. However, placebos are not used if you would be put at risk — particularly in the study of treatments for serious illnesses — by not having effective therapy. You will be told if placebos are used in the study before entering a trial.

Randomization is the process by which treatments are assigned to participants by chance rather than by choice. This is done to avoid any bias in assigning volunteers to get one treatment or another. The effects of each treatment are compared at specific points during a trial. If one treatment is found superior, the trial is stopped so that more volunteers can receive the more beneficial treatment. This video helps explain randomization for all clinical trials.
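As a rough sketch (not any specific trial's procedure), simple random assignment can be illustrated in a few lines of Python. The participant IDs and two-arm design here are invented for illustration; real trials use pre-generated randomization lists, stratification, and block sizes managed by a statistician.

```python
import random

def randomize(participants, arms=("treatment", "control"), seed=None):
    """Assign each participant to an arm purely by chance.

    Simplified illustration only: real trials use pre-generated
    randomization lists and stratification rather than an
    independent coin flip per participant.
    """
    rng = random.Random(seed)  # seeded here only to make the demo reproducible
    return {p: rng.choice(arms) for p in participants}

# Hypothetical participant IDs
assignments = randomize(["P01", "P02", "P03", "P04"], seed=42)
print(assignments)  # each participant mapped to "treatment" or "control"
```

Because the assignment depends only on chance (the random number generator), neither the volunteers' characteristics nor the researchers' preferences can influence who gets which treatment.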

" Blinded " (or " masked ") studies are designed to prevent members of the research team and study participants from influencing the results. Blinding allows the collection of scientifically accurate data. In single-blind (" single-masked ") studies, you are not told what is being given, but the research team knows. In a double-blind study, neither you nor the research team are told what you are given; only the pharmacist knows. Members of the research team are not told which participants are receiving which treatment, in order to reduce bias. If medically necessary, however, it is always possible to find out which treatment you are receiving.

Who takes part in clinical trials?

Many different types of people take part in clinical trials. Some are healthy, while others may have illnesses. Research procedures with healthy volunteers are designed to develop new knowledge, not to provide direct benefit to those taking part. Healthy volunteers have always played an important role in research.

Healthy volunteers are needed for several reasons. When developing a new technique, such as a blood test or imaging device, healthy volunteers help define the limits of "normal." These volunteers are the baseline against which patient groups are compared and are often matched to patients on factors such as age, gender, or family relationship. They receive the same tests, procedures, or drugs the patient group receives. Researchers learn about the disease process by comparing the patient group to the healthy volunteers.

Factors like how much of your time is needed, the discomfort you may feel, and the risk involved depend on the trial. While some studies require minimal amounts of time and effort, others may require a major commitment of your time and effort, and may involve some discomfort. The research procedure(s) may also carry some risk. The informed consent process for healthy volunteers includes a detailed discussion of the study's procedures and tests and their risks.

A patient volunteer has a known health problem and takes part in research to better understand, diagnose, or treat that disease or condition. Research with a patient volunteer helps develop new knowledge. Depending on the stage of knowledge about the disease or condition, these procedures may or may not benefit the study participants.

Patients may volunteer for studies similar to those in which healthy volunteers take part. These studies involve drugs, devices, or treatments designed to prevent or treat disease. Although these studies may provide direct benefit to patient volunteers, the main aim is to prove, by scientific means, the effects and limitations of the experimental treatment. Therefore, some patient groups may serve as a baseline for comparison by not taking the test drug, or by receiving test doses of the drug large enough only to show that it is present, but not at a level that can treat the condition.

Researchers follow clinical trial guidelines when deciding who can participate in a study. These guidelines are called Inclusion/Exclusion Criteria. Factors that allow you to take part in a clinical trial are called "inclusion criteria." Those that exclude or prevent participation are "exclusion criteria." These criteria are based on factors such as age, gender, the type and stage of a disease, treatment history, and other medical conditions. Before joining a clinical trial, you must provide information that allows the research team to determine whether or not you can take part in the study safely. Some research studies seek participants with illnesses or conditions to be studied in the clinical trial, while others need healthy volunteers. Inclusion and exclusion criteria are not used to reject people personally. Instead, the criteria are used to identify appropriate participants and keep them safe, and to help ensure that researchers can find the new information they need.
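In essence, screening against inclusion/exclusion criteria is a series of yes/no checks. The three criteria below are entirely hypothetical (not from any real protocol) and serve only to illustrate the idea:

```python
def is_eligible(age, has_condition, on_conflicting_med):
    """Hypothetical screening check.

    The criteria below are invented for illustration; every real
    protocol defines its own inclusion and exclusion criteria.
    """
    # Inclusion criteria: adult in the study's age range, with the condition under study
    if not (18 <= age <= 75):
        return False
    if not has_condition:
        return False
    # Exclusion criterion: taking a medication that could interfere with the study drug
    if on_conflicting_med:
        return False
    return True

print(is_eligible(age=45, has_condition=True, on_conflicting_med=False))  # True
print(is_eligible(age=16, has_condition=True, on_conflicting_med=False))  # False
```

A candidate must satisfy every inclusion criterion and trigger no exclusion criterion; failing any single check makes them ineligible.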

What do I need to know if I am thinking about taking part in a clinical trial?


Risks and potential benefits

Clinical trials may involve risk, as can routine medical care and the activities of daily living. When weighing the risks of research, you can think about these important factors:

  • The possible harms that could result from taking part in the study
  • The level of harm
  • The chance of any harm occurring

Most clinical trials pose the risk of minor discomfort, which lasts only a short time. However, some study participants experience complications that require medical attention. In rare cases, participants have been seriously injured or have died of complications resulting from their participation in trials of experimental treatments. The specific risks associated with a research protocol are described in detail in the informed consent document, which participants are asked to consider and sign before participating in research. Also, a member of the research team will explain the study and answer any questions about the study. Before deciding to participate, carefully consider risks and possible benefits.

Potential benefits

Well-designed and well-executed clinical trials provide the best approach for you to:

  • Help others by contributing to knowledge about new treatments or procedures.
  • Gain access to new research treatments before they are widely available.
  • Receive regular and careful medical attention from a research team that includes doctors and other health professionals.

Risks of taking part in clinical trials include the following:

  • There may be unpleasant, serious, or even life-threatening effects of experimental treatment.
  • The study may require more time and attention than standard treatment would, including visits to the study site, more blood tests, more procedures, hospital stays, or complex dosage schedules.

What questions should I ask if offered a clinical trial?

If you are thinking about taking part in a clinical trial, you should feel free to ask any questions or bring up any issues concerning the trial at any time. The following suggestions may give you some ideas as you think about your own questions.

  • What is the purpose of the study?
  • Why do researchers think the approach may be effective?
  • Who will fund the study?
  • Who has reviewed and approved the study?
  • How are study results and safety of participants being monitored?
  • How long will the study last?
  • What will my responsibilities be if I take part?
  • Who will tell me about the results of the study and how will I be informed?

Risks and possible benefits

  • What are my possible short-term benefits?
  • What are my possible long-term benefits?
  • What are my short-term risks, and side effects?
  • What are my long-term risks?
  • What other options are available?
  • How do the risks and possible benefits of this trial compare with those options?

Participation and care

  • What kinds of therapies, procedures and/or tests will I have during the trial?
  • Will they hurt, and if so, for how long?
  • How do the tests in the study compare with those I would have outside of the trial?
  • Will I be able to take my regular medications while taking part in the clinical trial?
  • Where will I have my medical care?
  • Who will be in charge of my care?

Personal issues

  • How could being in this study affect my daily life?
  • Can I talk to other people in the study?

Cost issues

  • Will I have to pay for any part of the trial such as tests or the study drug?
  • If so, what will the charges likely be?
  • What is my health insurance likely to cover?
  • Who can help answer any questions from my insurance company or health plan?
  • Will there be any travel or child care costs that I need to consider while I am in the trial?

Tips for asking your doctor about trials

  • Consider taking a family member or friend along for support and for help in asking questions or recording answers.
  • Plan what to ask — but don't hesitate to ask any new questions.
  • Write down questions in advance to remember them all.
  • Write down the answers so that they’re available when needed.
  • Ask about bringing a tape recorder to make a taped record of what's said (even if you write down answers).

This information courtesy of Cancer.gov.

How is my safety protected?


Ethical guidelines

The goal of clinical research is to develop knowledge that improves human health or increases understanding of human biology. People who take part in clinical research make it possible for this to occur. The path to finding out if a new drug is safe or effective is to test it on patients in clinical trials. The purpose of ethical guidelines is both to protect patients and healthy volunteers, and to preserve the integrity of the science.

Informed consent

Informed consent is the process of learning the key facts about a clinical trial before deciding whether to participate. The process of providing information to participants continues throughout the study. To help you decide whether to take part, members of the research team explain the study. The research team provides an informed consent document, which includes such details about the study as its purpose, duration, required procedures, and who to contact for various purposes. The informed consent document also explains risks and potential benefits.

If you decide to enroll in the trial, you will need to sign the informed consent document. You are free to withdraw from the study at any time.

Most, but not all, clinical trials in the United States are approved and monitored by an Institutional Review Board (IRB) to ensure that the risks are minimal when compared with potential benefits. An IRB is an independent committee that consists of physicians, statisticians, and members of the community who ensure that clinical trials are ethical and that the rights of participants are protected. You should ask the sponsor or research coordinator whether the research you are considering participating in was reviewed by an IRB.

Further reading

For more information about research protections, see:

  • Office of Human Research Protection
  • Children's Assent to Clinical Trial Participation

For more information on participants’ privacy and confidentiality, see:

  • HIPAA Privacy Rule
  • The Food and Drug Administration, FDA’s Drug Review Process: Ensuring Drugs Are Safe and Effective

For more information about research protections, see: About Research Participation

What happens after a clinical trial is completed?

After a clinical trial is completed, the researchers carefully examine information collected during the study before making decisions about the meaning of the findings and about the need for further testing. After a phase I or II trial, the researchers decide whether to move on to the next phase or to stop testing the treatment or procedure because it was unsafe or not effective. When a phase III trial is completed, the researchers examine the information and decide whether the results have medical importance.

Results from clinical trials are often published in peer-reviewed scientific journals. Peer review is a process by which experts review the report before it is published to ensure that the analysis and conclusions are sound. If the results are particularly important, they may be featured in the news, and discussed at scientific meetings and by patient advocacy groups before or after they are published in a scientific journal. Once a new approach has been proven safe and effective in a clinical trial, it may become a new standard of medical practice.

Ask the research team members if the study results have been or will be published. Published study results are also available by searching for the study's official name or Protocol ID number in the National Library of Medicine's PubMed® database.

How does clinical research make a difference to me and my family?


Only through clinical research can we gain insights and answers about the safety and effectiveness of treatments and procedures. Groundbreaking scientific advances in the present and the past were possible only because of participation of volunteers, both healthy and those with an illness, in clinical research. Clinical research requires complex and rigorous testing in collaboration with communities that are affected by the disease. As research opens new doors to finding ways to diagnose, prevent, treat, or cure disease and disability, clinical trial participation is essential to help us find the answers.

This page last reviewed on October 3, 2022


Clinical Trial Basics: Site Initiation Visit (SIV)

What is an SIV in clinical research?

An SIV (clinical trial site initiation visit) is a preliminary inspection of the trial site by the sponsor before the enrollment and screening process begins at that site. It is generally conducted by a monitor or clinical research associate (CRA), who reviews all aspects of the trial with the site staff, including going through protocol documents and conducting any necessary staff training.[1],[2]

Also known as a study start-up visit, an SIV can be requested by the sponsor only after the site has been selected and formal agreements such as the CTA have been signed.

What is the purpose of an SIV?

Clinical trial SIVs are necessary to ensure that all personnel of a given site who will be involved in the clinical trial, such as investigators and study staff, thoroughly understand the trial protocol and are trained appropriately to handle their roles and responsibilities. Furthermore, the site initiation visit has the aim of ensuring the trial site is operationally ready, with working infrastructure, tools, and any study materials needed.[1]

Given the scope of the SIV, clinical trial sponsors should schedule this visit well before enrollment so that there is plenty of time to comprehensively inspect all relevant processes, and to conduct further training or rectify any issues, if necessary.

Can the SIV be conducted before IRB approval?

IRB approval is generally necessary before the SIV is carried out. Clinical trial sponsors should select sites that are compliant with all applicable regulatory requirements, and after the site receives IRB approval for the research, the sponsor can conduct the SIV.

SIV checklist for thorough site initiation visits

Given the importance of the SIV, clinical trial sponsors should make the most of this inspection visit by coming fully prepared with a detailed checklist of what is to be confirmed during the SIV.

Clinical trial sites might receive a copy of this checklist so they can ensure that all relevant staff are present for the visit. Specific tasks for the SIV checklist include the following:[1],[2],[3],[4]

  • Discussing the clinical trial’s objectives with study staff
  • Educating the research team on Good Clinical Practices
  • Reviewing the operation schedule for the protocol
  • Discussing the enrollment and screening process, including clarifying the inclusion and exclusion criteria
  • Reviewing the informed consent documents and procedure
  • Clarifying procedures for storing, dispensing, and managing the investigational product (IP)
  • Checking inventory for all required medical supplies and equipment
  • Ensuring secure access to all digital platforms, i.e., correct usernames and passwords
  • Touring the clinical trial site to ensure facilities are in proper condition
  • Reviewing and discussing all clinical trial documentation, such as forms, surveys, SOPs, etc.
  • Reviewing the data management system and any other technological solutions forming part of the site’s or sponsor’s workflow
  • Ensuring that site staff are up to date on training and understand how to maintain essential documentation
  • Reviewing the site/trial budget and financial protocols, including any processes related to compensating trial participants
  • Verifying and testing reporting procedures for possible adverse events
  • Leaving room for an open discussion of any specific concerns that trial staff may have

This checklist provides basic guidelines only, and should be built upon and customized for each individual study according to risk areas and specific protocols.


Types of clinical studies and clinical trials – Edanz Learning Lab

Clinical studies are medical research done on human volunteers. Data are collected from these studies and used to provide new medical knowledge, typically in the form of a published article.

Researchers do different types of clinical studies because they each have different research questions. Read on to learn about different clinical studies, which will help you better understand the articles you read and the studies you’re considering.

What you’ll learn in this post

• What clinical studies and clinical trials are (and how they’re different).

• The most common types of clinical studies.

• Many examples of different types of clinical studies to illustrate the differences and what these different studies find.

• Where you can get expert help with your clinical study.

What is a clinical study?

“Clinical study” is a general term for scientific research studies with human volunteers. Clinical studies can include both interventional (doing or giving something to the volunteers) and non-interventional (nothing is done to the volunteers) studies.

The terms “clinical trial” and “clinical study” are commonly confused or used as synonyms, but they are different in some fundamental ways. Understanding the difference is important because the official rules that govern them are different.

According to the National Institutes of Health, a clinical trial (explained below) involves giving something to volunteers (an intervention). Then, the volunteers are watched over time. Clinical studies, however, don’t automatically involve interventions. Therefore, while clinical trials are clinical studies, not all clinical studies are clinical trials.

Before a local regulatory authority (such as the U.S. Food and Drug Administration) can approve the start of any clinical study, researchers must show that their research protocols meet the regulations. This is to ensure volunteers’ well-being. The study can start once these regulations have been formally met.

There are many types (and subtypes) of clinical studies. The following are those you’re most likely to come across.

The two major categories of clinical studies

Study design is vital in the quality, execution, and interpretation of clinical studies. Different research questions require different methods to answer them. Interventional and observational studies are the two primary umbrellas for these different methods.

Interventional studies

Interventional studies, which are in fact clinical trials, require investigators to give or do something to volunteers as part of the study design. In this study, the researchers gave male dialysis patients testosterone to see if it improved their quality of life.

Observational studies

In observational studies, the investigator doesn’t give volunteers anything as part of the study design. However, volunteers may receive interventions as part of their routine medical care. Investigators collect data to assess health outcomes in volunteers as part of a protocol. Therefore, observational studies still require ethical clearance from the relevant local ethics board.

In this study among people with diabetes, researchers wanted to see if sex hormones and vascular complications are related. Nothing was given to the volunteers before the researchers measured the sex hormone levels.

What types of clinical trials are there?

Clinical trials have different complexity and design depending on the researchers’ specific aim.

Randomized controlled trials

In a randomized controlled trial (RCT), volunteers are randomly assigned to either a control group or an intervention group. The control group receives no intervention, or a similar intervention that doesn’t actually do anything (e.g., placebo or sham procedure). An RCT can also compare two or more treatments.

RCTs are the gold standard for determining if cause-effect relationships exist between the intervention and the outcome of interest. This is because randomization ensures the only difference between the groups is the intervention received. The difference in outcomes is, therefore, the intervention’s effect. This puts them high up on the evidence pyramid.

In a trial involving 50 postmenopausal women with metabolic syndrome, investigators wanted to see how changes in insulin resistance, lipid profiles, and inflammation differed between women taking either oral or transdermal estradiol. The women were randomized into the oral or transdermal estradiol group. Oral estradiol worsened insulin resistance and inflammation. Meanwhile, transdermal estradiol had little effect on insulin resistance and reduced inflammation.


Multi-arm multi-stage (MAMS) trials

MAMS trials were created to accelerate the drug development process. MAMS trials typically have several groups:

• A fixed control group (this group doesn’t change throughout the trial)

• Several treatment groups (these groups can change throughout the trial)

As a MAMS trial goes on, researchers may find that some treatments are not as effective as they thought. These groups can be changed or even closed to further recruitment so that patients are directed toward more-effective drugs. New treatment groups or subgroups (called “arms”) can also be added.

MAMS trials aim to answer multiple questions simultaneously without planning another clinical trial to assess new treatments, thereby saving time and accelerating the drug development process.

The rEECur trial for a rare pediatric cancer is an example of a MAMS trial. This trial aims to find the optimal treatment for returning/non-responsive Ewing sarcoma by comparing four commonly used chemotherapy regimens. During each of the two pre-planned interim analyses (data analyses that happen before data collection is completed), the two least-promising regimens will be dropped. Then, the two most promising regimens will progress further along drug development stages (called “phases”).
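The arm-dropping logic at an interim analysis can be sketched in a few lines of Python. The arm names and "promise" scores below are hypothetical, not rEECur data; in a real trial the ranking would come from a formal statistical comparison.

```python
def drop_weakest_arms(arm_scores, n_drop=2):
    """Close the n_drop least-promising arms to further recruitment.

    arm_scores maps arm name -> a promise metric (higher = better).
    Hypothetical illustration of a MAMS interim analysis.
    """
    ranked = sorted(arm_scores, key=arm_scores.get)  # worst first
    closed = set(ranked[:n_drop])
    return {arm: score for arm, score in arm_scores.items() if arm not in closed}

# Four hypothetical chemotherapy arms with made-up response rates
remaining = drop_weakest_arms({"A": 0.30, "B": 0.55, "C": 0.45, "D": 0.20})
```

After each interim analysis, recruitment continues only into the arms that survive the cut, which is how a MAMS design saves time over running separate trials.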

Pilot studies and feasibility studies

Pilot and feasibility studies are smaller studies conducted before a larger clinical trial takes place. These studies are similar, but they serve different purposes.

Pilot studies

Pilot studies are small, early-stage studies that help in the planning and adjusting of a larger clinical trial.

These studies are conducted before the main study to analyze the study design’s validity and help answer some research questions. Results from pilot studies are sometimes reported in the results of the larger clinical trial.

In this pilot study, researchers wanted to find relevant kidney biomarkers to diagnose chronic kidney disease (CKD) of unknown cause. They grouped volunteers into five groups, each with a different CKD cause. Of eight kidney biomarkers tested, a three-marker panel was the best-performing combination for differentiating between CKD causes. This three-marker panel could then potentially be tested later in a larger study.

Feasibility studies

Feasibility studies are conducted when researchers ask, “Is our main clinical trial possible?” Researchers then perform feasibility studies, which are smaller studies that assess the usefulness of doing the main clinical trial. A feasibility study can assess timelines, targets, and costs of the proposed (main) clinical trial, or identify potential intervention adjustments.

Danish researcher Engelbrecht Buur and colleagues wanted to design a shared decision-making intervention and to see whether patients with kidney failure, their relatives, and health professionals could accept it. So, they did a feasibility study to look for evidence to answer their questions. The intervention under study was to initiate patient involvement in palliative care planning with nephrologists.

Prevention trials

Prevention trials test whether an intervention can keep an outcome of interest from happening among volunteers who do not yet have that outcome. Prevention trials recruit many healthy volunteers, offer them the intervention, then follow them for some time to see whether the outcome of interest occurs.

There are two types of prevention trials: action studies and agent studies.

Action studies

These studies ask whether actions people take (e.g., exercise or dietary changes) can prevent an outcome of interest from happening. This clinical trial showed that lifestyle changes are more effective than metformin in preventing diabetes.

Agent studies

These studies ask whether taking something (e.g., a drug or vitamin) may lower the risk of an outcome of interest. Aspirin has been extensively studied in randomized primary prevention trials, such as this one in NEJM. Low-dose aspirin lowered the risk of stroke but not the risk of heart attack or death from cardiovascular causes.

Screening trials

Screening trials test new ways of detecting health conditions in asymptomatic people. Screening tests can include:

  • Imaging tests that create pictures of inside the body
  • Laboratory tests that check body fluids and tissues
  • Genetic tests that look for disease-associated genetic markers

An example of a screening test that has become standard medical practice is the Pap smear for cervical cancer.

Treatment trials

Treatment trials occur in phases. Early phases check the safety and tolerability of new treatments. Later phases aim to see if a new treatment works better than the current treatment or a placebo.

In a large clinical trial among people with early-stage HER2-negative breast cancer, an inherited BRCA mutation, and a high risk of recurrence, adding olaparib after surgery and chemotherapy was linked with significantly longer cancer-free survival.


What types of observational studies are there?

Cohort studies

A cohort is a group of people who share common characteristics (e.g., people with diabetes, community-dwelling people, people of a specific age range, smokers). Researchers use various methods to recruit volunteers. For instance, they could contact people from a particular region or birth register.

There are two kinds of cohort studies: prospective and retrospective.

Prospective cohort studies

Prospective cohort studies follow volunteers over time to assess the development of the research outcome of interest among volunteers who have the exposure.

For example, in this study using data from 623 men undergoing hemodialysis, researchers wanted to determine whether low testosterone (the exposure) is associated with death and quality of life (the outcomes). They assessed testosterone levels at the start of the study and used clinically relevant cutoffs to identify volunteers with the exposure. The volunteers were then followed for 20 months, during which information on death and quality of life was collected.

Retrospective cohort studies

Retrospective cohort studies look back in time for exposure information among volunteers after an outcome has occurred. Baseline exposure information has been assessed in the past and can be retrieved from health records.

In a study using data from 117 COVID-19 patients with severe and critical outcomes (SCO), the researchers wanted to determine the role of diabetes in COVID-19 patient outcomes. They reviewed electronic medical records to obtain information on diabetes status and clinical features. Patients with diabetes were more likely to progress to SCO. Also, older patients were more likely to have SCO. While medication usage was not linked with SCO, renin-angiotensin inhibitor usage was linked with a significantly lower risk of acute cardiac injury.

Case-control studies

Case-control studies determine the association between an exposure and an outcome (e.g., an outcome of interest or a disease).

Case-control studies are usually considered retrospective, not because the researchers use previously collected data, but because of their direction of inquiry: the researchers first identify volunteers who have the outcome of interest (cases) and those who don’t (controls). Then, they look back in time to see who in each group had the exposure and compare the exposure frequency between the two groups (cases vs. controls).

Among the advantages of case-control studies:

  • Cost-effective
  • Rare diseases and outcomes can be studied

One case-control study looked for factors related to COVID-19 infections among healthcare workers in Colombia. The researchers randomly phone-interviewed healthcare workers and found 110 workers with COVID-19 (cases) and 113 without (controls). Being a man, being a nurse, and not using protective equipment were risk factors; being a student, feeling scared, and using suitable protective equipment were protective.

Sometimes you need a bit of help with your study

That’s why Edanz offers a range of expert-led author-guidance services, from finding research ideas, to study design and preparation, to writing your manuscript, to editing your manuscript for publication. We can also do an expert scientific review of your study. Get in touch to see how we can help you.

Screening and Preparing for a Study Visit

There are several steps that must be taken prior to conducting a study visit.

Many human subject protocols require participants to be scheduled for specific research visits. The number and length of research appointments depend heavily on the study design, and the details of each appointment depend on its location. Appointments can occur in inpatient units, outpatient clinics, lab draw locations, diagnostic testing locations, research labs, the Clinical Research Center (CRC), the participant’s home, or a neutral site convenient for the participant. The locations where study visits will occur must be listed with the IRB.

It is important to set expectations for each visit with the participant: where the visit will take place, how long it will take, the tasks to be completed, which tasks are clinical versus research-related, special instructions the participant must follow beforehand, directions and parking information, whether compensation for time or parking will be provided, and who will be involved in the visit. This should all be included in a confirmation letter.

Access Screening Letter

Access Confirmation Letter

Access Confirmation Letter 2

Once the visit is scheduled, follow up with a phone call to the participant within the week prior to the visit. It is essential to reinforce the requirements of the study and to make sure nothing has changed since you last spoke (a broken leg, a new infection, a medication change). Some of these changes might make the participant ineligible, and it is better to catch them ahead of time.

It is important to be understanding of the participant’s personal schedule and to help identify research visit times that are convenient for the individual without sacrificing protocol compliance. If the study lasts a while and you know the study visit windows, it is good to provide them to the subject so they can plan ahead for when you might want them to come back.

Access Schedule Template

Access Subject Schedule by Enrollment

Prior to each study visit, the research team must be prepared for all known and unknown tasks that may need to be completed per protocol. If applicable, physician orders need to be completed and authorized for lab draws, study medication and additional testing; research lab kits should be prepared and available to the appropriate clinical team drawing the samples; participant questionnaires should be prepared; flowsheets required for research documentation should be made available to the appropriate team members; and any end of study visit items should be readily available if the participant decides to withdraw from the study or is removed from the study due to adverse events or investigator discretion, etc.

Access Checklist Screening

Access Checklist Follow-up

Obtaining a Medical Record Number

A medical record number (MRN) needs to be assigned to a research participant if they will be admitted to the hospital in the outpatient or inpatient setting or if they will undergo any medical tests that need to be processed by a hospital lab. When first scheduling the participant, research staff can check whether a medical record number already exists for the individual by checking IHIS. If they have a medical record number, that number will be used to identify them for any hospital-related admissions or tests. If they do not have a medical record number you are able to create a new patient in IHIS. Talk with your clinical research manager about how your department wants the new MRN to be created. 

Clinic Visits

Many studies at Ohio State recruit research participants from patients who are already scheduled for healthcare visits in the medical center or cancer center. If the study design is such that these visits can serve the dual purpose of study visit and doctor visit, then scheduling is relatively straightforward and involves coordinating with the clinical treatment team. Clinic and diagnostic testing appointments are scheduled in IHIS, the provider scheduling database. The research team can review the electronic medical record for specific information on the scheduled visits. It is important to review the participant visits often, as they could be altered or canceled by someone else, which may result in protocol compliance concerns or impact the anticipated activities for that day.

For research appointments that need to be scheduled in the Ohio State hospital or clinics independent of the patient’s medical appointments, most of the scheduling is done by the hospital schedulers, or you may have a point person who ensures you have the correct staffing for the research visit. However, it is still the responsibility of the research staff to communicate the specifics of the protocol visit, including the timing of the visits, how long they are expected to take, any paperwork that must be completed, and any physician orders needed for labs or special testing at a given visit.

Research Appointments

Many studies require that research visits take place in a location that is specifically equipped to carry out the research protocol. Space may be designed to administer specific computer questionnaires, to conduct interviews in a private setting, or to be close to labs for obtaining and processing blood or other biological specimens, or to medical equipment like research CT scanners or MRI equipment that cannot be moved. Sometimes a controlled environment or specific experimental conditions are part of the research visit. These visits will be arranged with the key research staff involved and may not be formally scheduled in the medical center scheduling system. This can cause patients to get reminder calls about a research procedure that happens later in the day, separate from the time and location you had discussed for the consent visit. It is essential to send a research confirmation letter that lists the procedures, to clarify the who, what, when, and where for the entire study visit.

Scheduling Off-Site Research Visits

There are research studies that allow the visits to take place in the participant’s home or another neutral location, more convenient for the individual. Off-site study visits pose some additional challenges to the research staff in the form of feasibility and safety.

Before scheduling a home visit, the research staff must assess if the visit tasks can be accomplished in the specific location. For example, is there a workspace sufficient to carry out lab draws and physical examinations, are there electrical outlets for medical equipment or laptop computers, etc. 

The distance from the study center needs to be taken into consideration, as many studies have a travel limit for outreach research staff. Distance can also cause feasibility issues, such as the timeframe within which blood specimens must be processed to maintain the integrity of the sample. Other things to consider include how the data will be transported to and from the home or off-site study visit. If laptops are used to transport data, they need to be encrypted to ensure HIPAA compliance.

Safety is also a necessary consideration when conducting home visits. When a member of the research team goes to an unfamiliar area to conduct a study visit, at least one other coworker should know that the appointment is occurring. The researcher should communicate with the research team immediately before and after the study visit. The safety of the research staff should always take priority over completing a study visit outside the research study center, and appropriate judgment should be used.

Lab Result Reviews

When you receive lab reports, your investigator will need to document that they have been reviewed. For any abnormal lab value, the investigator must document whether it is clinically significant. If it is documented as clinically significant, you will need to create an Adverse Event form. Sometimes an abnormal value is explained by another part of the patient’s medical history, and it can be documented as such. A good investigator notes on the first batch of labs what each abnormal value correlates with in the subject’s medical history (i.e., elevated glucose = subject is diabetic) so that reviewers can see that all abnormalities were reviewed seriously.

Below is a link to a sticker you can attach to printed reports; it quickly documents that the investigator has reviewed the labs. Have your investigator sign off on it.

Abnormal Lab Result Sticker


Published: 07 January 2022

Clinical trials: design, endpoints and interpretation of outcomes

  • Megan Othus (ORCID: 0000-0001-8176-6371)
  • Mei-Jie Zhang
  • Robert Peter Gale

Bone Marrow Transplantation, volume 57, pages 338–342 (2022)


Series Editors Introduction

The ability to properly analyze results of clinical trials, especially randomized controlled trials (RCTs), is a needed skill for every physician. This is especially so for those involved in haematopoietic cell transplants. Although seemingly straightforward, correct interpretation of clinical trials data is in reality complex and not for the fainthearted. When an RCT reports intervention A is safer and more effective than intervention B, do we simply accept the authors’ conclusion or is more detective work needed? The answer: call in Inspector Clouseau! In this article Prof. Megan Othus and we discuss complexities in clinical trials interpretation including the challenge of false-positive error control, endpoints, power and sample size estimates (more often guesses), how to analyze competing events such as graft-versus-host disease (GvHD) and relapse, what to do when a study has >1 primary endpoint, analyses of multi-arm trials, how to interpret analyses other than the primary endpoint and what data from non-inferiority trials tell us. Lastly, we consider the evil which will not die (the statistical Rasputin): reporting survival outcomes by response. We hope this article will be of practical use to clinicians facing the challenge of correctly interpreting clinical trials data. The good news: only one relatively simple equation. And remember, we can be reached 24/7 on Twitter #BMTStats. Our operators are standing by.

Robert Peter Gale MD, PhD, DSc(hc), FACP, FRCP, FRCPI(hon), LHD, DPS
Mei-Jie Zhang PhD

Introduction

There are those who reason well, but they are greatly outnumbered by those who reason badly. – Galileo Galilei

Clinical trials, especially randomized controlled trials, are typically designed to facilitate straightforward interpretation [ 1 ]. However, despite randomization, a formal protocol document and clinical trials registries such as clinicaltrials.gov, it remains challenging to appropriately evaluate reports of clinical trials. Herein, we review several issues regarding critical interpretation of clinical trial results.

False-positive errors

The first topic to discuss is false-positive errors, also called α (alpha) errors. Many potentially convoluted choices are made in the design and presentation of clinical trials data with the aim of “controlling”, or holding, the false-positive error rate below a specified threshold. Every statistical analysis that reports a p value (or confidence interval, though we will focus on p values for simplicity) and interprets this value as “significant” (at or below some threshold, often p < 0.05) or “not significant” (above this threshold) is potentially subject to an incorrect conclusion (summarized in Table 1).

When reporting a p value we can only comment on whether it is “statistically significant” or not. We do not know whether this conclusion is correct or not. But by thoughtful construction of the test and calculations used to derive the p value we can quantify the probability of error. Over many years conventions in clinical research (partly driven by regulatory agency standards which themselves might be driven by legislation) have converged on some typical error rates in clinical trials. In most trials the false-negative error rate is typically selected to be 10–20% resulting in a power of 80–90%. False positive error rates in phase-3 trials are typically controlled to be <5% [ 2 , 3 , 4 , 5 , 6 ] or even <2.5% [ 7 ]. Randomized phase-2 trials often “relax” the false-positive rate to 10–20% [ 8 , 9 , 10 ].

Any one analysis or test has an associated false-positive rate. If more than one test is done, each test has its own rate and we can quantify the overall false-positive rate (the rate of having ≥1 test with a false-positive conclusion). The overall false-positive rate is related to the number of tests done and the false-positive rate of each test. If each test uses the same false-positive rate (α), we can write the overall false-positive rate for k tests as:

overall false-positive rate = 1 − (1 − α)^k

If we take the common α = 5% (0.05), then with two tests the overall false-positive rate is 9.75%; with 10 tests, 40%; with 20 tests, 64%; and with 50 tests, 92%. In an analysis reporting many p values, each interpreted individually as significant or not, the probability of a false-positive conclusion quickly becomes high. This is why false-positive error control is a major concern in clinical trials design.
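The arithmetic above can be checked with a few lines of Python, a direct implementation of the formula for independent tests (the function name is ours):

```python
def overall_false_positive_rate(alpha, n_tests):
    """Probability of at least one false-positive conclusion across
    n_tests independent tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** n_tests

# Reproduce the figures quoted in the text for alpha = 0.05
rates = {n: overall_false_positive_rate(0.05, n) for n in (2, 10, 20, 50)}
```

With α = 0.05 this yields approximately 0.0975, 0.40, 0.64, and 0.92 for 2, 10, 20, and 50 tests, matching the percentages in the text.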

If a trial has only one primary endpoint and only one analysis of that endpoint is done, the α level for that one test will match the overall false-positive rate for the trial. However, many clinical trials pre-specify >1 endpoint and/or >1 analysis. Moreover, a variety of manipulations are often used to control the false-positive rate across analyses. False-positive errors are discussed further below as different topics intersect with error control in clinical trials.

Endpoints

Endpoints are measures which can be observed or calculated for each subject on a trial. Often these measures are combined mathematically to estimate a statistic. All statistics have associated measures of uncertainty. The combination of the statistic and its measure of uncertainty can be used to calculate the confidence intervals and p values we typically use to interpret clinical trials results. There are many possible endpoints, but those commonly used in clinical trials of haematopoietic cell transplants are summarized in Table 2.

Censoring is what distinguishes time-to-event from quantitative endpoints. Quantitative endpoints should be measurable or observed for every subject in a clinical trial, whereas time-to-event endpoints may not be. For example, if a clinical trial collects data on subjects for 5 years after study-entry and a subject does not die during that interval, the trial will not observe the time-to-death for that subject. We know this subject lived ≥5 years, and this can be used to evaluate and estimate survival up to 5 years. After 5 years the subject cannot contribute data for estimating or quantifying survival and they are termed censored. Different statistical analyses are needed for time-to-event versus quantitative data to account for censoring.
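The 5-year example can be made concrete with a short sketch covering administrative censoring only (the function name and data are illustrative):

```python
def administrative_censoring(followup_years, death_times):
    """Turn raw death times into (observed_time, event) pairs.

    Subjects who die within the follow-up window contribute an event (1);
    subjects still alive at the end are censored (0) at followup_years.
    """
    return [
        (t, 1) if t <= followup_years else (followup_years, 0)
        for t in death_times
    ]

records = administrative_censoring(5.0, [2.3, 7.1, 4.8])
# -> [(2.3, 1), (5.0, 0), (4.8, 1)]
```

Survival methods such as the Kaplan–Meier estimator consume exactly these (time, event) pairs, which is how the censored subject still contributes information up to 5 years.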

Regression analyses are an important element of randomized trials analyses even when the primary analysis is not based on regression models. Regression models provide estimates of effect sizes (e.g., odds ratios or hazard ratios), which are important when interpreting the results of trials. In addition, regression analyses allow for adjustments for co-variates not used in randomization stratification. Although randomization is likely to balance most factors across arms it does not guarantee balance without stratification. Regression analyses can allow for more precise estimation of effect sizes when there is an imbalance in a prognostic co-variate across arms.

Power, sample size, and endpoints

In a clinical trial protocol the sample size should have the associated power reported (typically 80–90%). For categorical and quantitative endpoints, power is directly related to the number of subjects enrolled onto the trial. For time-to-event endpoints, power is driven by the number of “events” (e.g., for survival, the event is death; for time-to-relapse, the event is relapse). Numbers of events are driven by the rate at which they occur, the interval over which subjects were accrued and how long each subject was observed since study-entry. Typically, clinical trials with time-to-event endpoints specify that analyses will be done after a specified number of events are observed. When developing a protocol, best efforts are made at reasonable assumptions (guesses is often a more accurate descriptor) about how soon the event(s) under consideration will be observed. But if the assumptions are wrong for any reason, the timing calculated in the protocol will be incorrect and analyses may be done sooner or later than pre-specified. The issues with post-hoc or retrospective power calculations have been well-described elsewhere; in short, such calculations are not appropriate and should rarely (potentially never) be performed [ 11 ].

Competing events

When a subject can experience >1 event (say relapse and death) and the clinical trial is only interested in the time to one of those events, say time-to-relapse, the other event is called a “competing event.” For most time-to-event endpoints like relapse, death before relapse is a competing event. For example, in a time-to-relapse analysis, if a subject dies without relapse we cannot assume they would never have relapsed had they not died. But the subject is also not simply censored at time of death, as one would do in a survival endpoint analysis, because there may be a non-random relationship between death and relapse. For example, there is a correlation between severity of GvHD and relapse risk (reviewed in Horowitz et al. [ 12 ]). To account for this possibility, different analyses are needed for such time-to-event endpoints. The Kaplan–Meier method should not be used [ 13 ]. Instead, cumulative incidence rates should be estimated [ 14 , 15 , 16 , 17 ]. Log-rank tests should not be used but rather alternative tests which account for competing risks [ 18 , 19 , 20 , 21 , 22 ].

Multiple primary endpoints

It is increasingly common for clinical trials to specify >1 primary endpoint [ 3 , 4 ]. Why? Clinical trials are expensive and time-consuming and it can be disappointing to complete a trial and conclude there was no benefit in the investigational cohort because the wrong endpoint was specified. To mitigate this concern multiple primary endpoints can be specified before the study begins. However, as we discuss above, testing >1 endpoint “inflates” or increases the overall false-positive rate above the false positive ( α ) rate for each test.

There are several strategies to evaluate >1 endpoint. In order to interpret a trial as “positive” if ≥1 endpoint is significant, the α should be “split” (allocated) across endpoints. The split can be done evenly; for example, for a trial with overall α of 5% and two primary endpoints, each could be tested with an α of 2.5% [ 3 ]. However, the split need not be even. For example, a trial could allocate 4% of the α to the 1st endpoint and use the formula (1 − 0.04) × (1 − α₂) = 1 − α = 0.95 to calculate that α₂ ≈ 0.0104, allocating 1.04% to a 2nd endpoint. Again, this must be done before the trial starts. The gain from using this formula versus a simple split of 4% and 1% is small enough that many trials simply use the simple split [ 4 ]. Alpha can also be split between cohorts or sub-cohorts [ 23 ]. For example, 4% alpha could be allocated to a survival analysis amongst all subjects in a trial, with the remaining alpha allocated to a biomarker-positive cohort, say a cohort which has a FLT3 mutation in a trial of midostaurin. The “remaining alpha” could be set at 1%, but because the biomarker-positive cohort is included in the analysis of the full trial population, the results of the analyses are not independent. Because the analyses are not independent we can test the biomarker-positive cohort at an α level >1% and still control the overall α level at 5%. The correlation depends on the proportion of all events observed in the biomarker-positive cohort. Formulae for this calculation can be implemented in statistical programmes [ 24 ].
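The α-splitting arithmetic can be reproduced directly. This is a sketch of the multiplicative split described in the text; the function name is ours:

```python
def remaining_alpha(total_alpha, alpha1):
    """Solve (1 - alpha1) * (1 - alpha2) = 1 - total_alpha for alpha2,
    the significance level available to the second endpoint."""
    return 1 - (1 - total_alpha) / (1 - alpha1)

# Allocating 4% to the first endpoint of a 5% trial leaves ~1.04%
alpha2 = remaining_alpha(0.05, 0.04)
```

As the text notes, 0.0104 differs so little from a naive 1% allocation that many trials simply use the simple split.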

An alternative to α splitting is a fixed-sequence approach. The sequence of tests is pre-specified and each endpoint is tested at the same α level. Testing continues along the sequence until there is a test with a p value > α, at which point testing stops and no further endpoints in the sequence should be evaluated. Sometimes these tests are described as “carrying forward” the alpha after a significant test; all of the α is “spent” at the first test with p value > α [ 25 ].
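A fixed-sequence procedure is easy to sketch: test endpoints in their pre-specified order at the full α and stop at the first non-significant result. The True/False/None coding below is our own convention, not a standard:

```python
def fixed_sequence(p_values, alpha=0.05):
    """Test endpoints in pre-specified order, each at the full alpha.

    Returns True (significant), False (the stopping test), or None
    (never tested) for each endpoint in order.
    """
    results = []
    for p in p_values:
        if p <= alpha:
            results.append(True)
        else:
            results.append(False)
            break  # all remaining alpha is "spent" here
    results.extend([None] * (len(p_values) - len(results)))
    return results

verdicts = fixed_sequence([0.01, 0.03, 0.20, 0.001])
# -> [True, True, False, None]: the 4th endpoint is never tested
```

Note that the final endpoint's very small p value (0.001) cannot rescue it: once the sequence stops, later endpoints are simply not evaluated.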

A combination of α splitting and fixed sequence testing can also be done. As numbers of endpoints increases the numbers of ways to allocate the α across endpoints also increases. The specific α allocation can vary between trials.

It is uncommon in transplant studies to have >1 primary endpoint or to require all primary endpoints to be significant for a trial to declare success [ 26 ]. For example, a design could require a significant association with complete remission and also with survival. The false-positive rate is not inflated in this design because there is only one way to have a positive trial: in a trial requiring all endpoints to be significantly associated with the intervention, each endpoint can be tested at the same α level. Because these designs have increased false-negative error rates compared with designs with one primary endpoint, they have less power and require larger samples.

A single composite endpoint including multiple potential “events” is not uncommon across transplant studies. For example, the endpoint GvHD-relapse-free survival (GRFS) measures the time until the first event: GvHD, relapse or death. GRFS and similar composite endpoints weight the contributory events equally. If equal weighting of these events is not appropriate, alternative statistics can be used to compare arms in a trial, including the win ratio [ 27 ], which evaluates composite endpoints in a fixed hierarchy between matched pairs of subjects, tallying for each pair which subject dies first. If neither subject in the pair dies, the second event is compared and so forth. Confidence intervals and p values can be calculated for the win ratio like other statistics. Win ratios can be calculated for individual events and composite lists of events and compared to understand the role each event has in the composite win ratio (see Fig. 2 of Pocock et al. [ 27 ] for an example). We note that acute GvHD alone or as a component of a composite endpoint is problematic because of the lack of definitive diagnostic criteria and substantial inter-observer discordance. Consequently, a clinical trial with acute GvHD as the primary endpoint (either alone or within a composite endpoint) is only definitive when a masked (blinded) randomized design is used.
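A minimal sketch of the win-ratio tally for matched pairs follows. The hierarchy here is time to death first, then time to a second event, with longer times better; all data are hypothetical and this ignores censoring, which a real analysis must handle:

```python
def win_ratio(pairs):
    """Win ratio across matched (treatment, control) pairs.

    Each subject is a tuple of hierarchical outcome times, most
    important first (e.g., time to death, then time to relapse);
    longer times are better. Ties pass to the next outcome; fully
    tied pairs count as neither win nor loss.
    """
    wins = losses = 0
    for treatment, control in pairs:
        for t, c in zip(treatment, control):
            if t > c:
                wins += 1
                break
            if t < c:
                losses += 1
                break
    return wins / losses

ratio = win_ratio([
    ((10, 4), (6, 8)),  # treatment subject lives longer: win
    ((7, 2), (7, 5)),   # tie on death; control relapses later: loss
    ((3, 1), (9, 2)),   # control subject lives longer: loss
    ((5, 6), (5, 3)),   # tie on death; treatment relapses later: win
])
# -> 2 wins / 2 losses = 1.0
```

A win ratio above 1 favors the experimental arm; the hierarchy ensures a difference in deaths always outranks a difference in lesser events.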

Multi-arm clinical trials

Multi-arm clinical trials are an efficient way to conduct >1 investigation/evaluation within a protocol. Like trials with >1 primary endpoint, multi-arm trials can have increased false-positive error rates because of multiple comparisons. Strategies like those discussed above can be used to control the false-positive rate (e.g., α splitting; fixed-sequence tests). Some multi-arm trials choose not to control the overall α and use the same α for each comparison. When all comparisons in a multi-arm trial are reported in one report it is straightforward to count the number of comparisons and calculate the overall false-positive rate [ 28 ]. However, it is unfortunately common for multi-arm trials to report each comparison in separate publications [ 29 , 30 ]. As such, readers need to be aware of the general design when reading and interpreting results of only one comparison within a multi-arm trial.

When comparing two or more interventions added to a backbone (sometimes placebo, if there is no standard-of-care), factorial designs can be used to evaluate potential synergy or interactions between the interventions. For example, a multi-arm study of two therapies designated X and Y added to a backbone designated B could have four arms: X + B, Y + B, X + Y + B, and B [ 31 ]. This design allows quantification of the “interaction” between X and Y, namely, are the therapies better or worse together, or do they have individual benefits which are additive [ 32 , 33 ]? These designs are uniquely able to evaluate multiple therapies in this way but can quickly become large and expensive. Some factorial designs assume X and Y are “independent” in the sense that any benefit of X can be evaluated ignoring whether a subject received Y. Analyses then pool data across arms to evaluate X and Y separately: for example, to evaluate X, the X + B and X + Y + B arms are combined and compared with the combined B and Y + B arms. If the assumption of independence holds, this design can lead to a substantial decrease in sample size compared with running separate trials of B + X and B + Y. But if there is a positive or negative synergy or interaction between X and Y, results of the trial may be uninterpretable. As such, this trial design assumes no synergy or interaction, typically an unproved hypothesis. There will also be too little power to evaluate the cohorts separately because the sample size was selected assuming the cohorts could be pooled [ 34 ].
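The pooling and interaction calculations for a 2 × 2 factorial design can be sketched with purely illustrative response rates (all arm counts below are invented; a real analysis would test the interaction formally rather than just compute it):

```python
# Hypothetical 2x2 factorial sketch: response counts under backbone B with
# therapies X and Y. Under the "no interaction" assumption, the effect of X
# is estimated by pooling all X-containing arms against all non-X arms.

arms = {  # arm -> (responders, n); illustrative numbers only
    "B":     (20, 100),
    "X+B":   (30, 100),
    "Y+B":   (28, 100),
    "X+Y+B": (38, 100),
}

def rate(*names):
    """Pooled response rate across the named arms."""
    r = sum(arms[a][0] for a in names)
    n = sum(arms[a][1] for a in names)
    return r / n

effect_X = rate("X+B", "X+Y+B") - rate("B", "Y+B")  # pooled estimate of X
effect_Y = rate("Y+B", "X+Y+B") - rate("B", "X+B")  # pooled estimate of Y

# Interaction: does adding both therapies differ from the sum of the
# separate effects? Here the effects are exactly additive (interaction ~ 0),
# so pooling is valid for these invented numbers.
interaction = (rate("X+Y+B") - rate("B")) - (
    (rate("X+B") - rate("B")) + (rate("Y+B") - rate("B"))
)
print(effect_X, effect_Y, interaction)
```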

All the other analyses reported with a clinical trial

Clinical trials typically report analyses beyond the pre-specified primary objective or endpoint. Such analyses are often labeled secondary, exploratory, subgroup, or translational analyses. Because of the increased probability of a false-positive conclusion discussed above, all secondary objectives and analyses in a clinical trial should be interpreted as non-definitive or hypothesis-generating. When many “secondary” analyses are provided after the primary endpoint of a trial is not met, any “significant” findings should be viewed with strong skepticism or ignored outright.

Sub-group analyses are common in clinical trials data reporting. Because the power of a comparison is related to the sample size, sub-group comparisons have less power than comparisons of the entire population. Lack of significance (p value > α) in a sub-group does not mean there is no association in the sub-group. It can be a false-negative result because the sample size is too small or for many other reasons [ 35 ]. Interpretation of p values is challenging in general, especially in the context of evaluating multiple subgroups [ 35 ]. In these analyses, reviewing point-estimates and confidence intervals should be the focus. Interpretation of confidence intervals is also challenging; Greenland et al. [ 35 ] provide guidance. As noted above, retrospective or post-hoc power analyses are never appropriate for a subgroup or any other analysis [ 11 ]. Sub-group analyses can only be used to assess whether there appears to be substantial heterogeneity across sub-groups compared with the entire trial population [ 36 , 37 , 38 ]. Forest plots are a way to visualize this. If there appears to be heterogeneity (some subgroups appear to benefit and others do not), a definitive evaluation of such a sub-group effect requires validation in a new trial.

Sub-group analyses which are not pre-specified should be viewed skeptically or ignored. If someone evaluates 100 different non-pre-specified subgroups, each with an α of 5%, we would expect five to have a p value < 0.05 even when there is no true difference in any of the sub-groups analyzed. This feature of statistical significance testing means that if enough tests are conducted, a significant p value is very likely to be found. Analyses conducted until a result with a significant p value is found are sometimes described as “fishing expeditions.” As noted above, subgroup analyses typically lack power, which leads many significant subgroup results to be false-positives. These issues are why so much emphasis is put on pre-specifying subgroup and other secondary analyses in clinical trials.
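The 100-subgroup thought experiment can be simulated directly. The seed and subgroup count below are arbitrary; under the null hypothesis a p value is uniformly distributed, so roughly 5 of 100 tests cross the 0.05 threshold by chance.

```python
# Simulation: testing 100 non-pre-specified subgroups at alpha = 0.05 when
# no true difference exists in any of them yields ~5 "significant" results.

import random

random.seed(1)  # arbitrary seed, for reproducibility of the sketch

def null_subgroup_pvalue():
    """Under the null hypothesis, a p value is uniform on (0, 1)."""
    return random.random()

n_tests = 100
significant = sum(null_subgroup_pvalue() < 0.05 for _ in range(n_tests))
print(significant)  # around 5 false-positives expected by chance alone
```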

Non-inferiority

Randomized clinical trials evaluating whether one therapy is better than another nearly always analyze results using the intent-to-treat (ITT) principle: subjects are analyzed in their assigned (randomized) cohort regardless of the intervention they received. ITT analyses are considered “conservative” in that subjects receiving the alternative (non-assigned) intervention skew or “bias” the comparison towards showing no difference between the cohorts. In a superiority trial an ITT analysis may therefore result in the incorrect conclusion that an intervention is ineffective (a false-negative). When the primary objective of a trial is to evaluate non-inferiority, however, an ITT analysis skews the data towards showing non-inferiority. Because of this it is typical in non-inferiority trials to use an as-treated analysis as the primary analysis [ 39 , 40 ].

A critical element of a non-inferiority design is the non-inferiority margin, the largest loss of efficacy the design is able to exclude. There are no specific rules on what margin warrants a conclusion of “non-inferiority,” although some regulatory agencies have provided guidance in some situations, and margins vary widely across endpoints, patient populations, and trials [ 39 , 40 , 41 , 42 , 43 , 44 ]. Any interpretation of a non-inferiority trial requires the reader to evaluate whether they find the selected non-inferiority margin convincing and of clinical import.
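One common form of the margin logic can be sketched as follows: non-inferiority on a difference in response rates is concluded when the lower confidence bound for (experimental − control) lies above −margin. The data, the 10-percentage-point margin, and the normal-approximation interval are all illustrative assumptions, not values from any guidance document.

```python
# Sketch: non-inferiority test for a difference in response rates using a
# normal-approximation confidence interval. Margin and data are invented.

from math import sqrt

def noninferior(x_exp, n_exp, x_ctl, n_ctl, margin, z=1.96):
    """Return (non-inferiority shown?, lower CI bound of the difference)."""
    p_e, p_c = x_exp / n_exp, x_ctl / n_ctl
    diff = p_e - p_c
    se = sqrt(p_e * (1 - p_e) / n_exp + p_c * (1 - p_c) / n_ctl)
    lower = diff - z * se
    return lower > -margin, round(lower, 3)

# 70% vs 72% response, margin of 10 percentage points: the lower CI bound
# (-0.109) falls below -0.10, so non-inferiority is NOT shown at this margin.
print(noninferior(140, 200, 144, 200, margin=0.10))  # (False, -0.109)
```

Note how the conclusion hinges entirely on the chosen margin: the same data would satisfy a 12-point margin, which is why the reader must judge whether the margin itself is clinically convincing.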

The interpretation of non-inferiority trials (and most clinical trials) is further complicated when endpoints are measured beyond the intervention period, which is (nearly) always the case with a survival endpoint. After the intervention period, there is typically less information on how patients are being treated and followed, and later therapies or interventions are often not randomly or equally allocated across arms. For example, when patients were randomized between lenalidomide and placebo for post-transplant maintenance therapy of multiple myeloma, therapy after failure varied by randomized arm. In many ways, clinical trial analyses with longer-term endpoints should be reviewed essentially as observational database analyses, with the associated caveats in analysis and interpretation [ 1 ].

Survival by response

Difficult as it is to believe, analyses comparing survival of responders versus non-responders remain common despite widespread knowledge that such analyses are subject to diverse biases [ 45 ]. A critical bias is that a subject must live long enough to respond; this is referred to as guarantee-time or immortal-time bias. Statistical remedies for these biases are described elsewhere [ 46 ].
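The bias, and one standard remedy (a landmark analysis, among the approaches described in [ 46 ]), can be shown with a simulation. Everything below is invented: exponential survival with a 12-month mean, response assessed at month 3, and no true effect of response on survival, yet the naive comparison still favors responders.

```python
# Sketch of guarantee-time (immortal-time) bias: responders must survive long
# enough to respond, so a naive responders-vs-non-responders comparison is
# biased even when response has no effect on survival. A landmark analysis
# (restrict to subjects alive at the landmark, classified by response status
# at that time) removes the bias. Purely simulated, illustrative data.

import random

random.seed(7)  # arbitrary seed for reproducibility

RESPONSE_TIME = 3.0  # response, if it occurs, is assessed at month 3
LANDMARK = 3.0

def simulate(n=20000):
    subjects = []
    for _ in range(n):
        survival = random.expovariate(1 / 12)  # mean 12 months, same for all
        # Response requires surviving to the assessment (the source of bias).
        responds = random.random() < 0.5 and survival > RESPONSE_TIME
        subjects.append((survival, responds))
    return subjects

subjects = simulate()

def mean_survival(group):
    return sum(s for s, _ in group) / len(group)

naive_resp = [s for s in subjects if s[1]]
naive_nonresp = [s for s in subjects if not s[1]]
landmark = [s for s in subjects if s[0] > LANDMARK]  # alive at the landmark
lm_resp = [s for s in landmark if s[1]]
lm_nonresp = [s for s in landmark if not s[1]]

naive_diff = mean_survival(naive_resp) - mean_survival(naive_nonresp)
landmark_diff = mean_survival(lm_resp) - mean_survival(lm_nonresp)
print(round(naive_diff, 2), round(landmark_diff, 2))  # naive gap is large; landmark gap ~0
```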

This is an education-orientated review of design and correct interpretation of clinical trials data. We discuss issues including multiple endpoints and subgroup analyses. Many of the issues discussed are relevant to the correct interpretation of data from clinical trials of haematopoietic cell transplants.

Zheng C, Dai R, Gale R, Zhang M. Causal inference in randomized clinical trials. Bone Marrow Transpl. 2019;55:4–8.


Jabbour J, Manana B, Zahreddine A, Al-Shaar L, Bazarbachi A, Blaise D, et al. Vitamins and minerals intake adequacy in hematopoietic stem cell transplant: results of a randomized controlled trial. Bone Marrow Transpl. 2021;56:1106–15.


Kantarjian HM, DeAngelo DJ, Stelljes M, Martinelli G, Liedtke M, Stock W, et al. Inotuzumab ozogamicin versus standard therapy for acute lymphoblastic leukemia. N Engl J Med. 2016;375:740–53.

DiNardo CD, Jonas BA, Pullarkat V, Thirman MJ, Garcia JS, Wei AH, et al. Azacitidine and venetoclax in previously untreated acute myeloid leukemia. N Engl J Med. 2020;383:617–29.

Kantarjian H, Stein A, Gökbuget N, Fielding AK, Schuh AC, Ribera J-M, et al. Blinatumomab versus chemotherapy for advanced acute lymphoblastic leukemia. N Engl J Med. 2017;376:836–47.

Sanchorawala V, Wright DG, Seldin DC, Falk RH, Finn KT, Dember LM, et al. High-dose intravenous melphalan and autologous stem cell transplantation as initial therapy or following two cycles of oral chemotherapy for the treatment of AL amyloidosis: results of a prospective randomized trial. Bone Marrow Transpl. 2004;33:381–8.

Garderet L, Iacobelli S, Moreau P, Dib M, Lafon I, Niederwieser D, et al. Superiority of the triple combination of bortezomib-thalidomide-dexamethasone over the dual combination of thalidomide-dexamethasone in patients with multiple myeloma progressing or relapsing after autologous transplantation: the MMVAR/IFM 2005-04 Randomized Phase III Trial from the Chronic Leukemia Working Party of the European Group for Blood and Marrow Transplantation. J Clin Oncol. 2012;30:2475–82.

Deininger MW, Kopecky KJ, Radich JP, Kamel‐Reid S, Stock W, Paietta E, et al. Imatinib 800 mg daily induces deeper molecular responses than imatinib 400 mg daily: results of SWOG S0325, an intergroup randomized PHASE II trial in newly diagnosed chronic phase chronic myeloid leukaemia. Br J Haematol. 2014;164:223–32.

Rubinstein L, Crowley J, Ivy P, LeBlanc M, Sargent D. Randomized phase II designs. Clin Cancer Res. 2009;15:1883–90.

Rubinstein LV, Korn EL, Freidlin B, Hunsberger S, Ivy SP, Smith MA. Design issues of randomized phase II trials and a proposal for phase II screening trials. J Clin Oncol. 2005;23:7199–206.

Hoenig JM, Heisey DM. The abuse of power: the pervasive fallacy of power calculations for data analysis. Am Stat. 2001;55:19–24.

Horowitz MM, Gale RP, Sondel PM, Goldman JM, Kersey J, Kolb H-J, et al. Graft-versus-leukemia reactions after bone marrow transplantation. Blood. 1990;75:555–62.

Gooley TA, Leisenring W, Crowley J, Storer BE. Estimation of failure probabilities in the presence of competing risks: new representations of old estimators. Stat Med. 1999;18:695–706.

Tsiatis A. A nonidentifiability aspect of the problem of competing risks. Proc Natl Acad Sci. 1975;72:20–2.

Mori T, Kikuchi T, Koh M, Koda Y, Yamazaki R, Sakurai M, et al. Cytomegalovirus retinitis after allogeneic hematopoietic stem cell transplantation under cytomegalovirus antigenemia-guided active screening. Bone Marrow Transpl. 2021;56:1266–71.

Al-Kadhimi Z, Gul Z, Abidi M, Lum L, Deol A, Chen W, et al. Low incidence of severe cGvHD and late NRM in a phase II trial of thymoglobulin, tacrolimus and sirolimus for GvHD prevention. Bone Marrow Transpl. 2017;52:1304–10.

DeFilipp Z, Li S, Avigan D, Armand P, Ho VT, Koreth J, et al. A phase II study of reduced intensity double umbilical cord blood transplantation using fludarabine, melphalan, and low dose total body irradiation. Bone Marrow Transpl. 2020;55:804–10.

Gray RJ. A class of K-sample tests for comparing the cumulative incidence of a competing risk. Ann Stat. 1988;16: 1141–54.

Inoue Y, Nakano N, Fuji S, Eto T, Kawakita T, Suehiro Y, et al. Impact of conditioning intensity and regimen on transplant outcomes in patients with adult T-cell leukemia-lymphoma. Bone Marrow Transpl. 2021;31:1–11.

Shimomura Y, Hara M, Konuma T, Itonaga H, Doki N, Ozawa Y, et al. Allogeneic hematopoietic stem cell transplantation for myelodysplastic syndrome in adolescent and young adult patients. Bone Marrow Transpl. 2021;56:1–8.

Inoue Y, Okinaka K, Fuji S, Inamoto Y, Uchida N, Toya T, et al. Severe acute graft-versus-host disease increases the incidence of blood stream infection and mortality after allogeneic hematopoietic cell transplantation: Japanese transplant registry study. Bone Marrow Transpl. 2021;56:1–12.

Jepsen C, Turkiewicz D, Ifversen M, Heilmann C, Toporski J, Dykes J, et al. Low incidence of hemorrhagic cystitis following ex vivo T-cell depleted haploidentical hematopoietic cell transplantation in children. Bone Marrow Transpl. 2020;55:207–214.

Hoering A, LeBlanc M, Crowley JJ. Randomized phase III clinical trial designs for targeted agents. Clin Cancer Res. 2008;14:4358–4367.

Spiessens B, Debois M. Adjusted significance levels for subgroup analyses in clinical trials. Contemp Clin trials. 2010;31:647–656.

Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER). Multiple endpoints in clinical trials: guidance for industry. U.S. Department of Health and Human Services, Food and Drug Administration.

Malladi R, Ahmed I, McIlroy G, Dignan FL, Protheroe R, Jackson A, et al. Azacitidine for the treatment of steroid-refractory chronic graft-versus-host disease: the results of the phase II AZTEC clinical trial. Bone Marrow Transpl. 2021;56:2948–55.

Pocock SJ, Ariti CA, Collier TJ, Wang D. The win ratio: a new approach to the analysis of composite endpoints in clinical trials based on clinical priorities. Eur Heart J. 2012;33:176–82.

Sekeres MA, Othus M, List AF, Odenike O, Stone RM, Gore SD, et al. Randomized phase II study of azacitidine alone or in combination with lenalidomide or with vorinostat in higher-risk myelodysplastic syndromes and chronic myelomonocytic leukemia: North American Intergroup Study SWOG S1117. J Clin Oncol. 2017;35:2745–53. https://doi.org/10.1200/JCO.2015.66.2510 .


Winter SS, Dunsmore KP, Devidas M, Wood BL, Esiashvili N, Chen Z, et al. Improved survival for children and young adults with T-lineage acute lymphoblastic leukemia: results from the Children’s Oncology Group AALL0434 methotrexate randomization. J Clin Oncol. 2018;36:2926.

Dunsmore KP, Winter S, Devidas M, Wood BL, Esiashvili N, Eisenberg N, et al. COG AALL0434: a randomized trial testing nelarabine in newly diagnosed t-cell malignancy. J Clin Oncol. 2018;36:10500.

Pettengell R, Uddin R, Boumendil A, Johnson R, Metzner B, Martín A, et al. Durable benefit of rituximab maintenance post-autograft in patients with relapsed follicular lymphoma: 12-year follow-up of the EBMT lymphoma working party Lym1 trial. Bone Marrow Transpl. 2021;56:1413–21.

Milligan DW, Wheatley K, Littlewood T, Craig JI, Burnett AK. Group NHOCS. Fludarabine and cytosine are less effective than standard ADE chemotherapy in high-risk acute myeloid leukemia, and addition of G-CSF and ATRA are not beneficial: results of the MRC AML-HR randomized trial. Blood. 2006;107:4614–22.

Morgan GJ, Gregory WM, Davies FE, Bell SE, Szubert AJ, Brown JM, et al. The role of maintenance thalidomide therapy in multiple myeloma: MRC Myeloma IX results and meta-analysis. Blood J Am Soc Hematol. 2012;119:7–15.


Green S, Liu P-Y, O’Sullivan J. Factorial design considerations. J Clin Oncol. 2002;20:3424–30.

Greenland S, Senn SJ, Rothman KJ, Carlin JB, Poole C, Goodman SN, et al. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. Eur J Epidemiol. 2016;31:337–50.

Lagakos SW. The challenge of subgroup analyses-reporting without distorting. N Engl J Med. 2006;354:1667.

Rothwell PM. Subgroup analysis in randomised controlled trials: importance, indications, and interpretation. Lancet. 2005;365:176–86.

Hernández AV, Boersma E, Murray GD, Habbema JDF, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151:257–64.

Jeker B, Farag S, Taleghani BM, Novak U, Mueller BU, Li Q, et al. A randomized evaluation of vinorelbine versus gemcitabine chemotherapy mobilization of stem cells in myeloma patients. Bone Marrow Transpl. 2020;55:2047–51.

Schrappe M, Bleckmann K, Zimmermann M, Biondi A, Möricke A, Locatelli F, et al. Reduced-intensity delayed intensification in standard-risk pediatric acute lymphoblastic leukemia defined by undetectable minimal residual disease: results of an International Randomized Trial (AIEOP-BFM ALL 2000). J Clin Oncol. 2017;36:244–53.

Johansson J-E, Bratel J, Hardling M, Heikki L, Mellqvist U-H, Hasséus B. Cryotherapy as prophylaxis against oral mucositis after high-dose melphalan and autologous stem cell transplantation for myeloma: a randomised, open-label, phase 3, non-inferiority trial. Bone Marrow Transpl. 2019;54:1482–8.

Kanda Y, Kobayashi T, Mori T, Tanaka M, Nakaseko C, Yokota A, et al. A randomized controlled trial of cyclosporine and tacrolimus with strict control of blood concentrations after unrelated bone marrow transplantation. Bone Marrow Transpl. 2016;51:103–9.

Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER). Non-inferiority clinical trials to establish effectiveness: guidance for industry. U.S. Department of Health and Human Services, Food and Drug Administration.

Oncology Center of Excellence, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER). Clinical trial endpoints for the approval of cancer drugs and biologics: guidance for industry. Silver Spring: U.S. Department of Health and Human Services, Food and Drug Administration; 2018.

Kröger N, Sockel K, Wolschke C, Bethge W, Schlenk RF, Wolf D, et al. Comparison between 5-Azacytidine treatment and allogeneic stem-cell transplantation in elderly patients with advanced MDS according to donor availability (VidazaAllo study). J Clin Oncol. 2021;39:3318–27.

Anderson JR, Cain KC, Gelber RD. Analysis of survival by tumor response. J Clin Oncol. 1983;1:710–9.


Acknowledgements

MO acknowledges support from the National Cancer Institute (NCI) grant U10CA180819. MJZ acknowledges support from the National Institutes of Health (NCI, NHLBI) and Health Resources and Services Administration (HRSA). RPG acknowledges support from the National Institute of Health Research (NIHR) Biomedical Research Centre funding scheme.

Author information

Authors and Affiliations

Division of Public Health, Fred Hutchinson Cancer Research Center, Seattle, WA, USA

Megan Othus

Division of Biostatistics, Medical College of Wisconsin, Milwaukee, WI, USA

Mei-Jie Zhang

Haematology Research Centre, Department of Immunology and Inflammation, Imperial College London, London, UK

Robert Peter Gale


Contributions

MO wrote the initial typescript. MJZ and RPG reviewed it and provided comments. All authors accept responsibility for the content of the final typescript and agree to its submission for publication.

Corresponding author

Correspondence to Megan Othus .

Ethics declarations

Competing interests

MO is a consultant for Daiichi Sankyo, Biosight, and Merck and is on independent data safety monitoring boards for Celgene and Glycomimetics. RPG is a consultant to BeiGene Ltd., Fusion Pharma LLC, LaJolla NanoMedical Inc., Mingsight Pharmaceuticals Inc., CStone Pharmaceuticals, NexImmune Inc., and Prolacta Bioscience; advisor to Antengene Biotech LLC; Medical Director, FFF Enterprises Inc.; partner, AZAC Inc.; Board of Directors, Russian Foundation for Cancer Research Support; and Scientific Advisory Board, StemRad Ltd.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article.

Othus, M., Zhang, MJ. & Gale, R.P. Clinical trials: design, endpoints and interpretation of outcomes. Bone Marrow Transplant 57, 338–342 (2022). https://doi.org/10.1038/s41409-021-01542-0


Received : 08 November 2021

Revised : 12 November 2021

Accepted : 22 November 2021

Published : 07 January 2022

Issue Date : March 2022

DOI : https://doi.org/10.1038/s41409-021-01542-0




Clinical Research Trials and You: Questions and Answers


What is a clinical trial?

A clinical trial is a research study that involves people like you. Researchers conduct clinical trials to find new or better ways to prevent, detect, or treat health conditions. Often, researchers want to find out if a new test, treatment, or preventive measure is safe and effective. Tests can include ways to screen for, diagnose, or prevent a disease or condition. Treatments and preventive measures can include medications, surgeries, medical devices, and behavioral therapies.

Clinical trials are important because they serve as the foundation for most medical advances. Without clinical trials, many of the medical treatments and cures we have today wouldn’t exist.

Why should I volunteer for a clinical trial?

People volunteer for clinical trials for many reasons. Some want to advance science or help doctors and researchers learn more about disease and improve health care. Others, such as those with an illness, may join to try new or advanced treatments that aren’t widely available.

Whatever your reason for joining a clinical trial, researchers generally need two types of volunteers: those without specific illnesses or conditions and those with them.  

A healthy volunteer is someone in a clinical trial with no known related health problems. Researchers need healthy volunteers to establish a healthy or optimal reference point. They use data from healthy volunteers to test new treatments or interventions, not to provide direct benefit to participants.

A patient volunteer is someone in a clinical trial who has the condition being studied. Researchers need patient volunteers to learn if new tests, treatments, or preventive measures are safe and effective. Not all trial participants will receive experimental medications or treatments; sometimes, participants may receive a placebo. Researchers need to vary medications and treatments so they can compare results and learn from their differences.

While a study’s treatment or findings may help patients directly, sometimes participants will receive no direct benefit. However, in many cases, study results can still serve as building blocks that are used to help people later.

What would I experience during a clinical trial?

During a clinical trial, the study team will track your health. Participating in a clinical trial may take more time than standard treatment, and you may have more tests and treatments than you would if you weren’t in a clinical trial. The study team also may ask you to keep a log of symptoms or other health measures, fill out forms about how you feel, or complete other tasks. You may need to travel or reside away from home to take part in a study.

What are the risks and benefits of my participation in a clinical trial?

Clinical trials can provide many benefits to participants and society. However, before volunteering for a clinical trial, you should talk with your health care provider and the study team about the risks and benefits.

Potential Risks

When weighing the risks of volunteering, you should consider:

  • The likelihood of any harm occurring
  • How much harm could result from your participation in the study

Researchers try to limit patient discomfort during clinical trials. However, in some cases, volunteers have complications that require medical attention. In rare cases, volunteers have died when participating in clinical trials.

Potential Benefits

The benefits of volunteering can include:

  • Treatment with study medications that may not be available elsewhere
  • Care from health care professionals who are familiar with the most advanced treatments available
  • The opportunity to learn more about an illness and how to manage it
  • Playing an active role in your health care
  • Helping others by contributing to medical research

Where can I find a mental health clinical trial?

The National Institute of Mental Health (NIMH) is the lead federal agency for research on mental disorders. While NIMH supports research around the world, it also conducts many clinical trials at the National Institutes of Health (NIH) campus in Bethesda, Maryland.

To learn more about NIMH studies conducted on the NIH campus, visit  NIMH's Join a Study webpage . These studies enroll volunteers from the local area and across the nation. In some cases, participants receive free study-related evaluations, treatment, and transportation to NIH.

To learn more about NIMH-funded clinical trials at universities, medical centers, and other institutions, visit  NIMH's clinical trials webpage .

What is the next step after I find a clinical trial?

To learn more about a specific clinical trial, contact the study coordinator. You can usually find this contact information in the trial’s description.

If you decide to join a clinical trial, let your health care provider know. They may want to talk to the study team to coordinate your care and ensure the trial is safe for you. Find tips to help prepare for and get the most out of your visit .

How do I know if I can join a clinical trial?

People of all ages, ethnicities, and racial backgrounds can volunteer for clinical trials. If you want to join a clinical trial, you must be eligible to participate in that specific trial. Your eligibility can usually be determined by phone or online screening.

All clinical trials have eligibility guidelines called inclusion and exclusion criteria. These criteria may include:

  • The type and stage of an illness
  • Treatment history
  • Other medical conditions

Researchers use these guidelines to find suitable study participants, maximize participant safety, and ensure trial data are accurate.

What kinds of questions should I ask the study team before deciding if I want to take part in a clinical trial?

It can be helpful to write down any questions or concerns you have. When you speak with the study team, you may want to take notes or ask to record the conversation. Bringing a supportive friend or family member may also be helpful.

The following topics may give you some ideas for questions to ask:

  • The study’s purpose and duration
  • The possible risks and benefits
  • Your participation and care
  • Personal and cost concerns

For a list of specific questions, check out Questions to Ask About Volunteering for a Research Study  from the U.S. Department of Health and Human Services’ Office for Human Research Protections.

How is my safety protected if I choose to take part in a clinical trial?

Strict rules and laws help protect participants in research studies, and the study team must follow these rules to conduct research. Below are some measures that can help ensure your safety.  

Ethical Guidelines

Ethical guidelines protect volunteers and ensure a study’s scientific integrity. Regulators created these guidelines primarily in response to past research errors and misconduct. Federal policies and regulations require that researchers conducting clinical trials obey these ethical guidelines.

Informed Consent

Before joining a trial, you should understand what your participation will involve. The study team will provide an informed consent document with detailed information about the study. The document will include details about the length of the trial, required visits, medications, and medical procedures. It will also explain the expected outcomes, potential benefits, possible risks, and other trial details. The study team will review the informed consent document with you and answer any questions you have. You can decide then or later if you want to take part in the trial.

If you choose to join the trial, you will be asked to sign the informed consent document. This document is not a contract; it verifies you understand the study and describes what your participation will include and how your data will be used. Your consent in a clinical trial is ongoing and your participation is voluntary. You may stop participating at any time.

Institutional Review Board Review

Institutional review boards (IRBs) review and monitor most clinical trials in the United States. An IRB works to protect the rights, welfare, and privacy of human subjects. An IRB usually includes a team of independent doctors, scientists, and community members. The IRB’s job is to review potential studies, weigh the risks and benefits of studies, and ensure that studies are safe and ethical.

If you’re thinking about volunteering for a clinical trial, ask if an IRB reviewed the trial.

What happens when a clinical trial ends?

When a clinical trial ends, researchers will analyze the data to help them determine the results. After reviewing the findings, researchers often submit them to scientific journals for others to review and build on.

Before your participation ends, the study team should tell you if and how you’ll receive the results. If this process is unclear, be sure to ask about it.

Where can I find more information?

This fact sheet covers the basics of clinical trials. To find more details and resources, visit  NIMH's clinical trials webpage .

For More Information

MedlinePlus  (National Library of Medicine) ( en español  )

ClinicalTrials.gov  ( en español  )

U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health NIH Publication No. 23-MH-4379 Revised 2023

The information in this publication is in the public domain and may be reused or copied without permission. However, you may not reuse or copy images. Please cite the National Institute of Mental Health as the source. Read our copyright policy to learn more about our guidelines for reusing NIMH content.

Cochrane Database Syst Rev

Monitoring strategies for clinical intervention studies

Background

Trial monitoring is an important component of good clinical practice to ensure the safety and rights of study participants, confidentiality of personal information, and quality of data. However, the effectiveness of various existing monitoring approaches is unclear. Information to guide the choice of monitoring methods in clinical intervention studies may help trialists, support units, and monitors to effectively adjust their approaches to current knowledge and evidence.

Objectives

To evaluate the advantages and disadvantages of different monitoring strategies (including risk‐based strategies and others) for clinical intervention studies examined in prospective comparative studies of monitoring interventions.

Search methods

We systematically searched CENTRAL, PubMed, and Embase via Elsevier for relevant published literature up to March 2021. We searched the online 'Studies within A Trial' (SWAT) repository, grey literature, and trial registries for ongoing or unpublished studies.

Selection criteria

We included randomized or non‐randomized prospective, empirical evaluation studies of different monitoring strategies in one or more clinical intervention studies. We applied no restrictions for language or date of publication.

Data collection and analysis

We extracted data on the evaluated monitoring methods, countries involved, study population, study setting, randomization method, and numbers and proportions in each intervention group. Our primary outcome was critical and major monitoring findings in prospective intervention studies. Monitoring findings were classified according to different error domains (e.g. major eligibility violations) and the primary outcome measure was a composite of these domains. Secondary outcomes were individual error domains, participant recruitment and follow‐up, and resource use. If we identified more than one study for a comparison and outcome definitions were similar across identified studies, we quantitatively summarized effects in a meta‐analysis using a random‐effects model. Otherwise, we qualitatively summarized the results of eligible studies stratified by different comparisons of monitoring strategies. We used the GRADE approach to assess the certainty of the evidence for different groups of comparisons.
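Where more than one study contributed to a comparison, effects were pooled with a random‐effects model. As an illustrative sketch only — DerSimonian‐Laird inverse‐variance pooling of log risk ratios, with invented input numbers rather than data from this review:

```python
import math

def pool_random_effects(log_rrs, ses, z=1.96):
    """DerSimonian-Laird random-effects pooling of log risk ratios.

    log_rrs: per-study log risk ratios; ses: their standard errors.
    Returns the pooled risk ratio with its 95% confidence interval.
    """
    w = [1.0 / se ** 2 for se in ses]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rrs) - 1)) / c)  # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Two hypothetical studies: log risk ratios 0.0 and 0.26,
# with standard errors 0.15 and 0.30.
rr, lo, hi = pool_random_effects([0.0, 0.26], [0.15, 0.30])
```

On these invented inputs the pooled risk ratio lands close to 1 with a confidence interval spanning 1 — the same qualitative pattern as the review's first comparison.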

Main results

We identified eight eligible studies, which we grouped into five comparisons.

1. Risk‐based versus extensive on‐site monitoring: based on two large studies, we found moderate certainty of evidence for the combined primary outcome of major or critical findings that risk‐based monitoring is not inferior to extensive on‐site monitoring. Although the risk ratio was close to 'no difference' (1.03, with a 95% confidence interval [CI] of 0.81 to 1.33; values below 1.0 favor the risk‐based strategy), the high imprecision in one study and the small number of eligible studies resulted in a wide CI around the summary estimate. Low certainty of evidence suggested that extensive on‐site monitoring was associated with considerably higher resource use and costs (up to a factor of 3.4). Data on recruitment or retention of trial participants were not available.
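To make a risk ratio and its confidence interval concrete: both are computed on the log scale from the two event proportions. A minimal sketch with invented counts (the underlying ADAMON and OPTIMON data are not reproduced here):

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A versus group B, with a 95% CI via the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for a 2x2 table of events and totals.
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Invented example: 52/500 vs 50/500 participants with a major/critical finding.
rr, lo, hi = risk_ratio_ci(52, 500, 50, 500)  # RR close to 1, CI spanning 1
```

With small event counts, the log‐scale standard error is large, so even a risk ratio near 1 carries a wide interval — the imprecision described above.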

2. Central monitoring with triggered on‐site visits versus regular on‐site visits: combining the results of two eligible studies yielded low certainty of evidence, with a risk ratio of 1.83 (95% CI 0.51 to 6.55) in favor of the triggered monitoring intervention. Data on recruitment, retention, and resource use were not available.

3. Central statistical monitoring and local monitoring performed by site staff with annual on‐site visits versus central statistical monitoring and local monitoring only: based on one study, there was moderate certainty of evidence that a small number of major and critical findings were missed with the central monitoring approach without on‐site visits: 3.8% of participants in the group without on‐site visits and 6.4% in the group with on‐site visits had a major or critical monitoring finding (odds ratio 1.7, 95% CI 1.1 to 2.7; P = 0.03). The absolute number of monitoring findings was very low, probably because the defined major and critical findings were very study specific and central monitoring was present in both intervention groups. Very low certainty of evidence did not suggest a relevant effect on participant retention, and very low certainty of evidence indicated an extra cost for on‐site visits of USD 2,035,392. There were no data on recruitment.
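As a quick arithmetic check, the reported odds ratio follows directly from the two quoted proportions:

```python
def odds_ratio(p1, p0):
    """Odds ratio comparing event probability p1 to p0."""
    return (p1 / (1.0 - p1)) / (p0 / (1.0 - p0))

# 6.4% of participants with a finding (on-site visits) vs 3.8% (no visits),
# as reported above for the START Monitoring Substudy.
or_estimate = odds_ratio(0.064, 0.038)  # ~1.73, matching the reported OR of 1.7
```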

4. Traditional 100% source data verification (SDV) versus targeted or remote SDV: the two studies assessing targeted and remote SDV reported findings only related to source documents. Compared to the final database obtained using the full SDV monitoring process, only a small proportion of remaining errors on overall data were identified using the targeted SDV process in the MONITORING study (absolute difference 1.47%, 95% CI 1.41% to 1.53%). Targeted SDV was effective in the verification of source documents but increased the workload on data management. The other included study was a pilot study which compared traditional on‐site SDV versus remote SDV and found little difference in monitoring findings and the ability to locate data values despite marked differences in remote access in two clinical trial networks. There were no data on recruitment or retention.

5. Systematic on‐site initiation visit versus on‐site initiation visit upon request: very low certainty of evidence suggested no difference in retention and recruitment between the two approaches. There were no data on critical and major findings or on resource use.

Authors' conclusions

The evidence base is limited in terms of quantity and quality. Ideally, for each of the five identified comparisons, more prospective, comparative monitoring studies nested in clinical trials and measuring effects on all outcomes specified in this review are necessary to draw more reliable conclusions. However, the results suggesting risk‐based, targeted, and mainly central monitoring as an efficient strategy are promising. The development of reliable triggers for on‐site visits is ongoing; different triggers might be used in different settings. More evidence on risk indicators that identify sites with problems or the prognostic value of triggers is needed to further optimize central monitoring strategies. In particular, approaches with an initial assessment of trial‐specific risks that need to be closely monitored centrally during trial conduct with triggered on‐site visits should be evaluated in future research.

Plain language summary

New monitoring strategies for clinical trials

Our question

We reviewed the evidence on the effects of new monitoring strategies on monitoring findings, participant recruitment, participant follow‐up, and resource use in clinical trials. We also summarized the different components of tested strategies and qualitative evidence from process evaluations.

Monitoring a clinical trial is important to ensure the safety of participants and the reliability of results. New methods have been developed for monitoring practices but further assessments of these new methods are needed to see if they do improve effectiveness without being inferior to established methods in terms of patient rights and safety, and quality assurance of trial results. We reviewed studies that examined this question within clinical trials, i.e. studies comparing different monitoring strategies used in clinical trials.

Study characteristics

We included eight studies which covered a variety of monitoring strategies in a wide range of clinical trials, including national and large international trials. They included primary (general), secondary (specialized), and tertiary (highly specialized) health care. The size of the studies ranged from 32 to 4371 participants at one to 196 sites.

Key results

We identified five comparisons.

The first comparison of risk‐based monitoring versus extensive on‐site monitoring found no evidence that the risk‐based approach is inferior to extensive on‐site monitoring in terms of the proportion of participants with a critical or major monitoring finding not identified by the corresponding method, while resource use was three‐ to five‐fold higher with extensive on‐site monitoring.

For the second comparison of central statistical monitoring with triggered on‐site visits versus regular (untriggered) on‐site visits, we found some evidence that central statistical monitoring can identify sites in need of support by an on‐site monitoring intervention.

In the third comparison, the evaluation of adding an on‐site visit to local and central monitoring revealed a higher percentage of participants with major or critical monitoring findings in the on‐site visit group, but low numbers of absolute monitoring findings in both groups. This means that without on‐site visits, some monitoring findings will be missed, but none of the missed findings had any serious impact on patient safety or the validity of the trial's results.

In the fourth comparison, two studies assessed new source data verification processes, which are used to check that data recorded within the trial Case Report Form (CRF) match the primary source data (e.g. medical records), and reported little difference from full source data verification processes for the targeted as well as for the remote approach.

In the fifth comparison, one study showed no difference in participant recruitment and participant follow‐up between a monitoring approach with systematic initiation visits versus an approach with initiation visits upon request by study sites.

Certainty of evidence

We are moderately certain that risk‐based monitoring is not inferior to extensive on‐site monitoring with respect to critical and major monitoring findings in clinical trials. For the remaining body of evidence, there is low or very low certainty in results due to imprecision, small number of studies, or high risk of bias. Ideally, for each of the five identified comparisons, more high‐quality monitoring studies that measure effects on all outcomes specified in this review are necessary to draw more reliable conclusions.

Summary of findings

Summary of findings 1

a Downgraded one level due to the imprecision of the summary estimate, with the 95% confidence interval including both substantial advantages and disadvantages of the risk‐based monitoring intervention.

b Downgraded two levels due to substantial imprecision; there were no confidence intervals for either of the two estimates on resource use provided in the ADAMON and OPTIMON studies, and the two estimates could not be combined due to the nature of the estimate (resource use versus cost calculation).

Summary of findings 2

a Downgraded one level because both studies were not randomized, and downgraded one level for imprecision.

Summary of findings 3

a Downgraded one level because the estimate was based on a small number of events and because the estimate stemmed from a single study nested in a single trial (indirectness).

b Downgraded three levels because the 95% confidence interval of the estimate allowed for substantial benefit as well as substantial disadvantages with the intervention and there was only a small number of events (serious imprecision); in addition, the estimate stemmed from a single study nested in a single trial (indirectness).

c Downgraded three levels because the estimate was not accompanied by a confidence interval (imprecision) and because the estimate stemmed from a single study nested in a single trial (indirectness).

Summary of findings 4

a Downgraded two levels because randomization was not blinded in one of the studies and the outcomes of the two studies could not be combined.

b Downgraded by one additional level in addition to (a) for imprecision because there were no confidence intervals provided.

Summary of findings 5

a Downgraded three levels because of substantial imprecision (relevant advantages and relevant disadvantages were plausible given the small amount of data), and indirectness (a single study nested in a single trial).

b We downgraded by one additional level in addition to (a) for imprecision due to the small number of events.

Background

Trial monitoring is important for the integrity of clinical trials, the validity of their results, and the protection of participant safety and rights. The International Council for Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guideline for Good Clinical Practice (GCP) formulated several requirements for trial monitoring ( ICH 1996 ). However, the effectiveness of various existing monitoring approaches was unclear. Source data verification (SDV) during monitoring visits was estimated to use up to 25% of the sponsor's entire clinical trial budget, even though the association between data quality or participant safety and the extent of monitoring and SDV has not been clearly demonstrated ( Funning 2009 ). Consistent application of intensive on‐site monitoring creates financial and logistical barriers to the design and conduct of clinical trials, with no evidence of participant benefit or increase in the quality of clinical research ( Baigent 2008 ;  Duley 2008 ;  Embleton‐Thirsk 2019 ;  Hearn 2007 ;  Tudur Smith 2012a ;  Tudur Smith 2014 ).

Recent developments at international bodies and regulatory agencies such as the European Medicines Agency (EMA), the Organisation for Economic Co‐operation and Development (OECD), the European Commission (EC) and the Food and Drug Administration (FDA), as well as the 2016 addendum to ICH E6 GCP have supported the need for risk‐proportionate approaches to clinical trial monitoring and overall trial management ( EC 2014 ;  EMA 2013 ;  FDA 2013 ;  ICH 2016 ;  OECD 2013 ). This has encouraged study sponsors to implement risk assessments in their monitoring plans and to use alternative monitoring approaches. There are several publications reporting on the experience of using a risk‐based monitoring approach, often including central monitoring, in specific clinical trials ( Edwards 2014 ;  Heels‐Ansdell 2010 ;  Valdés‐Márquez 2011 ).

The main idea is to focus monitoring on trial‐specific risks to the integrity of the research and to essential GCP objectives, that is, risks that threaten the safety, rights, and integrity of trial participants; the safety and confidentiality of their data; or the reliable report of the trial results ( Brosteanu 2017a ). The conduct of 'lower risk' trials (lower risk for study participants) — which optimize the use of already authorized medicinal products, validated devices, implemented interventions, and interventions formally outside of the clinical trials regulations — may particularly benefit from a risk‐based approach to clinical trial monitoring in terms of timely completion and cost efficiency. Such 'lower risk' trials are often investigator‐initiated or academic‐sponsored clinical trials conducted in the academic setting ( OECD 2013 ).

Different risk assessment strategies for clinical trials have been developed, with the objective of defining risk‐proportionate monitoring plans ( Hurley 2016 ). There is no standardized approach for examining the baseline risk of a trial.
However, risk assessment approaches evaluate risks associated with the safety profile of the investigational medicinal product (IMP), the phase of the clinical trial, and the data collection process. Based on a prior risk assessment, a study‐specific combination of central/centralized and on‐site monitoring might be effective. Centralized monitoring, also referred to as central monitoring, is defined as any monitoring processes that are not performed at the study site ( FDA 2013 ), and includes remote monitoring processes. Central data monitoring is based on the evaluation of electronically available study data in order to identify study sites with poor data quality or problems in trial conduct ( SCTO 2020 ;  Venet 2012 ), whereas on‐site monitoring comprises site inspection, investigator/staff contact, SDV, observation of study procedures, and the review of regulatory elements of a trial. Central statistical monitoring (including plausibility checks of values for different variables, for instance) is an integral part of central data monitoring ( SCTO 2020 ), but this term is sometimes used interchangeably with central data monitoring. The OECD classifies risk assessment strategies into stratified approaches and trial‐specific approaches, and proposes a harmonized two‐pronged strategy based on internationally validated tools for risk assessment and risk mitigation ( OECD 2013 ). The effectiveness of these new risk‐based approaches in terms of quality assurance, patient rights and safety, and reduction of cost, needs to be empirically assessed. We examined the risk‐based monitoring approach followed at our own institution (the Clinical Trial Unit and Department of Clinical Research, University Hospital Basel, Switzerland) using mixed methods ( von Niederhausern 2017 ). In addition, several prospective studies evaluating different monitoring strategies have been conducted. 
These include ADAMON (ADApted MONitoring study;  Brosteanu 2017a  ), OPTIMON (Optimisation of Monitoring for Clinical Research Studies;  Journot 2015 ), TEMPER (TargetEd Monitoring: Prospective Evaluation and Refinement;  Stenning 2018a ), START Monitoring Substudy (Strategic Timing of AntiRetroviral Treatment;  Hullsiek 2015 ;  Wyman Engen 2020 ), and MONITORING ( Fougerou‐Leurent 2019 ).

Description of the methods being investigated

Traditional trial monitoring consists of intensive on‐site monitoring strategies comprising frequent on‐site visits and up to 100% SDV. Risk‐based monitoring is a new strategy that recognizes that not all clinical trials require the same approach to quality control and assurance ( Stenning 2018a ), and allows for stratification based on risk indicators assessed during the trial or before it starts. Risk‐based strategies differ in their risk assessment approaches as well as in their implementation and extent of on‐site and central monitoring components. They are also referred to as risk‐adapted or risk‐proportionate monitoring strategies. In this review, which is based on our published protocol ( Klatte 2019 ), we investigated the effects of monitoring methods on ensuring patient rights and safety, and the validity of trial data. These key elements of clinical trial conduct are assessed by monitoring for critical or major violation of GCP objectives, according to the classification of GCP findings described in  EMA 2017 .

Monitoring strategies empirically evaluated in studies

All the monitoring strategies eligible for this review introduced new methods that might be effective in directing monitoring components and resources guided by a risk evaluation or prioritization.

1. Risk‐based monitoring strategies

The risk‐based strategy proposed by Brosteanu and colleagues is based on an initial assessment of the risk associated with an individual trial protocol (ADAMON:  Brosteanu 2009 ). The implementation of this three‐level risk assessment focuses on critical data and procedures describing the risk associated with a therapeutic intervention and incorporates an assessment of indicators for patient‐related risks, indicators of robustness, and indicators for site‐related risks. Trial‐specific risk analysis then informs a monitoring plan that contains on‐site elements as well as central and statistical monitoring methods to a different extent corresponding to the judged risk level. The consensus risk‐assessment scale (RAS) and risk‐adapted monitoring plan (RAMP) developed by Journot and colleagues in 2010 consists of a four‐level initial risk assessment, leading to monitoring plans of four levels of intensity (OPTIMON;  Journot 2011 ). The optimized monitoring strategy concentrates on the main scientific and regulatory aspects, compliance with requirements for patient consent and serious adverse events (SAE), and the frequency of serious errors concerning the validity of the trial's main results and the trial's eligibility criteria ( Chene 2008 ). Both strategies incorporate central monitoring methods that help to specify the monitoring intervention for each study site within the framework of their assigned risk level.

2. Central monitoring with triggered on‐site visits

The triggered on‐site monitoring strategy suggested by the Medicines and Healthcare products Regulatory Agency, Medical Research Council (MRC), and UK Department of Health includes an initial risk assessment on the basis of the intervention and design of the trial and a resulting monitoring plan for different trial sites that is continuously updated through centralized monitoring. Over the course of a clinical trial, sites are prioritized for on‐site visits based on predefined central monitoring triggers ( Meredith 2011 ; TEMPER:  Stenning 2018a ).

3. Central and local monitoring

A strategy that is mainly based on central monitoring, combined with local quality control provided by qualified personnel on‐site, is being evaluated in the START Monitoring Substudy ( Hullsiek 2015 ). In this study, continuous central monitoring uses descriptive statistics on the consistency, quality, and completeness of the data. Semi‐annual performance reports are generated for each site, focusing on the key variables/endpoints regarding patient safety (SAEs, eligibility violations) and data quality. The substudy evaluates whether adding on‐site monitoring to these procedures leads to differences in the participant‐level composite outcome of monitoring findings.

4. Monitoring with targeted or remote source data verification

The monitoring strategy developed for the MONITORING study is characterized by a targeted SDV in which only regulatory and scientific key data are verified ( Fougerou‐Leurent 2019 ). This strategy is compared to full SDV and assessed based on final data quality and costs. One pilot study assessed a new strategy of remote SDV where documents were accessed via electronic health records, clinical data repositories, web‐based access technologies, or authentication and auditing tools ( Mealer 2013 ).

5. On‐site initiation visits upon request

In this monitoring strategy, systematic initiation visits at all sites are replaced by initiation visits that take place only upon investigators' request at a site ( Liénard 2006 ).

How these methods might work

The intention for risk‐based monitoring methods is to increase the efficiency of monitoring and to optimize resource use by directing the amount and content of monitoring visits according to an initially assessed risk level of an individual trial. These new methods should be at least non‐inferior in detecting major or critical violation of essential GCP objectives, according to  EMA 2017 , and might even be superior in terms of prioritizing monitoring content. The risk assessment preceding the risk‐based monitoring plan should consider the likelihood of errors occurring in key aspects of study performance, and the anticipated effect of such errors on the protection of participants and the reliability of the trial's results ( Landray 2012 ). Trials within a certain risk category are initially assigned to a defined monitoring strategy which remains adjustable throughout the conduct of the trial and should always match the needs of the trial and specific trial sites. This flexibility is an advantage, considering the heterogeneity of study designs and participating trial sites. Central monitoring would also allow for continuous verification of data quality based on prespecified triggers and thresholds, and would enable early intervention in cases of procedural or data‐recording errors. Besides the detection of missing or invalid data, trial entry procedures and protocol adherence, as well as other performance indicators, can be monitored through a continuous analysis of electronically captured data ( Baigent 2008 ). In addition, comparison with external sources may be undertaken to validate information contained in the data set; and the identification of poorly performing sites would ensure a more targeted application of on‐site monitoring resources. Use of methods that take advantage of the increasing use of electronic systems (e.g. 
electronic case report forms [eCRFs]) may allow data to be checked by automated means and allows the application of entry rules supporting up‐to‐date, high‐quality data. These methods would also ensure patient rights and safety while simultaneously improving trial management and optimizing trial conduct. Adaptations in the monitoring approach toward a reduction of on‐site monitoring visits, provided that patient rights and safety are ensured, could allow the application of resources to the most crucial components of the trial ( Journot 2011 ).
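The trigger‐and‐threshold logic described above can be sketched as a simple site‐prioritization rule. The indicator names and thresholds below are invented for illustration; a real central monitoring plan defines trial‐specific triggers:

```python
# Hypothetical per-site performance indicators (names and values invented).
sites = {
    "site_A": {"missing_data_pct": 2.1, "overdue_sae_reports": 0,
               "eligibility_violations": 0},
    "site_B": {"missing_data_pct": 9.8, "overdue_sae_reports": 3,
               "eligibility_violations": 1},
}

# Invented trigger thresholds: a site exceeding any of them is
# prioritized for an on-site monitoring visit.
TRIGGERS = {
    "missing_data_pct": 5.0,      # more than 5% of key fields missing
    "overdue_sae_reports": 0,     # any SAE report overdue
    "eligibility_violations": 0,  # any eligibility violation recorded
}

def triggered_sites(sites, triggers):
    """Return sites whose indicators exceed any predefined trigger threshold."""
    return sorted(site for site, indicators in sites.items()
                  if any(indicators[key] > limit
                         for key, limit in triggers.items()))

flagged = triggered_sites(sites, TRIGGERS)
```

In this sketch only site_B is flagged, illustrating how central data monitoring concentrates on‐site resources on sites with apparent problems.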

In order to evaluate whether these new risk‐based monitoring approaches are non‐inferior to the traditional extensive on‐site monitoring, an assessment of differences in critical and major findings during monitoring activities is essential. Monitoring findings are determined with respect to patient safety, patient rights, and reliability of the data, and classified as critical and major according to the classification of GCP findings described in the Procedures for reporting of GCP inspections requested by the Committee for Medicinal Products for Human Use ( EMA 2017 ). Critical findings are conditions, practices, or processes that adversely affect the rights, safety, or well‐being of the participants or the quality and integrity of data. Major findings are conditions, practices, or processes that might adversely affect the rights, safety, or well‐being of the participants or the quality and integrity of data.

Why it is important to do this review

There is insufficient information to guide the choice of monitoring approaches consistent with GCP to use in any given trial, and there is a lack of evidence on the effectiveness of suggested monitoring approaches. This has resulted in high heterogeneity in the monitoring practices used by research institutions, especially in the academic setting ( Morrison 2011 ). A guideline describing which type of monitoring strategy is most effective for clinical trials in terms of patient rights and safety, and data quality, is urgently needed for the academic clinical trial setting. Evaluating the benefits and disadvantages of different risk‐based monitoring strategies, incorporating components of central or targeted and triggered (or both) monitoring versus intensive on‐site monitoring, might lead to a consensus on how effective these new approaches are. In addition, evaluating the evidence of effectiveness could provide information on the extent to which on‐site monitoring content (such as SDV or frequency of site visits) can be adapted or supported by central monitoring interventions. In this review, we explored whether monitoring that incorporates central (including statistical) components could be extended to support the overall management of study quality in terms of participant recruitment and follow‐up.

The risk‐based monitoring interventions that are eligible for this review incorporate on‐site and central monitoring components, which may vary in extent and procedural structure. In line with the recommendation from the Clinical Trials Transformation Initiative ( Grignolo 2011 ), it is crucial to systematically analyze and compare the existing evidence so that best practices may be established. This review may facilitate the sharing of current knowledge on effective monitoring strategies, which would help trialists, support units, and monitors to choose the best strategy for their trials. Evaluation of the impact of a change of monitoring approaches on data quality and study cost is relevant for the effective adjustment of current monitoring strategies. In addition, evaluating the effectiveness of these new monitoring approaches in comparison with intensive on‐site monitoring might reveal possible methods to replace or support on‐site monitoring strategies by taking advantage of the increasing use of electronic systems and resulting opportunities to implement statistical analysis tools.

Criteria for considering studies for this review

Types of studies

We included randomized or non‐randomized prospective, empirical evaluation studies that assessed monitoring strategies in one or more clinical intervention studies. These types of embedded studies have recently been called 'studies within a trial' (SWATs) ( Anon 2012 ;  Treweek 2018a ). We excluded retrospective studies because of their limitations with respect to outcome standardization and variable definitions.

We followed the Cochrane Effective Practice and Organisation of Care (EPOC) Group definitions for the eligible study designs ( EPOC 2016 ).

We applied no restrictions on language or date of publication.

Types of data

We extracted information about monitoring processes as well as evaluations of the comparison and advantages/disadvantages of different monitoring approaches. We included data from published and unpublished studies, and grey literature, that compared different monitoring strategies (e.g. standard monitoring versus a risk‐based approach).

Study characteristics of interest were:

  • monitoring interventions;
  • risk assessment characteristics;
  • rates of serious/critical audit findings;
  • impact on participant recruitment and follow‐up; and
  • resource use (costs).
Types of methods

We included studies that compared:

  • a risk‐based monitoring strategy versus an intensive on‐site monitoring strategy for prospective intervention studies; or
  • any other prospective comparison of monitoring strategies for intervention studies.

Types of outcome measures

Specific outcome measures were not part of the eligibility criteria.

Primary outcomes

  • Combined outcome of critical and major monitoring findings in prospective intervention studies. Different error domains of critical and major monitoring findings were combined in the primary outcome measure (eligibility violations, informed‐consent violations, findings that raise doubt about the accuracy or credibility of key trial data and deviations of intervention from the trial protocol, errors in endpoint assessment, and errors in SAE reporting).
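A participant‐level composite outcome of this kind can be represented as a simple "any finding in any domain" flag. The sketch below uses hypothetical records; the domain names follow the list above:

```python
# Error domains combined into the composite primary outcome.
ERROR_DOMAINS = ("eligibility", "informed_consent", "data_integrity",
                 "endpoint_assessment", "sae_reporting")

def composite_finding(participant_findings):
    """True if the participant has a critical/major finding in any error domain.

    participant_findings: dict mapping domain name -> bool (finding present).
    Missing domains are treated as having no finding.
    """
    return any(participant_findings.get(domain, False)
               for domain in ERROR_DOMAINS)

# Hypothetical participant records.
clean = {"eligibility": False, "sae_reporting": False}
flagged = {"eligibility": False, "sae_reporting": True}
```

A study's primary analysis would then compare the proportion of participants with `composite_finding(...) == True` between monitoring arms.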

Critical and major findings were defined according to the classification of GCP findings described in  EMA 2017  , as follows.

  • Critical findings: conditions, practices, or processes that adversely affected the rights, safety, or well‐being of the study participants or the quality and integrity of data. Observations classified as critical may have included a pattern of deviations classified either as major, or bad quality of the data or absence of source documents (or both). Manipulation and intentional misrepresentation of data was included in this group.
  • Major findings: conditions, practices, or processes that might adversely affect either the rights, safety, or well‐being of the study participants or the quality and integrity of data (or both). Major observations are serious deficiencies and are direct violations of GCP principles. Observations classified as major may have included a pattern of deviations or numerous minor observations (or both).

Our protocol stated definitions of combined outcomes of critical and major findings in the respective studies ( Table 6 ) ( Klatte 2019 ).

ART: antiretroviral therapy; CTU: clinical trials unit; GCP: good clinical practice; IRB: institutional review board; SAE: serious adverse event; TSM: trial supply management.

Secondary outcomes

  • major eligibility violations;
  • major informed‐consent violations;
  • findings that raised doubt about the accuracy or credibility of key trial data and deviations of intervention from the trial protocol (with impact on patient safety or data validity);
  • errors in endpoint assessment; and
  • errors in SAE reporting.
  • Impact of the monitoring strategy on participant recruitment and follow‐up.
  • Effect of the monitoring strategy on resource use (costs).
  • Qualitative research data or process evaluations of the monitoring interventions.

Search methods for identification of studies

Electronic searches

We conducted a comprehensive search (May 2019) using a search strategy that we developed together with an experienced scientific information specialist (HE). We systematically searched the Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, and Embase via Elsevier for relevant published literature (the PubMed strategy is shown below; all searches are given in full in Appendix 1 ). The search strategy for all three databases was peer‐reviewed according to PRESS guidelines ( McGowan 2016 ) by the Cochrane information specialist, Irma Klerings (Cochrane Austria). We also searched the online SWAT repository (go.qub.ac.uk/SWAT-SWAR). We applied no restrictions regarding language or date of publication. Since our original search for the review took place in May 2019, we performed an updated search in March 2021 to ensure that we included all eligible studies up to that date. Our updated search identified no additional eligible studies.

We used the following terms to identify prospective studies that compared different strategies for trial monitoring:

  • triggered monitoring;
  • targeted monitoring;
  • risk‐adapted monitoring;
  • risk adapted monitoring;
  • risk‐based monitoring;
  • risk based monitoring;
  • centralized monitoring;
  • centralised monitoring;
  • statistical monitoring;
  • on site monitoring;
  • on‐site monitoring;
  • monitoring strategy;
  • monitoring method;
  • monitoring technique;
  • trial monitoring; and
  • central monitoring.

The search was intended to identify randomized trials and non‐randomized intervention studies that evaluated monitoring strategies in a prospective setting. Therefore, we modified the Cochrane sensitivity‐maximizing filter for randomized trials ( Lefebvre 2011 ).

PubMed search strategy:

(“on site monitoring”[tiab] OR “on‐site monitoring”[tiab] OR “monitoring strategy”[tiab] OR “monitoring method”[tiab] OR “monitoring technique”[tiab] OR “triggered monitoring”[tiab] OR “targeted monitoring”[tiab] OR “risk‐adapted monitoring”[tiab] OR “risk adapted monitoring”[tiab] OR “risk‐based monitoring”[tiab] OR “risk based monitoring”[tiab] OR “risk proportionate”[tiab] OR “centralized monitoring”[tiab] OR “centralised monitoring”[tiab] OR “statistical monitoring”[tiab] OR “central monitoring”[tiab]) AND (“prospective”[tiab] OR “prospectively”[tiab] OR randomized controlled trial [pt] OR controlled clinical trial [pt] OR randomized [tiab] OR placebo [tiab] OR drug therapy [sh] OR randomly [tiab] OR trial [tiab] OR groups [tiab]) NOT (animals [mh] NOT humans[mh])

Searching other resources

We handsearched reference lists of included studies and similar systematic reviews to find additional relevant study articles ( Horsley 2011 ). In addition, we searched the grey literature ( Appendix 2 ) (i.e. conference proceedings of the Society for Clinical Trials and the International Clinical Trials Methodology Conference), and trial registries (ClinicalTrials.gov, the World Health Organization International Clinical Trials Registry Platform, the European Union Drug Regulating Authorities Clinical Trials Database, and ISRCTN) for ongoing or unpublished prospective studies. Finally, we collaborated closely with researchers of already identified eligible studies (e.g. OPTIMON, ADAMON, INSIGHT START, and MONITORING) and contacted researchers to identify further studies (and unpublished data, if available).

Data collection and analysis methods were based on the recommendations described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ) and Methodological Expectations for the Conduct of Cochrane Intervention Reviews ( Higgins 2016 ).

Selection of studies

After elimination of duplicate records, two review authors (KK and PA) independently screened titles and abstracts for eligibility. We retrieved potentially relevant studies as full‐text reports and two review authors (KK and MB) independently assessed these for eligibility, applying prespecified criteria (see:  Criteria for considering studies for this review ). We resolved any disagreements between review authors by discussion until consensus was reached, or by involving a third review author (CPM). We documented the study selection process in a flow diagram, as described in the PRISMA statement ( Moher 2009 ).

Data extraction and management

For each eligible study, two review authors (KK and MMB) independently extracted information on a number of key characteristics, using electronic data collection forms ( Appendix 3 ). Data were extracted in EPPI‐Reviewer 4 ( Thomas 2010 ). We resolved any disagreements by discussion until consensus was reached, or by involving a third review author (MB). We contacted authors of included studies directly when target information was unreported or unclear, to clarify or complete extracted data. We summarized the data qualitatively and quantitatively (where possible) in the  Results  section, below. If meta‐analysis of the primary or secondary outcomes was not applicable due to considerable methodological heterogeneity between studies, we reported the results qualitatively only.

Extracted study characteristics included the following.

  • General information about the study: title, authors, year of publication, language, country, funding sources.
  • Methods: study design, allocation method, study duration, stratification of sites (stratified on risk level, country, projected enrolment, etc.).
  • Host trial characteristics:
      • design (randomized or other prospective intervention trial);
      • setting (primary care, tertiary care, community, etc.);
      • national or multinational;
      • study population;
      • total number of sites randomized/analyzed;
      • inclusion/exclusion criteria;
      • IMP (investigational medicinal product) risk category;
      • support from a clinical trials unit (CTU) or clinical research organization for the host trial, or evidence for an experienced research team; and
      • trial phase.
  • Monitoring intervention characteristics:
      • number of sites randomized/allocated to groups (specifying number of sites or clusters);
      • duration of intervention period;
      • risk assessment characteristics (follow‐up questions)/triggers or thresholds that induce on‐site monitoring (follow‐up questions);
      • frequency of monitoring visits;
      • extent of on‐site monitoring;
      • frequency of central monitoring reports;
      • number of monitoring visits per participant;
      • cumulative monitoring time on‐site;
      • mean number of monitoring visits per site;
      • delivery (procedures used for central monitoring; structure/components of on‐site monitoring; triggers/thresholds);
      • who performed the monitoring (study team, trial staff; qualifications of monitors);
      • degree of SDV (median number of participants undergoing SDV); and
      • co‐interventions (site‐ or study‐specific co‐interventions).
  • Outcomes: primary and secondary outcomes, individual components of the combined primary outcome, outcome measures and scales, time points of measurement, statistical analysis of outcome data.
  • Data to assess the risk of bias of included studies (e.g. random sequence generation, allocation concealment, blinding of outcome assessors, performance bias, selective reporting, or other sources of bias).

Assessment of risk of bias in included studies

Two review authors (KK and MMB) independently assessed the risk of bias in each included study using the criteria described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ) and by the Cochrane EPOC Review Group ( EPOC 2017 ). The domains provided by these criteria were evaluated for all included randomized studies and assigned ratings of low, high, or unclear risk of bias. We assessed non‐randomized studies separately, using the ROBINS‐I tool for assessing risk of bias in non‐randomized studies ( Higgins 2020 , Chapter 25).

We assessed the risk of bias for randomized studies as follows.

Selection bias

Generation of the allocation sequence.

  • If sequence generation was truly random (e.g. computer generated): low risk.
  • If sequence generation was not specified and we were unable to obtain relevant information from study authors: unclear risk.
  • If there was a quasi‐random sequence generation (e.g. alternation): high risk.
  • Non‐randomized trials: high risk.

Concealment of the allocation sequence (steps taken prior to the assignment of intervention to ensure that knowledge of the allocation was not possible)

  • If opaque, sequentially numbered envelopes were used or central randomization was performed by a third party: low risk.
  • If the allocation concealment was not specified and we were unable to ascertain whether the allocation concealment had been protected before and until assignment: unclear risk.
  • Non‐randomized trials and studies that used inadequate allocation concealment: high risk.

For non‐randomized studies, we further assessed whether investigators attempted to balance groups by design (to control selection bias) and to control for confounding. Such studies were rated at high risk according to the Cochrane risk of bias tool, but we took these bias‐control efforts into account when judging the certainty of the evidence according to GRADE.

Performance bias

It was not practicable to blind participating sites and monitors to the intervention to which they were assigned, because of the procedural differences between monitoring strategies.

Detection bias (blinding of the outcome assessor)

  • If the assessors performing audits had knowledge of the intervention and thus outcomes were not assessed blindly: high risk.
  • If we could not ascertain whether assessors were blinded and study authors did not provide information to clarify: unclear risk.
  • If outcomes were assessed blindly: low risk.

Attrition bias

We did not expect missing data for our primary outcome (i.e. the rates of serious/critical audit findings at the end of the host clinical trials): because missing participants were not audited, missing data could not affect the proportion of critical findings. However, missing data for participant and site accrual could affect the statistical power of the individual study outcomes; this is discussed below ( Discussion ).

Selective reporting bias

We investigated whether all outcomes mentioned in available study protocols, registry entries, or methodology sections of study publications were reported in results sections.

  • If all outcomes in the methodology or outcomes specified in the study protocol were not reported in the results, or if outcomes reported in the results were not listed in the methodology or in the protocol: high risk.
  • If outcomes were only partly reported in the results, or if an obvious outcome was not mentioned in the study: high risk.
  • If information was unavailable on the prespecified outcomes and the study protocol: unclear risk.
  • If all outcomes were listed in the protocol/methodology section and reported in the results: low risk.

Other potential sources of bias

  • If there were one or more important risks of bias (e.g. flawed study design): high risk.
  • If there was incomplete information regarding a problem that may have led to bias: unclear risk.
  • If there was no evidence of other sources of bias: low risk.

We assessed the risk of bias for non‐randomized studies as follows.

Pre‐intervention domains

  • Confounding – baseline confounding occurs when one or more prognostic variables (factors that predict the outcome of interest) also predict the intervention received at baseline.
  • Selection bias (bias in selection of participants into the study) – when exclusion of some eligible participants, or the initial follow‐up time of some participants, or some outcome events, is related to both intervention and outcome, there will be an association between interventions and outcome even if the effect of interest is truly null.

At‐intervention domain

  • Information bias – bias in classification of interventions, i.e. bias introduced by either differential or non‐differential misclassification of intervention status.

Post‐intervention domains

  • Bias due to deviations from intended interventions – bias that arises when there are systematic differences between experimental intervention and comparator groups in the care provided, which represent a deviation from the intended intervention(s).
  • Selection bias – bias due to exclusion of participants with missing information about intervention status or other variables such as confounders.
  • Information bias – bias introduced by either differential or non‐differential errors in measurement of outcome data.
  • Reporting bias – bias in selection of the reported result.

Measures of the effect of the methods

We conducted a comparative analysis of the impact of different risk‐based monitoring strategies on data quality and on measures of patient rights and safety, for example the proportion of critical findings.

If meta‐analysis was appropriate, we analyzed dichotomous data using a risk ratio with a 95% confidence interval (CI). We analyzed continuous data using mean differences with a 95% CI if the measurement scale was the same. If the scale was different, we used standardized mean differences with 95% CIs.
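As a concrete illustration of the dichotomous effect measure described above, a risk ratio and its 95% CI can be computed on the log scale. This is a minimal sketch with hypothetical counts, not the review's analysis code (the actual analyses were run in Review Manager 5):

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio with a 95% CI, using the standard large-sample
    approximation on the log scale.
    (Zero cells would need a continuity correction; omitted here.)"""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR)
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical example: 30/100 participants with critical findings under
# one monitoring strategy versus 40/100 under the comparator.
rr, lo, hi = risk_ratio_ci(30, 100, 40, 100)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

A CI that crosses 1 (as here) would be consistent with no difference between strategies.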

Unit of analysis issues

Included studies could differ in outcomes chosen to assess the effects of the respective monitoring strategy. Critical/serious audit findings could be reported on a participant level, per finding event, or per site. Furthermore, components of the primary endpoints could vary between studies. We specified the study outcomes as defined in the study protocols or reports, and only meta‐analyzed outcomes that were based on similar definitions. In addition, we compared individual components of the primary outcome if these were consistently defined across studies (e.g. eligibility violations).

Cluster randomized trials were reported separately from individually randomized trials. We reported the baseline comparability of clusters and considered statistical adjustment to reduce any potential imbalance. We estimated the intracluster correlation coefficient (ICC), as described by  Higgins 2020 , using information from the study (if available) or an external estimate from a similar study. We then conducted sensitivity analyses to explore variation in ICC values.
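For context, the ICC is typically applied through the "design effect" to deflate a cluster trial's nominal sample size before meta‐analysis. The sketch below is illustrative only: the participant and site counts echo the scale of the START Monitoring Substudy (4371 participants, 196 sites), but the ICC of 0.05 is an assumed value, not an estimate from any included study:

```python
def design_effect(avg_cluster_size, icc):
    """Design effect for cluster-randomized data: the variance inflation
    relative to individual randomization."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n_participants, avg_cluster_size, icc):
    """Shrink the nominal sample size by the design effect before
    entering a cluster trial into a meta-analysis."""
    return n_participants / design_effect(avg_cluster_size, icc)

# Illustrative numbers: ~22.3 participants per site, assumed ICC of 0.05.
deff = design_effect(22.3, 0.05)
ess = effective_sample_size(4371, 22.3, 0.05)
print(deff, ess)
```

Even a modest ICC roughly halves the effective sample size here, which is why ignoring clustering overstates precision.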

Dealing with missing data

We contacted authors of included studies in an attempt to obtain unpublished data or additional information of value for this review ( Young 2011 ). Where a study had been registered and a relevant outcome was specified in the study protocol but no results were reported, we contacted the authors and sponsors to request study reports. We created a table to summarize the results for each outcome. We narratively explored the potential impact of missing data in our  Discussion .

Assessment of heterogeneity

When we identified methodological heterogeneity, we did not pool results in a meta‐analysis. Instead, we qualitatively synthesized results by grouping studies with similar designs and interventions, and described existing methodological heterogeneity (e.g. use of different methods to assess outcomes). If study characteristics, methodology, and outcomes were sufficiently similar across studies, we quantitatively pooled results in a meta‐analysis and assessed heterogeneity by visually inspecting forest plots of included studies (location of point estimates and the degree to which CIs overlapped), and by considering the results of the Chi 2 test for heterogeneity and the I 2 statistic. We followed the guidance outlined in  Higgins 2020  to quantify statistical heterogeneity using the I 2 statistic:

  • 0% to 40%: might not be important;
  • 30% to 60%: may represent moderate heterogeneity;
  • 50% to 90%: may represent substantial heterogeneity; and
  • 75% to 100%: may represent considerable heterogeneity.

The importance of the observed value of the I 2 statistic depends on the magnitude and direction of effects, and the strength of evidence for heterogeneity (e.g. P value from the Chi 2 test, or a credibility interval for the I 2 statistic). If our I 2 value indicated that heterogeneity was a possibility and either the Tau 2 was greater than zero, or the P value for the Chi 2 test was low (less than 0.10), heterogeneity may have been due to a factor other than chance.
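The Chi² (Cochran's Q) test and the I² statistic referred to above can be sketched as follows. This is an illustrative computation on hypothetical effect estimates, not the review's analysis code:

```python
def i_squared(estimates, variances):
    """Cochran's Q and the I^2 statistic from per-study effect estimates
    (e.g. log risk ratios) and their variances, using inverse-variance
    weights. I^2 is the percentage of variability in effect estimates
    that is due to heterogeneity rather than chance."""
    w = [1 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    # I^2 = (Q - df) / Q, truncated at zero
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log risk ratios from three studies with equal variances.
q, i2 = i_squared([0.0, 1.0, 2.0], [0.1, 0.1, 0.1])
print(q, i2)
```

With these hypothetical inputs I² falls in the "considerable heterogeneity" band, which under the approach above would argue against pooling.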

Possible sources of heterogeneity from the characteristics of host trials included:

  • trial phase;
  • support from a CTU or clinical research organization for host trial or evidence for an experienced research team; and
  • study population.

Possible sources of heterogeneity from the characteristics of methodology studies included:

  • study design;
  • components of outcome;
  • method of outcome assessment;
  • level of outcome (participant/site); and
  • classification of monitoring findings.

Due to the high heterogeneity of studies, we used the random‐effects method ( DerSimonian 1986 ), which incorporates an assumption that the different studies are estimating different, yet related, intervention effects. As described in Section 9.4.3.1 of the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ), the method is based on the inverse‐variance approach, adjusting the study weights according to the extent of variation, or heterogeneity, among the intervention effects. This choice was also motivated by the small number of studies included in the meta‐analyses and the high between‐study variability in the numbers of participants or sites analyzed. The DerSimonian and Laird method estimates the amount of variation across studies by comparing each study's result with an inverse‐variance fixed‐effect meta‐analysis result, which yields a more appropriate weighting of the included studies according to the extent of variation.
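A minimal sketch of the DerSimonian and Laird computation described above, assuming per‐study effect estimates (e.g. log risk ratios) and their variances; illustrative only, since the actual analyses were run in Review Manager 5:

```python
import math

def dersimonian_laird(estimates, variances, z=1.96):
    """Random-effects pooling with the DerSimonian-Laird estimate of the
    between-study variance (Tau^2), derived by comparing each study with
    the inverse-variance fixed-effect result."""
    w = [1 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - z * se, pooled + z * se, tau2

# Hypothetical log risk ratios from three heterogeneous studies.
pooled, lo, hi, tau2 = dersimonian_laird([0.0, 1.0, 2.0], [0.1, 0.1, 0.1])
print(pooled, lo, hi, tau2)
```

Note how a positive Tau² widens the confidence interval and evens out the study weights relative to a fixed‐effect analysis.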

Assessment of reporting biases

To decrease the risk of publication bias affecting the findings of the review, we applied various search approaches using different resources. These included grey literature searching and checking reference lists (see  Search methods for identification of studies ). If 10 or more studies had been available for a meta‐analysis, we would have created a funnel plot to investigate whether reporting bias may have existed, unless all studies were of a similar size. If we had noticed asymmetry, we would not have concluded that reporting biases necessarily existed; rather, we would have considered the sample sizes and the presence (and possible influence) of outliers, discussed potential explanations such as publication bias or poor methodological quality of included studies, and performed sensitivity analyses.

Data synthesis

Data were synthesized using tables to compare different monitoring strategies. We also reported results by different study designs. This was accompanied by a descriptive summary in the  Results  section. We used Review Manager 5 to conduct our statistical analysis and undertake meta‐analysis, where appropriate ( Review Manager 2014 ).

If meta‐analysis of the primary or secondary outcomes was not possible, we reported the results qualitatively.

Two review authors (KK and MB) assessed the quality of the evidence. Based on the methods described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ) and GRADE ( Guyatt 2013a ;  Guyatt 2013b ), we created summary of findings tables for the main comparisons of the review. We presented all primary and secondary outcomes outlined in the  Types of outcome measures  section. We described the study settings and number of sites addressing each outcome. For each assumed risk of bias cited, we provided a source and rationale, and we implemented the GRADE system to assess the quality of the evidence using GRADEpro GDT software or the GRADEpro GDT app ( GRADEpro GDT ). If meta‐analysis was not appropriate or the units of analysis could not be compared, we presented results in a narrative summary of findings table. In this case, the imprecision of the evidence was an issue of concern due to the lack of a quantitative effect measure.

Subgroup analysis and investigation of heterogeneity

If visual inspection of the forest plots, Chi 2 test, I 2 statistic, and Tau 2 statistic indicated that statistical heterogeneity might be present, we carried out exploratory subgroup analysis. A subgroup analysis was deemed appropriate if the included studies satisfied criteria assessing the credibility of subgroup analyses ( Oxman 1992 ;  Sun 2010 ).

Our a priori subgroup was monitoring strategies using very similar approaches and consistent outcomes.

Sensitivity analysis

We conducted sensitivity analyses restricted to:

  • peer‐reviewed and published studies only (i.e. excluding unpublished studies); and
  • studies at low risk of bias only (i.e. excluding non‐randomized studies and randomized trials without allocation concealment;  Assessment of risk of bias in included studies ).

Description of studies

See: Characteristics of included studies and Characteristics of excluded studies tables.

Results of the search

See  Figure 1  (flow diagram).


Study flow diagram.

Our search of CENTRAL, PubMed, and Embase yielded 3103 citations after removal of duplicates; two additional citations were identified through reference lists of relevant articles, giving 3105 unique citations. After screening titles and abstracts, we sought the full texts of 51 records to confirm inclusion or clarify uncertainties regarding eligibility. Eight studies (14 articles) were eligible for inclusion. The results of six of these were published as full papers ( Brosteanu 2017b ;  Fougerou‐Leurent 2019 ;  Liènard 2006 ;  Mealer 2013 ;  Stenning 2018b ;  Wyman 2020 ), one study was published as an abstract only ( Knott 2015 ), and one study was submitted for publication ( Journot 2017 ). We did not identify any ongoing eligible studies or studies awaiting classification.

Included studies

Seven of the eight included studies were government or charity funded. The other was industry funded ( Liènard 2006  ). The primary objectives were heterogeneous and included non‐inferiority evaluations of overall monitoring performance as well as single elements of monitoring (SDV, initiation visit); see  Characteristics of included studies  table and  Table 7 .

ARDS network: Acute Respiratory Distress Syndrome network; ART: antiretroviral therapy; ChiLDReN: Childhood Liver Disease Research Network; CRF: case report form; CTU: clinical trials unit; GCP: good clinical practice; IQR: interquartile range; min: minute; MRC: Medical Research Council; SAE: serious adverse event; SD: standard deviation; SDV: source data verification.

Overall, there were five groups of comparisons:

  • risk‐based monitoring guided by an initial risk assessment and information from central monitoring during study conduct versus extensive on‐site monitoring (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 );
  • central monitoring with triggered on‐site visits versus regular (untriggered) on‐site visits ( Knott 2015 ; TEMPER:  Stenning 2018b );
  • central statistical monitoring and local monitoring at sites with annual on‐site visits (untriggered) versus central statistical monitoring and local monitoring at sites only (START‐MV:  Wyman 2020 );
  • 100% on‐site SDV versus remote SDV ( Mealer 2013 ) or targeted SDV (MONITORING:  Fougerou‐Leurent 2019 ); and
  • on‐site initiation visit versus no on‐site initiation visit ( Liènard 2006 ).

Since there was substantial heterogeneity in the investigated monitoring strategies and applied study designs, a short overview of each included study is provided below.

General characteristics of individual included studies

1. Risk‐based versus extensive on‐site monitoring

The ADAMON study was a cluster randomized non‐inferiority trial comparing risk‐adapted monitoring with extensive on‐site monitoring at 213 sites participating in 11 international and national clinical trials (all in secondary or tertiary care and with adults and children as participants) ( Brosteanu 2017b ). It included only randomized, multicenter clinical trials (at least six trial sites) with a non‐commercial sponsor, standard operating procedures (SOPs) for data management and trial supervision, and central monitoring of at least basic extent. The prior risk analysis categorized trials into two of three possible risk categories, and trials were monitored according to a prespecified monitoring plan for their respective risk category. While the risk‐adapted monitoring plan (RAMP) for the highest risk category was only marginally less extensive than full on‐site monitoring, risk‐based monitoring strategies for the lower risk categories relied on information from central monitoring and previous visits to determine the amount of on‐site monitoring. This resulted in a marked reduction of on‐site monitoring for sites without noticeable problems, limited to key data monitoring (20% to 50%). Only studies classified as either intermediate or low risk based on the trial‐specific risk analysis ( Brosteanu 2009 ) were included in the study. From the 11 clinical trials, 156 sites were audited by ADAMON‐trained auditors and included in the final analysis. The analysis included a meta‐analysis of results obtained within each trial.

The OPTIMON study was a cluster randomized non‐inferiority trial evaluating a risk‐based monitoring strategy within 22 national and international multicenter studies ( Journot 2017 ). The 22 trials included 15 randomized trials, four cohort studies, and three cross‐sectional studies in the secondary care setting with adults, children, and older people as participants. All trials involved methodology and management centers or CTUs, had at least two years of experience in multicenter clinical research studies, and SOPs in place. A total of 83 sites were randomized to one of two different monitoring strategies. The risk‐based monitoring approach consisted of an initial risk assessment with four outcome levels (low, moderate, substantial, and high) and a standardized monitoring plan, where on‐site monitoring increased with the risk level of the trial ( Journot 2011 ). The study aimed to assess whether such a risk‐adapted monitoring strategy provided results similar to those of the 100% on‐site strategy on the main study quality criteria, and, at the same time, improved other aspects such as timeliness and costs ( Journot 2017 ). Only 759 participants from 68 sites were included in the final analysis, because of insufficient recruitment at 15 of the 83 randomized sites. The difference between strategies was evaluated by the proportion of participants without remaining major non‐conformities in all of the four assessed error domains (consent violation, SAE reporting violation, eligibility violation, and errors in primary endpoint assessment) assessed after trial monitoring by the OPTIMON team. The overall comparison of strategies was estimated using a generalized estimating equation (GEE) model, adjusted for risk level and intra‐site, intra‐patient correlation common to all sites.

2. Central monitoring with triggered on‐site visits versus regular (untriggered) on‐site visits

Knott 2015  was a monitoring study embedded in a large international multicenter trial evaluating the ability of central statistical monitoring procedures to identify sites with problems. Monitoring findings at sites visited as a result of central statistical monitoring procedures were compared with monitoring findings at sites chosen by regional co‐ordinating centers. Oversight of the clinical multicenter trial was supported by central statistical monitoring, which identified high‐scoring sites as priorities for further investigation and triggered a targeted on‐site visit. To compare targeted on‐site visits with regular on‐site visits, high‐scoring sites were visited along with some low‐scoring sites in the same countries that the country teams had identified as potentially problematic. The decision about which of the low‐scoring sites would benefit most from an on‐site visit was based on the regional co‐ordinating centers' prior experience with the site. Twenty‐one sites (12 identified by central statistical monitoring, nine others as comparators) received a comprehensive monitoring visit from a senior monitor, and the numbers of major and minor findings were compared between the two types of visits (targeted versus regular).

The TEMPER study ( Stenning 2018b ) was conducted in three ongoing phase III randomized multicenter oncology trials with 156 UK sites ( Diaz‐Montana 2019a ). All three included trials were in secondary care settings, were conducted and monitored by the MRC CTU at University College London, were sponsored by the UK MRC, and employed a triggered monitoring strategy. The study used a matched‐pair design to assess the ability of targeted monitoring to distinguish sites at which higher and lower rates of protocol or GCP violations (or both) would be found during site visits. The targeted monitoring strategy was based on trial data that were scrutinized centrally, with prespecified triggers provoking an on‐site visit when certain thresholds had been crossed. To compare this approach with standard on‐site monitoring, a matching algorithm proposed untriggered sites to visit by minimizing differences in 1. number of participants and 2. time since first participant randomized, and by maximizing the difference in trigger score. Monitoring data from 42 matched paired visits (84 visits) at 63 sites were included in the analysis of the TEMPER study. The monitoring strategy was assessed over all trial phases, and the outcome was assessed by comparing the proportion of sites with one or more major or critical findings not already identified through central monitoring or a previous visit ('new' findings). The prognostic value of individual triggers was also assessed.

3. Central and local monitoring with annual on‐site visits versus central and local monitoring only

The START Monitoring Substudy was conducted within one large international, publicly funded randomized clinical trial (START – Strategic Timing of AntiRetroviral Treatment) ( Wyman 2020 ). The monitoring substudy included 4371 adults from 196 secondary care sites in 34 countries. All clinical sites were associated with one of four INSIGHT co‐ordinating centers, and central monitoring by the statistical center was done continuously using central databases. In addition, local monitoring of regulatory files, SDV, and study drug management was performed by site staff semi‐annually. In the monitoring substudy, sites were randomized to receive annual on‐site monitoring in addition to central and local monitoring, or central and local monitoring alone. The composite monitoring outcome consisted of eligibility violations, informed consent violations, intervention violations (use of antiretroviral therapy as initial treatment not permitted by protocol), and errors in primary endpoint and SAE reporting. In the analysis, a generalized estimating equation model with fixed effects was used to account for clustering, and each component of the composite outcome was evaluated to interpret the relevance of the overall composite result.

4. Traditional 100% source data verification versus remote or targeted source data verification

Mealer 2013  was a pilot study of remote SDV in two national clinical trial networks, in which study participants were randomized to either remote SDV followed by on‐site verification or traditional on‐site SDV. Thirty‐two participants in randomized and other prospective clinical intervention trials within the adult trials network and the pediatric network were included in this monitoring study. A sample of participants in this secondary and tertiary care setting who were due for an upcoming monitoring visit that included full SDV were randomized, stratified by individual hospital. The five study sites had different health information technology infrastructures, resulting in different approaches to enabling remote access and remote data monitoring. Only participants randomized to remote SDV had a previsit remote SDV performed prior to full SDV at the scheduled visit. Remote SDV was performed by validating the data elements captured on CRFs submitted to the co‐ordinating center, using the same data verification protocols as during on‐site visits; remote monitors had telephone access to the local co‐ordinators. The primary outcome was the proportion of data values identified versus not identified for each monitoring strategy. As an additional economic outcome, the total time required for the study monitor to verify a case report form item with either remote or on‐site monitoring was analyzed.

The MONITORING study was a prospective cross‐over study comparing full SDV, where 100% of data was verified for all participants, with targeted SDV, where only key data were verified for all participants ( Fougerou‐Leurent 2019 ). Data from 126 participants from one multinational and five national clinical trials managed by the Clinical Investigation Center at the Rennes University Hospital INSERM in France were included in the analysis. These studies included five randomized trials and one non‐comparative pilot single‐center phase II study taking place in either tertiary or secondary care units. Key data verified by the targeted SDV included informed consent, inclusion and exclusion criteria, main prognostic variables at inclusion, primary endpoint, and SAEs. The same CRFs were analyzed with full or targeted SDV, and SDV under both strategies was followed by the same data‐management program, which detected missing data and checked consistency. The strategies were compared on final data quality, global workload, and staffing costs. The databases produced by full SDV and targeted SDV after the data‐management process were compared, and identified discrepancies were considered as errors remaining with targeted monitoring.

5. Systematic on‐site initiation visit versus on‐site initiation visit upon request

Liènard 2006 was a monitoring study within a large international randomized trial of cancer treatment. A total of 573 participants from 135 centers in France were randomized at the center level to receive an on‐site initiation visit for the study or no initiation visit. The study was terminated early because the sponsor decided to redirect on‐site monitoring visits to centers in which a problem had been identified; at termination, 68 secondary care centers, stratified by center type (private versus public hospital), had entered at least one participant into the study. The aim of this monitoring study was to assess the impact of on‐site initiation visits on the following outcomes: participant recruitment, quantity and quality of data submitted to the trial co‐ordinating office, and participants' follow‐up time. On‐site initiation visits by monitors included review of the protocol, inclusion and exclusion criteria, safety issues, randomization procedure, CRF completion, study planning, and drug management. Investigators requesting on‐site visits were visited regardless of the allocated randomized group, and results were analyzed by randomized group.

Characteristics of the monitoring strategies

There was substantial heterogeneity in the characteristics of the evaluated monitoring strategies.  Table 7  summarizes the main components of the evaluated strategies.

Central monitoring components within the monitoring strategies

Use of central monitoring to trigger/adjust on‐site monitoring

Central monitoring plays an important role in the implementation of risk‐based monitoring strategies. An evaluation of site performance through continuous analysis of data quality can be used to direct on‐site monitoring to specific sites or support remote monitoring methods. In several trials, a reduction in on‐site monitoring was accompanied by central monitoring, which also enabled additional on‐site intervention at specific sites in cases of poor performance related to data quality, completeness, or patient rights and safety. Six included studies used central monitoring methods to support their new monitoring strategy (ADAMON: Brosteanu 2017b ; OPTIMON: Journot 2017 ; Knott 2015 ; Mealer 2013 ; TEMPER: Stenning 2018b ; START Monitoring Substudy: Wyman 2020 ). Four of these studies used central monitoring information to trigger or direct on‐site monitoring. In the ADAMON study, part of the monitoring plan for the lower‐ and medium‐risk studies comprised a regular assessment of the trial sites as 'with' or 'without noticeable problems' ( Brosteanu 2017b ). Classification as a site 'with noticeable problems' resulted in an increased number of on‐site visits per year. In the OPTIMON study, major problems (patient rights and safety, quality of results, regulatory aspects) triggered an additional on‐site visit for level B and C sites, or a first on‐site visit for level A sites ( Journot 2017 ). All entered data were checked for completeness and consistency for all participants at all sites ( OPTIMON study protocol 2008 ). The TEMPER study evaluated prespecified triggers for all sites in order to direct on‐site visits to sites with a high trigger score ( Stenning 2018b ). A trigger data report based on database exports was generated and used in the trigger meeting to guide the prioritization of triggered sites. Triggers were 'fired' when an inequality rule that reflected a certain threshold of data non‐conformities was evaluated as 'true'.
Each trigger had an associated weight specifying its importance relative to other triggers, resulting in a trigger score for each site that was evaluated in trigger meetings and guided the prioritization of on‐site visits ( Diaz‐Montana 2019a ). In Knott 2015 , central statistical monitoring was applied to all sites of the multicenter international trial and identified high‐scoring sites as priorities for further investigation. Scoring was applied every six months, and at a subsequent meeting the central statistical monitoring group, comprising the chief investigator, chief statistician, junior statistician, and head of trial monitoring, assessed high‐scoring sites and discussed trigger adjustments. Fired triggers resulted in a score of one, and high‐scoring sites were chosen for a monitoring visit in the triggered intervention group.
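The weighted trigger‐scoring mechanism described above can be sketched in a few lines. The trigger rules, thresholds, and weights below are hypothetical illustrations only; the actual trigger sets used in TEMPER and Knott 2015 are defined in the cited publications.

```python
# Minimal sketch of weighted trigger scoring, as described for TEMPER.
# All trigger names, thresholds, and weights are HYPOTHETICAL examples.

def site_trigger_score(metrics, triggers):
    """Sum the weights of all triggers whose inequality rule fires (is True)."""
    return sum(weight for rule, weight in triggers if rule(metrics))

# Each hypothetical trigger pairs an inequality rule with a relative weight.
triggers = [
    (lambda m: m["missing_data_pct"] > 5.0, 2.0),    # data completeness
    (lambda m: m["overdue_sae_reports"] >= 1, 3.0),  # safety reporting
    (lambda m: m["query_response_days"] > 30, 1.0),  # site responsiveness
]

# A hypothetical site report: the first and third triggers fire here.
site = {"missing_data_pct": 7.2, "overdue_sae_reports": 0, "query_response_days": 45}
score = site_trigger_score(site, triggers)  # 2.0 + 1.0 = 3.0
```

Sites would then be ranked by score in a trigger meeting to prioritize on‐site visits, mirroring the process reported for the TEMPER trigger meetings.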

Use of central monitoring and remote monitoring to support on‐site monitoring

In the ADAMON study, central monitoring activities included statistical monitoring with multivariate analysis, structured telephone interviews, and review of site status in terms of participant numbers (number of included participants, number lost to follow‐up, screening failures, etc.) ( Brosteanu 2017b ). In the OPTIMON study, computerized controls were applied to data entered for all participants at all investigation sites to check their completeness and consistency ( Journot 2017 ). Following these controls, the clinical research associate sent the investigator requests for clarification or correction of any inconsistent data. Regular contact was maintained by telephone, fax, or e‐mail with the key people at the trial site to ensure that procedures were observed, and a report was compiled in the form of a standardized contact form.

Use of central monitoring without on‐site monitoring

In the START Monitoring Substudy, central monitoring was performed by the statistical center using data in the central database on a continuous basis ( Wyman 2020 ). Reports summarizing the reviewed data were provided to all sites and site investigators and were updated regularly (daily, weekly, or monthly). Sites and staff from the statistical center and co‐ordinating centers also reviewed data summarizing each site's performance every six months and provided quantitative feedback to clinical sites on study performance. These reviews focused on participant retention, data quality, timeliness, and completeness of START Monitoring Substudy endpoint documentation, and adherence to local monitoring requirements. In addition, trained nurses at the statistical center reviewed specific adverse events and unscheduled hospitalizations for possible misclassification of primary START clinical events. Tertiary data, for example, laboratory values, were also reviewed by central monitoring ( Hullsiek 2015 ).

Use of central monitoring for source data verification

In the  Mealer 2013  pilot study, remote SDV validated the data elements captured on CRFs submitted to the co‐ordinating center. Data collection instruments for capturing study variables were developed and remote access for the study monitor was set up to allow secure online access to electronic records. The same data verification protocols were used as during on‐site visits and remote monitors had telephone access to local co‐ordinators.

Initial risk assessment

An initial risk assessment of trials was performed in the ADAMON ( Brosteanu 2017b ) and OPTIMON ( Journot 2017 ) studies. The risk assessment scale (RAS) used in the OPTIMON study was evaluated in a validity and reproducibility study, the Pre‐OPTIMON study, and was performed in three steps leading to four different risk categories that imply different monitoring plans. The first step related to the risk of the studied intervention in terms of product authorization, invasiveness of the surgical technique, CE marking class, and invasiveness of other interventions, which led to a temporary classification in the second step. In the third step, the risk of mortality based on the procedures of the intervention and the vulnerability of the study population were additionally taken into consideration and could lead to an increase in risk level. The risk analysis used in the ADAMON study also had three steps. The first step involved an assessment of the risk associated with the therapeutic intervention compared to the standard of care. The second step was based on the presence of at least one of a list of risk indicators for the participant or the trial results. In the third step, the robustness of trial procedures (reliable and easy‐to‐assess primary endpoint, simple trial procedures) was evaluated. The risk analysis resulted in one of three risk categories, each entailing different basic on‐site monitoring measures.

Excluded studies

We excluded 37 studies after full‐text screening ( Characteristics of excluded studies table): 21 studies did not compare different monitoring strategies and 16 were not prospective studies.

Risk of bias in included studies

Risk of bias in the included studies is summarized in  Figure 2  and  Figure 3 . We assessed all studies for risk of bias following the criteria described in the Cochrane Handbook for Systematic Reviews of Interventions for randomized trials ( Higgins 2020 ). In addition, we used the ROBINS‐I tool for the three non‐randomized studies ( Fougerou‐Leurent 2019 ;  Knott 2015 ;  Stenning 2018b ; results shown in  Appendix 4 ).

Figure 2. Risk of bias graph: review authors' judgments about each risk of bias item presented as percentages across all included studies.

Figure 3. Risk of bias summary: review authors' judgments about each risk of bias item for each included study.

Group allocation was random and concealed in four of the eight studies, which were at low risk of selection bias ( Brosteanu 2017b ; Journot 2017 ; Liènard 2006 ; Wyman 2020 ). Three were non‐randomized studies; two evaluated triggered monitoring (matched comparator design), where randomization was not practicable due to the dynamic process of the monitoring intervention ( Knott 2015 ; Stenning 2018b ), and the other used a prospective cross‐over design (the same CRFs were analyzed with full or targeted SDV) ( Fougerou‐Leurent 2019 ). Since we could not identify an increased risk of bias for the prospective cross‐over design (intervention applied to the same participant data), we rated the study at low risk of selection bias. Although the original investigators attempted to balance groups and to control for confounding in the TEMPER study ( Stenning 2018b ), we rated the design at high risk of bias according to the criteria described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ). One study randomly assigned participant‐level data without any information about allocation concealment (unclear risk of bias) ( Mealer 2013 ).

In six studies, investigators, site staff, and data collectors of the trials were not informed about the monitoring strategy applied ( Brosteanu 2017b ; Journot 2017 ; Knott 2015 ; Liènard 2006 ; Stenning 2018b ; Wyman 2020 ). However, blinding of monitors was not practicable in these six studies and thus we judged them at high risk of bias. In two studies, blinding of site staff was difficult because the monitoring interventions involved active participation of trial staff (high risk of bias) ( Fougerou‐Leurent 2019 ; Mealer 2013 ). It is unclear whether data management was blinded in these two studies.

Detection bias

Although monitoring could usually not be blinded due to the methodologic and procedural differences in the interventions, three studies performed a blinded outcome assessment (low risk of bias). In ADAMON, the audit teams verifying the monitoring outcomes of the two monitoring interventions were not informed of the sites' monitoring strategy and did not have access to any monitoring reports ( Brosteanu 2017b ). Audit findings were reviewed in a blinded manner by members of the ADAMON team and discussed with auditors, as necessary, to ensure that reporting was consistent with the ADAMON audit manuals ( ADAMON study protocol 2008 ). In OPTIMON, the main outcome was validated by a blinded validation committee ( Journot 2017 ). In TEMPER, the lack of blinding of monitoring staff was mitigated by consistent training on the trials and monitoring methods, the use of a common finding grading system, and independent review of all major and critical findings which was blind to visit type ( Stenning 2018b ). The other five studies provided no information on blinded outcome assessment or blinding of statistical center staff (unclear risk of bias) ( Fougerou‐Leurent 2019 ;  Knott 2015 ;  Liènard 2006 ;  Mealer 2013 ;  Wyman 2020 ).

Incomplete outcome data

All eight included studies were at low risk of attrition bias ( Brosteanu 2017b ; Fougerou‐Leurent 2019 ; Journot 2017 ; Knott 2015 ; Liènard 2006 ; Mealer 2013 ; Stenning 2018b ; Wyman 2020 ). However, ADAMON reported that "… one site refused the audit, and in the last five audited trials, 29 sites with less than three patients were not audited due to limited resources, in large sites (>45 patients), only a centrally preselected random sample of patients was audited. Arms are not fully balanced in numbers of patients audited (755 extensive on‐site monitoring and 863 risk‐adapted monitoring) overall" ( Brosteanu 2017b ). Another study was terminated prematurely due to slow participant recruitment, but the number of centers that randomized participants was equal in both groups (low risk of bias) ( Liènard 2006 ).

Selective reporting

A design publication was available for one study (START Monitoring Substudy [two publications]  Hullsiek 2015 ;  Wyman 2020 ) and three studies published a protocol (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 ; TEMPER:  Stenning 2018b ). Three of these studies reported on all outcomes described in the protocol or design paper in their publications ( Brosteanu 2017b ;  Stenning 2018b ;  Wyman 2020 ), and one study has not been published as a full report yet, but provided outcomes stated in the protocol in the available conference presentation ( Journot 2017 ). One study has only been published as an abstract to date ( Knott 2015 ), but results of the prespecified outcomes were communicated to us by the study authors. For the three remaining studies, there were no protocol or registry entries available but the outcomes listed in the methods sections of their publications were all reported in the results and discussion sections (MONITORING:  Fougerou‐Leurent 2019 ;  Liènard 2006 ;  Mealer 2013 ).

There was an additional potential source of bias for one study (MONITORING: Fougerou‐Leurent 2019 ). If the clinical research assistant spotted false or missing non‐key data when checking key data, he or she may have corrected the non‐key data in the CRF. This potential bias may have led to an underestimate of the difference between the two monitoring strategies. The full SDV CRF was assumed to be error‐free.

Effect of methods

In order to summarize the results of the eight included studies, we grouped them according to their intervention comparisons and their outcomes.

Primary outcome

Combined outcome of critical and major monitoring findings

Five studies reported a combined monitoring outcome with four to six underlying error domains (e.g. eligibility violations): three randomized (ADAMON: Brosteanu 2017b ; OPTIMON: Journot 2017 ; START Monitoring Substudy: Wyman 2020 ) and two matched‐pair (TEMPER: Stenning 2018b ; Knott 2015 ). The ADAMON and OPTIMON studies defined findings as protocol and GCP violations that were not corrected or identified by the randomized monitoring strategy. The START Monitoring Substudy directly compared findings identified by the randomized monitoring strategies without a subsequent evaluation of remaining findings not corrected by the monitoring intervention. The classification into different severities of findings comprised different categories in three included studies that used different denominations (non‐conformity/major non‐conformity [ Journot 2017 ]; minor/major/critical [ Brosteanu 2017b ; Stenning 2018b ]), but these were consistent in the assessment of severity with regard to participants' rights and safety or the validity of study results. Only findings classified as major or critical (or both) were included in the primary comparison of monitoring strategies in the ADAMON and OPTIMON studies. The START Monitoring Substudy only assessed major violations, which constitute the highest severity of findings with regard to participants' rights and safety or the validity of study results. All three of these studies defined monitoring findings for the most critical aspects in the domains of consent violations, eligibility violations, SAE reporting violations, and errors in endpoint assessment. Since the START Monitoring Substudy focused on only one trial, its descriptions of critical aspects are very trial specific compared to the broader range of critical aspects considered in ADAMON and OPTIMON with a combined monitoring outcome. Critical and major findings are defined according to the classification of GCP findings described in EMA 2017 .
For detailed information about the classification of monitoring findings in the included studies, see the Additional tables.

1. Risk‐based monitoring versus extensive on‐site monitoring

ADAMON and OPTIMON evaluated the primary outcome as the remaining combined major and critical findings not corrected by the randomized monitoring strategy. Pooling the results of ADAMON and OPTIMON for the proportion of trial participants with at least one major or critical outcome not corrected by the monitoring intervention resulted in a risk ratio of 1.03 with a 95% CI of 0.80 to 1.33 (below 1.0 would be in favor of the risk‐based strategy; Analysis 1.1 ; Figure 4 ). However, START Monitoring evaluated the primary outcome of combined major and critical findings as a direct comparison of monitoring findings during trial conduct, and the comparison of monitoring strategies differed from the one assessed in ADAMON and OPTIMON. Therefore, we did not include START Monitoring in the pooled analysis, but reported its results separately below.
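To illustrate how such a pooled risk ratio is obtained, the sketch below applies the Mantel‐Haenszel method to two 2×2 tables. The per‐arm counts are hypothetical placeholders, since the raw counts underlying Analysis 1.1 are not reported in this excerpt.

```python
# Mantel-Haenszel pooled risk ratio sketch. The study counts below are
# HYPOTHETICAL; the review's actual per-arm counts appear in Analysis 1.1.
# Each study: ((events, total) in risk-based arm, (events, total) in on-site arm)
studies = [
    ((120, 200), (130, 210)),  # hypothetical "ADAMON-like" counts
    ((40, 100), (34, 100)),    # hypothetical "OPTIMON-like" counts
]

# MH weights: each study contributes a*n2/N to the numerator and c*n1/N
# to the denominator, where N is the study's total sample size.
num = sum(a * n2 / (n1 + n2) for (a, n1), (c, n2) in studies)
den = sum(c * n1 / (n1 + n2) for (a, n1), (c, n2) in studies)
rr_mh = num / den  # pooled risk ratio; < 1.0 would favor risk-based monitoring
```

With these placeholder counts the pooled estimate lands near 1.0, the pattern of "no clear difference" that the actual pooled analysis reported.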

Figure 4. Forest plot of comparison: 1 Risk‐based versus on‐site monitoring – combined primary outcome, outcome: 1.1 Combined outcome of critical and major monitoring findings.

Comparison 1: Risk‐based versus on‐site monitoring – combined primary outcome, Outcome 1: Combined outcome of critical and major monitoring findings

In the ADAMON study, 59.2% of participants in the risk‐based monitoring intervention group had at least one major finding not corrected by the randomized monitoring strategy, compared to 64.2% of participants in the 100% on‐site group ( Brosteanu 2017b ). The analysis of the composite monitoring outcome in the ADAMON study, using a random‐effects model estimated with logistic regression and with sites as random effects accounting for clustering, resulted in evidence of non‐inferiority (point estimates near zero on the logit scale and all two‐sided 95% CIs clearly excluding the prespecified tolerance limit) ( Brosteanu 2017a ).

The OPTIMON study reported the proportions of participants without major monitoring findings ( Journot 2017 ). When considering the proportions of participants with major monitoring findings, 40% of participants in the risk‐adapted monitoring intervention group had a monitoring outcome not identified by the randomized monitoring strategy compared to 34% in the 100% on‐site group. Analysis of the composite primary outcome via the generalized estimating equation (GEE) logistic model resulted in an estimated relative difference between strategies of 8% in favor of the 100% on‐site strategy. Since the upper one‐sided confidence limit of this difference was 22%, non‐inferiority with the set non‐inferiority margin of 11% could not be demonstrated.

Two studies used a matched comparator design ( Knott 2015 ; Stenning 2018b ). In these new strategies, on‐site visits were triggered when prespecified trigger thresholds were exceeded. The studies reported the number of triggered sites versus the number of control sites with at least one monitoring finding.

We pooled these two studies for the primary combined outcome of major and critical monitoring findings including all error domains ( Analysis 3.1 ; Figure 5 ) and also after excluding re‐consent findings for the TEMPER study ( Analysis 4.1 ; Figure 6 ). Including re‐consent findings gave a risk ratio of 1.83 (95% CI 0.51 to 6.55) and excluding the error domain "re‐consent" gave a risk ratio of 2.04 (95% CI 0.77 to 5.38), both in favor of the triggered monitoring intervention. These results provide some evidence that the trigger process was effective in guiding on‐site monitoring, but the differences were not statistically significant.

Figure 5. Forest plot of comparison: 3 Triggered versus untriggered on‐site monitoring, outcome: 3.1 Sites with one or more major monitoring finding, combined outcome.

Figure 6. Forest plot of comparison: 4 Sensitivity analysis of the comparison: triggered versus untriggered on‐site monitoring (sensitivity outcome TEMPER), outcome: 4.1 Sites with one or more major monitoring finding, excluding re‐consent.

Comparison 3: Triggered versus untriggered on‐site monitoring, Outcome 1: Sites ≥ 1 major monitoring finding combined outcome

Comparison 4: Sensitivity analysis of the comparison: triggered versus untriggered on‐site monitoring (sensitivity outcome TEMPER), Outcome 1: Sites ≥ 1 major monitoring finding excluding re‐consent

In the study conducted by Knott and colleagues, 21 sites (12 identified by central statistical monitoring, nine others as comparators) received an on‐site visit and 11 of 12 identified by central statistical monitoring had one or more major or critical monitoring finding (92%), while only two of nine comparator sites (22%) had a monitoring finding ( Knott 2015 ). Therefore, the difference in proportions of sites with at least one major or critical monitoring finding was 70%. Minor findings indicative of 'sloppy practice' were identified at 10 of 12 sites in the triggered group and in two of nine in the comparator group. At one site identified by central statistical monitoring, there were serious findings indicative of an underperforming site. These results suggest that information from central statistical monitoring can help focus the nature of on‐site visits and any interventions required to improve site quality.

The TEMPER study identified 37 of 42 (88.1%) triggered sites with one or more major or critical finding not already identified through central monitoring or a previous visit, and 34 of 42 (81.0%) matched untriggered sites with one or more major or critical finding (difference 7.1%, 95% CI –8.3% to 22.5%; P = 0.365) ( Stenning 2018b ). More than 70% of on‐site findings related to issues in recording informed consent, and 70% of these to re‐consent. When re‐consent findings were excluded, the proportions changed to 85.7% for triggered sites and 59.5% for untriggered sites (difference 26.2%, 95% CI 8.0% to 44.4%; P = 0.007). Thus, triggered monitoring in the TEMPER study did not satisfactorily distinguish sites with higher and lower levels of concerning on‐site monitoring findings, but the prespecified sensitivity analysis excluding re‐consent findings demonstrated a clear difference in event rate. There was greater consistency between trials in the sensitivity and secondary analyses. In addition, there was some evidence that the trigger process could identify sites at increased risk of serious concern: around twice as many triggered visits had one or more critical finding in the primary and sensitivity analyses.
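The reported TEMPER primary comparison can be reproduced from the site counts above. The interval method (an unpooled normal approximation for a difference in proportions) is our assumption about how the CI was derived, but it matches the published interval of –8.3% to 22.5%.

```python
# Reproducing the TEMPER primary comparison from the reported counts:
# 37/42 triggered vs 34/42 untriggered sites with >= 1 major/critical finding.
# The CI method (unpooled normal approximation) is an assumption, but it
# recovers the published 95% CI of -8.3% to 22.5%.
from math import sqrt

n = 42
p_trig, p_untrig = 37 / n, 34 / n
diff = p_trig - p_untrig  # difference in proportions, ~7.1 percentage points

# Unpooled standard error of the difference between two proportions.
se = sqrt(p_trig * (1 - p_trig) / n + p_untrig * (1 - p_untrig) / n)
lo, hi = diff - 1.96 * se, diff + 1.96 * se  # ~(-8.3%, 22.5%)
```

The wide interval spanning zero reflects why the primary analysis, unlike the sensitivity analysis excluding re‐consent findings, did not distinguish triggered from untriggered sites.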

The START Monitoring study ( Wyman 2020 ), with 196 sites in a single large international trial, reported a higher proportion of participants with a monitoring finding detected in the on‐site monitoring group (6.4%) compared to the group with only central and local monitoring (3.8%), resulting in an odds ratio (OR) of 1.7 (95% CI 1.1 to 2.7; P = 0.03) ( Wyman Engen 2020 ). However, it is not clearly reported whether the findings within the groups were identified on‐site (on‐site visit or local monitoring) or by central monitoring, and it was not verified whether central and local monitoring alone could have detected the violations or discrepancies within sites randomized to the intervention group. In addition, relatively few monitoring findings that would have impacted START results were identified by on‐site monitoring (no findings of participants who were inadequately consented, and no findings of data alteration or fraud).

The two studies of targeted (MONITORING: Fougerou‐Leurent 2019 ) and remote ( Mealer 2013 ) SDV reported findings only related to source documents. Different components of source data were assessed, including consent verification as well as key data, but findings were reported only as a combined outcome. Both studies identified minimal relative differences in the parameters assessing the effectiveness of these methods compared with full SDV. Both studies assessed SDV only as the process of double‐checking that the same piece of information was recorded in the study database and in the source documents. Processes often referred to as Source Data Review, which confirm that trial conduct complies with the protocol and GCP and ensure that appropriate regulatory requirements have been followed, were not included as study outcomes.

In the prospective cross‐over MONITORING study, comparison of the full SDV and targeted SDV databases after the data‐management process identified an overall error rate of 1.47% (95% CI 1.41% to 1.53%) and an error rate of 0.78% (95% CI 0.65% to 0.91%) on key data ( Fougerou‐Leurent 2019 ). The majority of these discrepancies, considered as the remaining errors with targeted monitoring, were observed on baseline prognostic variables. The researchers further assessed the impact of the two monitoring strategies on data‐management workload. While the overall number of queries was larger with targeted SDV, there was no statistical difference for queries related to key data (13 [standard deviation (SD) 16] versus 5 [SD 6]; P = 0.15), and targeted SDV generated fewer corrections on key data in the data‐management step. Considering the increased workload for data management, at least in the early setup phase of a targeted SDV strategy, monitoring and data management should be viewed as a whole in terms of efficacy.

The pilot study conducted by Mealer and colleagues assessed the feasibility of remote SDV in two clinical trial networks ( Mealer 2013 ). The accuracy and completeness of remote versus on‐site SDV was determined by analyzing the number of data values that were identical or different in the source data, or missing or unknown after remote SDV, reconciled against all data values identified via subsequent on‐site monitoring. The percentage of data values that could not be identified or were missed via remote access was compared with direct on‐site monitoring in another group of participants. In the adult network, only 0.47% (95% CI 0.03% to 0.79%) of all data values assigned to monitoring could not be correctly identified via remote monitoring, and in the ChiLDReN network, all data values were correctly identified. In comparison, three data values could not be identified in the on‐site‐only group (0.13%, 95% CI 0.03% to 0.37%). In summary, 99.5% of all data values were correctly identified via remote monitoring. Information on the difference in monitoring findings between the two SDV methods was not reported in the publication. The study showed that remote SDV was feasible despite marked differences in remote access and remote chart review policies and technologies.

5. On‐site initiation visit versus no on‐site initiation visit

There were no data on critical and major findings in  Liènard 2006 .

Individual components of the primary outcome

Individual components of the primary outcome were considered in the included studies as follows.

In the ADAMON study, there was non‐inferiority for all five error domain components of the combined primary outcome: informed consent process, patient eligibility, intervention, endpoint assessment, and SAE reporting ( Brosteanu 2017a ). In the OPTIMON study, the largest difference between monitoring strategies was observed for findings related to eligibility violations (12% of participants with a major non‐conformity in the eligibility error domain in the risk‐adapted group versus 6% of participants in the extensive on‐site group), while remaining findings related to informed consent were higher in the extensive on‐site monitoring group (7% of participants with a major non‐conformity in the informed consent error domain in the risk‐adapted group versus 10% of participants in the extensive on‐site group). In the OPTIMON study, the consent form signature was checked remotely using a modified consent form and a validated specific procedure in the risk‐adapted strategy ( Journot 2013 ). To summarize the domain‐specific monitoring outcomes of the ADAMON and OPTIMON studies, we analyzed the results of both studies within the four common error domains ( Analysis 2.1 , including unpublished results from OPTIMON). Pooling the results of the four common error domains (informed consent process, patient eligibility, endpoint assessment, and SAE reporting) resulted in a risk ratio of 0.95 (95% CI 0.81 to 1.13) in favor of the risk‐based monitoring intervention ( Figure 7 ).

Figure 7. Forest plot of comparison: 2 Risk‐based versus on‐site monitoring – error domains of major findings, outcome: 2.1 Combined outcome of major or critical findings in four error domains.

Comparison 2: Risk‐based versus on‐site monitoring – error domains of major findings, Outcome 1: Combined outcome of critical and major findings in 4 error domains

In TEMPER, informed consent violations were more frequently identified by a full on‐site monitoring strategy ( Stenning 2018b ). During the study, but prior to the first analysis, the TEMPER Endpoint Review Committee recommended a sensitivity analysis excluding all findings related to re‐consent, because these typically concerned minor changes in the adverse effect profile that could have been conveyed without requiring re‐consent. Excluding re‐consent findings to evaluate the ability of the applied triggers to identify sites at higher risk for critical on‐site findings resulted in a significant difference of 26.2% (95% CI 8.0% to 44.4%; P = 0.007). Excluding all consent findings also resulted in a significant difference of 23.8% (95% CI 3.3% to 44.4%; P = 0.027).

There were no data on individual components of critical and major findings in  Knott 2015 .

In the START Monitoring Substudy, informed consent violations accounted for most of the primary monitoring outcomes in each group (41 [1.8%] participants in the no on‐site group versus 56 [2.7%] participants in the on‐site group), with an OR of 1.3 (95% CI 0.6 to 2.7; P = 0.46) ( Wyman 2020 ). The most common consent violation was a missing copy of the most recently signed consent signature page, and surveillance of these consent violations by on‐site monitors varied. Within the START Monitoring Substudy, the investigators had to modify the primary outcome component for consent violations prior to the outcome assessment in February 2016 because documentation and ascertainment of consent violations were not consistent across sites. These inconsistencies and variation between sites could have influenced the results of this primary outcome component. In addition, follow‐up on consent violations by the co‐ordinating centers identified no individuals who had not been properly consented. The largest relative difference was for findings related to eligibility (1 [0.04%] participant in the no on‐site group versus 12 [0.6%] participants in the on‐site group; OR 12.2, 95% CI 1.8 to 85.2; P = 0.01), but 38% of eligibility violations were first identified by site staff. In addition, a relative difference was reported for SAE reporting (OR 2.0, 95% CI 1.1 to 3.7; P = 0.02), while the differences for the error domains primary endpoint reporting (OR 1.5, 95% CI 0.7 to 3.0; P = 0.27) and protocol violation of prescribing initial antiretroviral therapy not permitted by START (OR 1.4, 95% CI 0.6 to 3.4; P = 0.47), as well as for the informed consent domain, were small.

There were no data on individual components of critical and major findings in MONITORING ( Fougerou‐Leurent 2019 ) or  Mealer 2013 .

There were no data on individual components of critical and major findings in  Liènard 2006 .

Impact of the monitoring strategy on participant recruitment and follow‐up

Only two included studies reported participant recruitment and follow‐up as an outcome for the evaluation of different monitoring strategies ( Liènard 2006 ; START Monitoring Substudy:  Wyman 2020 ).

Liènard 2006  assessed the impact of their monitoring approaches on participant recruitment and follow‐up in their primary outcomes. Centers were randomized to receive an on‐site initiation visit by monitors or no visit. There was no statistically significant difference in the number of recruited participants between the two groups (302 participants in the on‐site group versus 271 in the no on‐site group), and monitoring visits had no impact on recruitment categories (poor, average, good, and excellent). About 80% of participants were recruited in only 30 of 135 centers, and almost 62% in the 17 'excellent recruiters'. The duration of follow‐up at the time of analysis did not differ significantly between the randomized groups. However, the proportion of participants with no follow‐up at all was larger in the visited group than in the non‐visited group (82% in the on‐site group versus 70% in the no on‐site group).

Within the START Monitoring Substudy, central monitoring reports included tracking of losses to follow‐up ( Wyman 2020 ). Losses to follow‐up were similar between groups (proportion of participants lost to follow‐up: 7.1% in the on‐site group versus 8.6% in the no on‐site group; OR 0.8, 95% CI 0.5 to 1.1), and a similar percentage of study visits were missed by participants in each monitoring group (8.6% in the on‐site group versus 7.8% in the no on‐site group).

Effect of monitoring strategies on resource use (costs)

Five studies provided data on resource use.

The ADAMON study reported that, with extensive on‐site monitoring, the number of monitoring visits per participant and the cumulative on‐site monitoring time were higher than with risk‐adapted monitoring by factors of 2.1 (monitoring visits) and 2.7 (cumulative monitoring time) (ratios of the efforts calculated within each trial and summarized with the geometric mean) ( Brosteanu 2017b ). This difference was more pronounced in the lowest risk category, where monitoring visits per participant increased by a factor of 3.5 and cumulative on‐site monitoring time by a factor of 5.2. In the medium‐risk category, the number of monitoring visits per participant was higher by a factor of 1.8 and the cumulative on‐site monitoring time by a factor of 2.1 for the extensive on‐site group compared to the risk‐based monitoring group.
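The geometric mean used by ADAMON to summarize per‐trial effort ratios can be sketched as follows; the ratios below are invented for illustration and are not the ADAMON data:

```python
import math

def geometric_mean(ratios):
    """Geometric mean: a natural summary for per-trial effort ratios,
    since it treats a doubling and a halving symmetrically."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-trial ratios of extensive versus risk-adapted
# monitoring effort (invented numbers):
visit_ratios = [1.8, 2.0, 2.3, 2.4]
print(f"geometric mean ratio: {geometric_mean(visit_ratios):.2f}")
```

Summarizing ratios with the arithmetic mean would overweight trials with large ratios, which is why the geometric mean is the conventional choice here.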

In the OPTIMON study, travel costs were calculated based on distance, and on‐site visits were assumed to require two days for one monitor, resulting in monitoring costs of EUR 180 per visit ( Journot 2017 ). The costs were higher by a factor of 2.7 for the 100% on‐site strategy when considering travel costs only, and by a factor of 3.4 when considering travel and monitor costs.

There were no data on resource use from TEMPER ( Stenning 2018b ) or  Knott 2015 .

In the START Monitoring Substudy, the economic consequence of adding on‐site monitoring to local and central monitoring was assessed by the person‐hours that on‐site monitors and co‐ordinating centers spent performing on‐site monitoring‐related activities, estimated at 16,599 person‐hours ( Wyman 2020 ). With a salary allocation of USD 75 per hour for on‐site monitors, this equated to USD 1,244,925. With the addition of USD 790,467 in international travel costs allocated for START monitoring, a total of USD 2,035,392 was attributed to on‐site monitoring. Note, however, that there were four additional for‐cause visits in the on‐site group and six for‐cause visits in the no on‐site group.
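The reported totals follow directly from the stated person‐hours and rates; a quick arithmetic check:

```python
# Reproducing the cost arithmetic reported for the START Monitoring
# Substudy (all figures taken from the text above):
person_hours = 16_599
hourly_rate_usd = 75
labor_cost = person_hours * hourly_rate_usd   # salary allocation for monitors
travel_cost = 790_467                         # international travel costs
total = labor_cost + travel_cost              # total attributed to on-site monitoring
print(f"labor: USD {labor_cost:,}; total: USD {total:,}")
```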

For the MONITORING study, economic data were assessed in terms of time spent on SDV and data management with each strategy ( Fougerou‐Leurent 2019 ). A query was estimated to take 20 minutes for a data manager and 10 minutes for the clinical study co‐ordinator to handle. Across the six studies, the clinical research associate devoted 140 hours to targeted SDV versus 317 hours to full SDV. However, targeted SDV generated 587 additional queries across studies, ranging from less than one (0.3) to more than eight additional queries per participant, depending on the study. In terms of time spent on these queries, based on an estimate of 30 minutes for handling a single query, the additional queries from targeted SDV resulted in 294 hours of extra time spent (mean 2.4 [SD 1.7] hours per participant).

For the cost analysis, hourly costs were estimated at EUR 33.00 for a clinical research associate and EUR 30.50 each for a data manager and a clinical study co‐ordinator. Based on these estimates, the targeted SDV strategy saved EUR 5841 on monitoring but added EUR 8922 linked to the queries, resulting in a net extra cost of EUR 3081.
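The net extra cost follows from the stated hours and rates; a quick arithmetic check (the EUR 8922 query cost is taken as reported rather than rederived, since the per‐query cost split involves rounding):

```python
# Reproducing the cost arithmetic reported for the MONITORING study
# (figures taken from the text above):
cra_rate_eur = 33.00
sdv_hours_saved = 317 - 140                        # CRA hours, full vs targeted SDV
monitoring_saving = sdv_hours_saved * cra_rate_eur # monitoring cost saved
query_cost = 8922                                  # reported cost of extra queries
net_extra_cost = query_cost - monitoring_saving    # net cost of targeted SDV
print(f"saving EUR {monitoring_saving:.0f}, net extra cost EUR {net_extra_cost:.0f}")
```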

The study on remote SDV by  Mealer 2013  compared only the time consumed per data item and per case report form for the two included networks. Although the difference per data item between the two strategies was small (less than 30 seconds), more time was spent with remote SDV. However, this study did not consider travel time for monitors, and delayed access and increased response times in communication with study co‐ordinators affected the overall time spent. The authors proposed SOPs for prescheduling times to review questions by telephone and the introduction of a single electronic health record.

For both of the newly introduced SDV monitoring strategies, growing experience with the methods would most likely translate into improved efficiency, making it difficult to estimate long‐term resource use from these initial studies. For the risk‐based strategy in the OPTIMON study, a remote pre‐enrollment check of consent forms was a good preventive measure and improved the quality of consent forms (80% of non‐conformities were identified via remote checking). In general, remote SDV monitoring may reduce the frequency of on‐site visits or influence their timing, ultimately decreasing the resources needed for on‐site monitoring.

There were no data on resource use from  Liènard 2006 .

Qualitative research data or process evaluations of the monitoring interventions

The  Mealer 2013  pilot study of traditional 100% SDV versus remote SDV provided some qualitative information. This came from an informal post‐study interview of the study monitors and site co‐ordinators. These interviews revealed a high level of satisfaction with the remote monitoring process. None of the study monitors reported any difficulty with using the different electronic access methods and data review applications.

The secondary analyses of the TEMPER study assessed the ability of individual triggers and site characteristics to predict on‐site findings by comparing the proportion of visits with the outcome of interest (at least one major/critical finding) for triggered on‐site visits with regular (untriggered) on‐site visits ( Stenning 2018b ). This analysis also considered information of potential prognostic value obtained from questionnaires completed by the trials unit and site staff prior to the monitoring visits. Trials unit teams completed 90/94 pre‐visit questionnaires. There was no clear evidence of a linear relationship between the trial team ratings and the presence of major or critical findings, whether consent findings were included or excluded (data not shown). A total of 76/94 sites provided pre‐visit site questionnaires. There was no evidence of a linear association between the chance of at least one major/critical finding and the number of active trials, either per site or per staff member (data not shown). There was, however, evidence that the greater the number of different trial roles undertaken by the research nurse, the lower the probability of major/critical findings (proportion of visits with one or more major or critical finding, excluding re‐consent findings, by number of research nurse roles (grouped): less than 3: 94%; 4: 94%; 5: 80%; 6: 48%; P < 0.001 from a Chi² test for linear trend) ( Stenning 2018b , Online Supplementary Material Table S5).
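The Chi² test for linear trend referenced above can be sketched as a Cochran‐Armitage trend test. The group sizes below are invented to roughly match the reported proportions (94%, 94%, 80%, 48%), since the actual visit counts per group are not given here:

```python
import math

def chi2_linear_trend(events, totals, scores=None):
    """Cochran-Armitage test for linear trend in proportions.

    events[i] / totals[i] is the proportion in ordered group i;
    scores default to 0, 1, 2, ...  Returns (chi2, two-sided p).
    """
    if scores is None:
        scores = list(range(len(events)))
    n = sum(totals)
    p = sum(events) / n                                  # pooled proportion
    t = sum(r * x for r, x in zip(events, scores))       # observed score sum
    sx = sum(m * x for m, x in zip(totals, scores))
    sxx = sum(m * x * x for m, x in zip(totals, scores))
    var = p * (1 - p) * (sxx - sx * sx / n)              # variance under H0
    z = (t - p * sx) / math.sqrt(var)
    return z * z, math.erfc(abs(z) / math.sqrt(2))       # normal-approx p value

# Invented group sizes giving roughly the reported proportions of
# visits with at least one major/critical finding:
chi2, pval = chi2_linear_trend(events=[17, 16, 16, 12], totals=[18, 17, 20, 25])
print(f"chi2 = {chi2:.1f}, p = {pval:.4f}")
```

With monotonically decreasing proportions like these, the test yields a small p value, consistent with the reported P < 0.001; the exact statistic depends on the unreported group sizes.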

Summary of main results

We identified eight studies that prospectively compared different monitoring interventions in clinical trials. These studies were heterogeneous in design and content, and covered different aspects of new monitoring approaches. We identified no ongoing eligible studies.

Two large studies compared risk‐based versus extensive on‐site monitoring (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 ), and the pooled results provided no evidence of inferiority of a risk‐based monitoring intervention in terms of major and critical findings, based on moderate certainty of evidence ( Table 1 ). However, a formal demonstration of non‐inferiority would require more studies.

Considering the commonly reported error domains of monitoring findings (informed consent, eligibility, endpoint assessment, SAE reporting), we found no evidence for inferiority of a risk‐based monitoring approach in any of the error domains except eligibility. However, CIs were wide. Verifying the eligibility of a participant usually requires extensive SDV, which might explain the potential difference in this error domain. We found a similar trend in the START Monitoring Substudy for the eligibility error domain. Expanding processes for remote SDV may improve the performance of monitoring strategies with a larger proportion of central and remote monitoring components. The OPTIMON study used an established process to remotely verify the informed consent process ( Journot 2013 ), which was shown to be efficient in reducing non‐conformities related to informed consent. A similar remote approach for SDV related to eligibility before randomization might improve the performance of risk‐based monitoring interventions in this domain.

In the TEMPER study ( Stenning 2018b ) and the START Monitoring Substudy ( Wyman 2020 ), most findings related to documenting the consent process. However, in the START Monitoring Substudy, there were no findings of participants whose consent process was inadequate and, in the ADAMON and the OPTIMON studies, findings in the informed consent process were lower in the risk‐adapted groups. Timely central monitoring of consent forms and eligibility documents with adequate anonymization ( Journot 2013 ) may mitigate the effects of many consent form completion errors and identify eligibility violations prior to randomization. This is also supported by the recently published further analysis of the TEMPER study ( Cragg 2021a ), which suggested that most visit findings (98%) were theoretically detectable or preventable through feasible, centralized processes, especially all the findings relating to initial informed consent forms, thereby preventing patients starting treatment if there are any issues.  Mealer 2013  assessed a remote process for SDV and found it to be feasible. Data values were reviewed to confirm eligibility and proper informed consent, to validate that all adverse events were reported, and to verify data values for primary and secondary outcomes. Almost all (99.6%) data values were correctly identified via remote monitoring at five different trial sites despite marked differences in remote access and remote chart review policies and technologies. In the MONITORING study, the number of remaining errors after targeted SDV (verified by full SDV) was very small for the overall data and even smaller for key data items ( Fougerou‐Leurent 2019 ). These results provide evidence that new concepts in the process of SDV do not necessarily lead to a decrease in data quality or endanger patient rights and safety. 
Processes involved with on‐site SDV, often referred to as source data review, which confirm that trial conduct complies with the protocol and GCP and ensure that appropriate regulatory requirements have been followed, have to be assessed separately. Evidence from retrospective studies evaluating SDV suggests that intensive SDV is often of little benefit to clinical trials, with any discrepancies found having minimal impact on the robustness of trial conclusions ( Andersen 2015 ;  Olsen 2016 ;  Tantsyura 2015 ;  Tudur Smith 2012a ).

Furthermore, we found evidence that central monitoring can guide on‐site monitoring of trial sites via triggers. The prespecified sensitivity analysis of the TEMPER results excluding re‐consent findings ( Stenning 2018b ) and the results from  Knott 2015  suggested that using triggers from a central monitoring process can identify sites at higher risk for major GCP violations. However, the triggers used in TEMPER may not have been ideal for all included trials, and some tested triggers seemed to have no prognostic value. Additional work is needed to identify more discriminatory triggers and should encompass work on key performance indicators ( Gough 2016 ) and central statistical monitoring ( Venet 2012 ). Since  Knott 2015  focused on one study only, its triggers were more trial‐specific than those used in TEMPER. Developing trial‐specific triggers may lead to even more efficient triggers for on‐site monitoring. This may help to distinguish low‐performing sites from high‐performing sites and guide monitors to the most urgent problems within an identified site. Study‐specific triggers could even provoke specific monitoring activities (e.g. staff turnover indicating additional training, or data quality issues triggering SDV activities). Central review of information across sites and time would help direct on‐site resources to targeted SDV and to activities best performed in person, for example, process review or training. We found no evidence that the addition of untriggered on‐site monitoring to central statistical monitoring, as assessed in the START Monitoring Substudy, had a major impact on trial results or on participants' rights and safety ( Wyman 2020 ).
In addition, there was no evidence that the no on‐site group was inferior in the study‐specific secondary outcomes, including the percentage of participants lost to follow‐up and timely data submission and query resolution, and the absolute number of monitoring outcomes in the START Monitoring Substudy was very low ( Wyman 2020 ). This might be due to a study‐specific definition of critical and major findings in the monitoring plan and the presence of an established central monitoring system in both intervention groups of the study.

With respect to resource use, both studies evaluating a risk‐based monitoring approach showed that considerable resources could be saved with risk‐based monitoring (by a factor of three to five;  Brosteanu 2017b ;  Journot 2017 ). However, the potential increase in resource use at the co‐ordinating centers (including data management) was not considered in any of the analyses. The START Monitoring Substudy reported more than USD 2,000,000 for on‐site monitoring, taking into account the monitoring hours as well as the international travel costs ( Wyman 2020 ). In both groups, central and local monitoring by site staff were performed to an equal extent, suggesting that there is no difference in the resources consumed by data management. The MONITORING study reported a reduction in the cost of on‐site monitoring with the targeted SDV approach, but this was offset by an increase in data management resources due to queries ( Fougerou‐Leurent 2019 ). This increase may to some degree be due to site staff's and trial monitors' inexperience with the new approach. There was no statistical difference in the number of queries related to key data between targeted SDV and full SDV. When an infrastructure for centralized monitoring and remote data checks is already established, a larger difference between resources spent on risk‐based compared to extensive on‐site monitoring would be expected. Setting up the infrastructure for automated checks, remote processes, and other data management structures, as well as training monitors and data managers on a new monitoring strategy, requires an upfront investment.

Only two studies assessed the impact of different monitoring strategies on recruitment and follow‐up. This is an important outcome for monitoring interventions because it is crucial for the successful completion of a clinical trial ( Houghton 2020 ). The START Monitoring Substudy found no significant difference in the percentage of participants lost to follow‐up between the on‐site and no on‐site groups ( Wyman 2020 ). Also, on‐site initiation visits had no effect on participant recruitment in  Liènard 2006 . Closely monitoring site performance in terms of recruitment and losses to follow‐up could enable early action to support affected sites. Secondary qualitative analyses of the TEMPER study revealed that the experience of the research nurse had an impact on the monitoring outcomes ( Stenning 2018b ). The experience of the study team and the site staff might therefore also be an important factor to consider in the risk assessment of a study or in the prioritization of on‐site visits.

Overall completeness and applicability of evidence

Although we extensively searched for eligible studies, we found only one or two studies for specific comparisons of monitoring strategies. This very limited evidence base stands in stark contrast to the number of clinical trials run each year, each of which needs to perform monitoring in some form. None of the included studies reported on all primary and secondary outcomes specified for this review, and most reported only a few. For instance, only one study reported on participant recruitment ( Liènard 2006 ), and only two studies reported on participant retention ( Liènard 2006 ;  Wyman 2020 ). Some monitoring comparisons were nested in a single clinical trial, limiting the generalizability of results (e.g.  Knott 2015 ; START Monitoring:  Wyman 2020 ). However, the OPTIMON ( Journot 2017 ) and ADAMON ( Brosteanu 2017b ) studies included multiple and heterogeneous clinical trials in their comparison of risk‐based and extensive on‐site monitoring strategies, increasing the generalizability of their results. The risk assessments of the ADAMON and OPTIMON studies differed in certain aspects ( Table 7 ), but the main concept of categorizing studies according to their evaluated risk and adapting the monitoring requirements to the risk category was very similar. The much lower number of overall monitoring findings in the START study (based on one clinical trial only) compared with OPTIMON or ADAMON (involving multiple clinical trials) suggests that the trial context is crucial with respect to monitoring findings. Violations considered in the primary outcome of the START Monitoring Substudy were tailored to issues that could impact the validity of the trial's results or the safety of study participants. A definition of the assets, i.e. the most critical aspects of a study that should be monitored closely, is often missing in extensive monitoring plans, and its absence leaves some margin of interpretation to study monitors.

The TEMPER study introduced triggers that could direct on‐site monitoring and evaluated the prognostic value of these triggers ( Stenning 2018b ). Only three of the proposed triggers showed a significant prognostic impact across all three included trials. A set of triggers or site performance measures that are promising indicators of the need for additional support across a wide range of clinical trials is yet to be determined, and trigger refinement is still ongoing. Triggers will to some degree always depend on the specific risks determined by the study procedures, management structure, and design of the study at hand. A combination of performance metrics appropriate for a large group of trials and study‐specific performance measures might be most effective. Multinational, multicenter trials might benefit the most from directing on‐site monitoring to sites that show low performance. More studies in trials with large numbers of participants and sites, and trials covering diverse geographic areas, are needed to assess the value of centralized monitoring in identifying the sites where additional support, such as training, is needed most. This would lead to a more 'needs‐oriented' approach, so that clinical routine and study processes in well‐performing sites are not unnecessarily interrupted. An overview of the progress of an ongoing trial in terms of site performance and other aspects, such as recruitment and retention, would also support the complex management processes of trial conduct in these large trials.

Since this review focused on prospective comparisons of monitoring interventions, the evidence from retrospective studies and reports from implementation studies is not included in the above results but is discussed below. We excluded retrospective studies because their data were collected before the analysis was planned, which makes standardization of the extracted data impossible, especially for our primary outcome. However, trending analyses provide valuable information on outcomes such as improved data quality, recruitment, and follow‐up compliance, and thus demonstrate the effect of monitoring approaches on the overall trial conduct and success of the study. We considered the results from retrospective studies in our discussion of monitoring strategies but also pointed out the need to establish more SWATs (studies within a trial) to prospectively compare methods with a predefined mode of analysis.

Quality of the evidence

Overall, the certainty of this body of evidence on monitoring strategies for clinical intervention studies was low or very low for most comparisons and outcomes ( Table 1 ;  Table 2 ;  Table 3 ;  Table 4 ;  Table 5 ). This was mainly due to imprecision of effect estimates because of small numbers of observations, and to indirectness because some comparisons were based on only one study nested in a single trial. The included studies varied considerably in terms of the reported outcomes, with most studies reporting only some. In addition, the risk of bias varied across studies. A risk of performance bias was attributed to six of the included studies and was unclear in two studies. Since it was difficult to blind monitors to the different monitoring interventions, an influence of the monitors' performance on the monitoring outcomes could not be excluded in these studies. Two studies were at high risk of bias because of their non‐randomized design ( Knott 2015 ; TEMPER:  Stenning 2018b ). However, since the intervention determined the selection of sites for an on‐site visit in the triggered groups, a randomized design was not practicable. In addition, the TEMPER study attempted to balance groups by design and controlled the risk of known confounding factors by using a matching algorithm. Therefore, the judgment of high risk of bias for TEMPER ( Stenning 2018b ) and  Knott 2015  remains debatable. In the START Monitoring Substudy, no independent validation of remaining findings was performed after the monitoring intervention. Therefore, it is uncertain whether central monitoring without on‐site monitoring missed any major GCP violations, and chance findings cannot be ruled out. More evidence is needed to evaluate the value of on‐site initiation visits.  Liènard 2006  found no evidence that on‐site initiation visits affected participant recruitment, or data quality in terms of timeliness of data transfer and data queries.
However, the informative value of the study was limited by its early termination and the small number of ongoing monitoring visits. In general, embedding methodology studies in clinical intervention trials provides valuable information for the improvement and adaptation of methodology guidelines and the practice of trials ( Bensaaud 2020 ;  Treweek 2018a ;  Treweek 2018b ). Whenever randomization is not practicable in a methodology substudy, the attempt to follow a 'diagnostic study design' and minimize confounding factors as much as possible can increase the generalizability and impact of the study results.

Potential biases in the review process

We screened all potentially relevant abstracts and full‐text articles independently and in duplicate, assessed the risk of bias for included studies independently and in duplicate, and extracted information from included studies independently and in duplicate. We did not calculate any agreement statistics, but all disagreements were resolved by discussion. We successfully contacted authors from all included studies for additional information. Since we were unable to extract only the outcomes of the randomized trials included in the OPTIMON study ( Journot 2015 ), we used the available data that included mainly randomized trials but also a few cohort and cross‐sectional studies. The focus of this review was on monitoring strategies for clinical intervention studies and including all studies from the OPTIMON study might introduce some bias. With regard to the pooling of study results, our judgment of heterogeneity might be debatable. The process of choosing comparator sites for triggered sites differed between the TEMPER study ( Stenning 2018b ) and  Knott 2015 . While both studies selected high scoring sites for triggered monitoring and low scoring sites as control, the TEMPER study applied a matching algorithm to identify sites that resembled the high scoring sites in certain parameters. In  Knott 2015 , comparator sites from the same countries were identified by the country teams as potentially problematic among the low scoring sites without a pairwise matching to a high scoring site. However, the principle of choosing sites for evaluation based on results from central statistical monitoring closely resembled methods used in the TEMPER study. Therefore, we decided to pool results from TEMPER and  Knott 2015 .

Agreements and disagreements with other studies or reviews

Although there are no definitive conclusions from available research comparing the effectiveness of risk‐based monitoring tools, the OECD advises clinical researchers to use them ( OECD 2013 ). They emphasized that risk‐based monitoring should become a more reactive process, in which the risk profile and performance are continuously reviewed during trial conduct and monitoring practices are modified accordingly. One systematic review of risk‐based monitoring tools for clinical trials by Hurley and colleagues summarized a variety of new risk‐based monitoring tools implemented in recent years by grouping common ideas ( Hurley 2016 ). They did not identify a standardized approach to the risk assessment process for a clinical trial in the 24 included risk‐based monitoring tools, although the process developed by TransCelerate BioPharma Inc. has been replicated by six other risk‐based monitoring tools ( TransCelerate BioPharma Inc 2014 ). Hurley and colleagues suggested that the responsiveness of a tool depends on its mode of administration (paper‐based, powered by Microsoft Excel, or operated as software as a service) and the degree of centralized monitoring involved ( Hurley 2016 ). An electronic data capture system is beneficial to the efficient performance of centralized monitoring. However, to support the reactive process of risk‐based monitoring, tools should be able to incorporate information on risks provided by the on‐site experiences of the study monitors. This is in agreement with our findings that a risk‐based monitoring tool should support both on‐site and centralized monitoring and that assessments should be continuously reviewed during study conduct. Monitoring is most efficient when integrated as part of a risk‐based quality management system, as also discussed by Buyse et al. ( Buyse 2020 ), where a focus on trial aspects that have a potentially high impact on patient safety and trial validity, and on systematic errors, is emphasized.

From the five main comparisons that we identified through our review, four have also been assessed in available retrospective studies. 

Risk‐based versus extensive on‐site monitoring: Kim and colleagues retrospectively reviewed three multicenter, investigator‐initiated trials that were monitored by a modified ADAMON method consisting of on‐site and central monitoring according to the risk of the trial ( Kim 2021 ). Central monitoring was more effective than on‐site monitoring in revealing minor errors and showed comparable results in revealing major issues such as investigational product compliance and delayed reporting of SAEs. The risk assessment evaluated by Higa and colleagues was based on the Risk Assessment Categorization Tool (RACT), originally developed by TransCelerate BioPharma Inc. ( TransCelerate BioPharma Inc 2014 ), and was continuously adapted during the study based on results of centralized monitoring in parallel with site (on‐site/off‐site) monitoring. Mean on‐site monitoring frequency decreased as the study progressed, and a Pharmaceutical and Medical Devices Agency inspection after study end found no significant non‐conformance that would have affected the study results or patient safety ( Higa 2020 ).

Central monitoring with triggered on‐site visits versus regular on‐site visits: several studies have assessed triggered monitoring approaches that depend on individual study risks in trending analyses of their effectiveness. Diani and colleagues evaluated the effectiveness of their risk‐based monitoring approach in clinical trials involving implantable cardiac medical devices ( Diani 2017 ). Their strategy included a data‐driven risk assessment methodology to target on‐site monitoring visits, and they found significant improvement in data quality related to the three risk factors most critical to the overall compliance of cardiac rhythm management, along with an improvement in the majority of measurable risk factors in the worst‐performing site quantiles. The methodology evaluated by Agrafiotis and colleagues is centered on quality by design, central monitoring, and triggered, adaptive on‐site and remote monitoring. The approach is based on a set of risk indicators that are selected and configured during the setup of each trial and are derived from various operational and clinical metrics. Scores from these indicators form the basis of an automated, data‐driven recommendation on whether to prioritize, increase, decrease, or maintain the level of monitoring intervention at each site. They assessed the trending impact of their new approach by retrospectively analyzing the change in risk level later in the trials. All 12 included trials showed a positive change in risk level, and the results were statistically significant in eight of them ( Agrafiotis 2018 ). The evaluation of a new trial management method for monitoring and managing data return rates in a multicenter phase III trial performed by Cragg and colleagues adds to the findings of increased efficiency by prioritizing sites for support ( Cragg 2019 ).
Using an automated database report to summarize the data return rate, overall and per center, enabled early notification of centers whose data return rate appeared to be falling or had crossed the predefined acceptability threshold. Concentrating on the gradual improvement of centers with persistent data return problems resulted in an increase in the overall data return rate, with return rates above 80% in all centers. These results agree with the evidence we found for the effectiveness of the triggered monitoring approaches evaluated in TEMPER ( Stenning 2018b ) and  Knott 2015 , and emphasize the need for study‐specific performance indicators. In addition, the data‐driven risk assessment implemented by  Diani 2017  highlighted key focus areas for both on‐site and centralized monitoring efforts and enabled an emphasis on site performance improvements where they were needed most. Our findings agree with retrospective assessments that focusing on the most critical aspects of a trial and guiding monitoring resources to trial sites in need of support may be an efficient way to improve overall trial conduct.

Central statistical versus on‐site monitoring: one retrospective analysis of the potential of central monitoring to completely replace on‐site monitoring performed by trial monitors showed that the majority of reviewed on‐site findings could be identified using central monitoring strategies ( Bakobaki 2012 ). One recent scoping review focused on methods used to identify sites of 'concern', at which monitoring activity may be targeted, and consequently sites 'not of concern', monitoring of which may be reduced or omitted ( Cragg 2021b ). It included all original reports describing methods for using centrally held data to assess site‐level risk in a reproducible way. Thus, in agreement with our research, it identified only one full report of a study ( Stenning 2018b ) that prospectively assessed a method's ability to target on‐site monitoring visits to the most problematic sites. However, through contacting the authors of  Knott 2015 , which is available only as an abstract, we gained more detailed information on the methodology of the study and were able to include its results in our review. In contrast to our review,  Cragg 2021b  included retrospective assessments (comparison with on‐site monitoring; effect on data quality or other trial parameters) as well as case studies, illustrations of methods on data, and assessments of methods' ability to identify simulated problem sites or known problems in real trial data. Thus, it constitutes an overview of methods introduced to the research community, and simultaneously underlines the lack of evidence for their efficacy or effectiveness.

Traditional 100% SDV versus targeted or remote SDV: in addition to these retrospective evaluations of methods to prioritize sites and the increased use of centralized monitoring methods, several studies retrospectively assessed the value and effectiveness of remote monitoring methods, including alternative SDV methods. Our findings related to a reduction of 100% on‐site SDV in  Mealer 2013  and the MONITORING study ( Fougerou‐Leurent 2019 ) are in agreement with  Tudur Smith 2012b , which assessed the value of 100% SDV in a cancer clinical trial. In their retrospective comparison of data discrepancies and comparative treatment effects obtained following 100% SDV with those based on data without SDV, the identified discrepancies for the primary outcome did not differ systematically across treatment groups or across sites and had little impact on trial results. They also suggested that focusing SDV on less‐experienced sites or sites with differing reporting characteristics of SDV‐related information (e.g. SAE reporting compared to other sites), combined with regular training, may be more efficient. Similarly, the study by Andersen and colleagues analyzed error rates of data from three randomized phase III trials monitored with complete or partial SDV that were subjected to post hoc complete SDV ( Andersen 2015 ). A comparison of partly and fully monitored trial participants showed only minor differences in variables of major importance to efficacy or safety. In agreement with these studies, the study by Embleton‐Thirsk and colleagues showed that the impact of extensive retrospective SDV and further extensive quality checks in a phase III academic‐led, international, randomized cancer trial was minimal ( Embleton‐Thirsk 2019 ). Besides the potential reduction in SDV, remote monitoring systems for full or partial SDV have become more relevant during the COVID‐19 pandemic and are currently being evaluated in various forms. 
Another recently published study assessed the effectiveness of remote risk‐based monitoring versus on‐site monitoring with 100% SDV ( Yamada 2021 ). It used a cloud‐based remote monitoring system that requires no site‐specific infrastructure, since it can be downloaded onto mobile devices as an application and works through the upload of photographs. Remote monitoring was focused on risk items that could lead to critical data and process errors, determined using the risk assessment and categorization tool developed by TransCelerate BioPharma Inc. ( TransCelerate BioPharma Inc 2014 ). Using this approach, 92.9% (95% CI 68.5% to 98.7%) of critical process errors could be detected by remote risk‐based monitoring. In a retrospective review of monitoring reports, Hirase and colleagues reported increased efficiency of monitoring and resource use with a combination of on‐site and remote monitoring using a web‐conference system ( Hirase 2016 ).
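As an aside, a detection rate of 92.9% is consistent with 13 of 14 critical process errors being detected, and the quoted interval matches a Wilson score interval for that proportion. A short reconstruction (our illustration of the arithmetic; Yamada 2021 does not state which interval method was used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - spread, centre + spread

lo, hi = wilson_ci(13, 14)
print(f"{13/14:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# → 92.9% (95% CI 68.5% to 98.7%)
```

The width of this interval illustrates how little precision a sample of 14 errors provides, which is worth keeping in mind when interpreting the headline detection rate.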

The qualitative finding in TEMPER ( Stenning 2018b ) that the experience of the research nurse had an impact on the monitoring outcomes is also reflected in the retrospective study by von Niederhäusern and colleagues, which found that one of the factors associated with lower numbers of monitoring findings was experienced site staff and concluded that the human factor was underestimated in the current risk‐based monitoring approach ( von Niederhausern 2017 ).

Implications for systematic reviews and evaluations of healthcare

We found no evidence for inferiority of a risk‐based monitoring approach compared to extensive on‐site monitoring in terms of critical and major monitoring findings. The overall certainty of the evidence for this outcome was moderate. The initial risk assessment of a study can facilitate a reduction of monitoring. However, it might be more efficient to use the outcomes of a risk assessment to guide on‐site monitoring by prioritizing sites with conspicuously low performance on critical aspects identified by the risk assessment. Some triggers used in the TEMPER study ( Stenning 2018b ) and  Knott 2015  could help identify sites that would benefit most from an on‐site monitoring visit. Trigger refinement and inclusion of more trial‐specific triggers will, however, be necessary. The development of remote access to trial documentation may further improve the impact of central triggers. Timely central monitoring of consent forms or eligibility documents, with adequate anonymization and data protection, may mitigate the effects of many formal documentation errors. More studies are needed to assess the feasibility of eligibility‐ and informed consent‐related assessment and remote contact with site teams in terms of data security and effectiveness without on‐site review of documents. The COVID‐19 pandemic has resulted in innovative monitoring approaches in the context of restricted on‐site monitoring that also include the remote monitoring of consent forms and other original records, as well as of compliance with study procedures usually verified on‐site. Whereas central data monitoring and remote monitoring of documents were formerly applied to improve efficiency, they now have to substitute for on‐site monitoring to comply with pandemic restrictions, making the monitoring methods evaluated in this review even more valuable to the research community. 
Both the Food and Drug Administration (FDA) and the European Medicines Agency have provided guidance on aspects of clinical trial conduct during the COVID‐19 pandemic, including remote site monitoring, handling informed consent in remote settings, and the importance of maintaining data integrity and the audit trail ( EMA 2021 ;  FDA 2020 ). The FDA has also adopted contemporary approaches to consent involving telephone calls or video visits in combination with a witnessed signing of the informed consent ( FDA 2020 ). Experiences with new informed consent processes and advice on how remote monitoring and centralized methods can be used to protect the safety of patients and preserve trial integrity during the pandemic have been published and provide additional support for sites and sponsors ( Izmailova 2020 ;  Love 2021 ;  McDermott 2020 ). This review may support study teams faced with pandemic‐related restrictions by providing information on evaluated methods that focus primarily on remote and centralized approaches. It will be important to provide more management support for clinical trials in the academic setting and to develop new recruitment strategies. In our review, low‐certainty evidence suggested that initiation visits or more frequent on‐site visits were not associated with increased recruitment or retention of trial participants. Consequently, trial investigators should plan for other, more trial‐specific strategies to support recruitment and retention. To what extent recruitment or retention can be improved through real‐time central monitoring remains to be evaluated. Research has emphasized the need for evidence on effective recruitment strategies ( Treweek 2018b ), and new flexible recruitment approaches initiated during the pandemic may add to this. During the COVID‐19 pandemic, both social media and digital health platforms have been leveraged in novel ways to recruit heterogeneous cohorts of participants ( Gaba 2020 ). 
In addition, the pandemic underlines the need for a study management infrastructure supported by central data monitoring and remote communication ( Shiely 2021 ). One retrospective study at the Beijing Cancer Hospital assessed the impact of their newly implemented remote management model on critical trial indicators: protocol compliance rate, rate of loss to follow‐up, rate of participant withdrawal, rates of disease progression and mortality, and detection rate of monitoring problems ( Fu 2021 ). The measures implemented after the first COVID‐19 outbreak led to significantly higher rates of protocol compliance and significantly lower rates of loss to follow‐up or withdrawal after the second outbreak compared to the first, without affecting rates of disease progression or mortality. In general, new experiences with electronic methods initiated throughout the COVID‐19 pandemic might facilitate development and even improvement of clinical trial management.

Implications for methodological research

Several new monitoring interventions were introduced in recent years. However, the evidence base gathered for this Cochrane Review is limited in terms of quantity and quality. Ideally, for each of the five identified comparisons (risk‐based versus extensive on‐site monitoring, central statistical monitoring with triggered on‐site visits versus regular [untriggered] on‐site visits, central and local monitoring with annual on‐site visits versus central and local monitoring only, traditional 100% source data verification [SDV] versus remote or targeted SDV, and on‐site initiation visit versus no on‐site initiation visit) more randomized monitoring studies nested in clinical trials and measuring effects on all outcomes specified in this review are necessary to draw more reliable conclusions. The development of triggers to guide on‐site monitoring while centrally monitoring incoming data is ongoing and different triggers might be used in different settings. In addition, more evidence on risk indicators that help to identify sites with problems or the prognostic value of triggers is needed to further optimize central monitoring strategies. Future methodological research should particularly evaluate approaches with an initial trial‐specific risk assessment followed by close central monitoring and the possibility for triggered and targeted on‐site visits during trial conduct. Outcome measures such as the impact on recruitment, retention, and site support should be emphasized in further research and the potential of central monitoring methods to support the whole study management process needs to be evaluated. Directing monitoring resources to sites with problems independent of data quality issues (recruitment, retention) could promote the role of experienced study monitors as a site support team in terms of training and advice. The overall progress in conduct and success of a trial should be considered in the evaluation of every new approach. 
The fact that most of the eligible studies identified for this review are government or charity funded suggests a need for industry‐sponsored trials to evaluate their monitoring and management approaches. This could particularly promote the development and evaluation of electronic case report form‐based centralized monitoring tools, which require substantial resources.

Protocol first published: Issue 12, 2019. Review first published: Issue 12, 2021.

Acknowledgements

We thank the monitoring team of the Department of Clinical Research at the University Hospital Basel, including Klaus Ehrlich, Petra Forst, Emilie Müller, Madeleine Vollmer, and Astrid Roesler, for sharing their experience and contributing to discussions on monitoring procedures. We would further like to thank the information specialist Irma Klerings for peer reviewing our electronic database searches.

Appendix 1. Search strategies CENTRAL, PubMed, and Embase

Cochrane Review on monitoring strategies: search strategies. Terms shown in italics differ from the strategy used in PubMed.

CENTRAL 3 May 2019: 842 hits (836 trials/6 reviews); Update 16 March 2021: 1044 hits

(monitor* NEAR/2 (site OR risk OR central*)):ti,ab OR "monitoring strategy":ti,ab OR "monitoring method":ti,ab OR "monitoring technique":ti,ab OR "triggered monitoring":ti,ab OR "targeted monitoring":ti,ab OR "risk proportionate":ti,ab OR "trial monitoring":ti,ab OR "study monitoring":ti,ab OR "statistical monitoring":ti,ab

PubMed 13 May 2019: 1697 hits; Update 16 March 2021: 2198 hits

("on site monitoring"[tiab] OR "on‐site monitoring"[tiab] OR "monitoring strategy"[tiab] OR "monitoring method"[tiab] OR "monitoring technique"[tiab] OR "triggered monitoring"[tiab] OR "targeted monitoring"[tiab] OR "risk‐adapted monitoring"[tiab] OR "risk adapted monitoring"[tiab] OR "risk‐based monitoring"[tiab] OR "risk based monitoring"[tiab] OR "risk proportionate"[tiab] OR "centralized monitoring"[tiab] OR "centralised monitoring"[tiab] OR "statistical monitoring"[tiab] OR "central monitoring"[tiab] OR “trial monitoring”[tiab] OR “study monitoring”[tiab]) AND ("Clinical Studies as Topic"[Mesh] OR (("randomized controlled trial"[pt] OR controlled clinical trial[pt] OR trial*[tiab] OR study[tiab] OR studies[tiab]) AND (conduct*[tiab] OR practice[tiab] OR manag*[tiab] OR standard*[tiab] OR harmoni*[tiab] OR method*[tiab] OR quality[tiab] OR performance[tiab])))

Embase (via Elsevier) 13 May 2019: 1245 hits; Update 16 March 2021: 1494 hits

('monitoring strategy':ti,ab OR 'monitoring method':ti,ab OR 'monitoring technique':ti,ab OR 'triggered monitoring':ti,ab OR 'targeted monitoring':ti,ab OR 'risk‐adapted monitoring':ti,ab OR 'risk adapted monitoring':ti,ab OR 'risk based monitoring'/exp OR 'risk proportionate':ti,ab OR 'trial monitoring':ti,ab OR 'study monitoring':ti,ab OR 'statistical monitoring':ti,ab OR (monitor* NEAR/2 (site OR risk OR central*)):ti,ab) AND ('clinical trial (topic)'/exp OR ((trial* OR study OR studies) NEAR/3 (conduct* OR practice OR manag* OR standard* OR harmoni* OR method* OR quality OR performance)):ti,ab)

Appendix 2. Grey literature search

(Discipline: Medicine)

  • British Library Direct Plus
  • BIOSIS databases ( www.biosis.org/ )
  • Web of Science Citation Index (Conferences)
  • Web of Science (Core Collection): Proceedings Papers, Meeting Abstracts
  • Handsearch of references in identified articles
  • WHO Registry (ICTRP portal)
  • Risk‐based Monitoring Toolbox

Appendix 3. Data collection form content

1. General Information

Name of person extracting data, report title, report ID, publication type, study funding source, possible conflicts of interest.

2. Methods and study population (trials)

Study design, study duration, design of host trials, characteristics of host trials (primary care, tertiary care, allocated …), total number of sites randomized, total number of sites included in the analysis, stratification of sites (example: stratified on risk level, country, projected enrolment, etc.), inclusion/exclusion criteria for host trials.

3. Risk of bias assessment

Random sequence generation, allocation concealment, blinding of outcome assessment, performance bias, incomplete outcome data, selective outcome reporting, other bias, validated outcome assessment – grading of findings (minor, major, critical).

4. Intervention groups

Number randomized to group, duration of intervention period, was there an initial risk assessment preceding the monitoring plan?, classification of trials/sites, risk assessment characteristics, differing monitoring plan for risk classification groups, what was the extent of on‐site monitoring in the risk‐based monitoring group?, triggers or thresholds that induced on‐site monitoring, targeted on‐site monitoring visits or visits according to the original trial's monitoring plan?, timing (frequency of monitoring visits, frequency of central/remote monitoring), number of monitoring visits per participant, cumulative monitoring time on‐site, mean number of monitoring visits per site, delivery (procedures used for central monitoring, structure/components of on‐site monitoring, triggers/thresholds), who performed the monitoring (part of study team, trial staff, qualification of monitors), degree of source data verification (median number of participants undergoing source data verification), co‐interventions (site/study‐specific co‐interventions).

5. Outcomes

Primary outcome, secondary outcomes, components of primary outcome (finding error domains), predefined level of outcome variables (major, critical, others, upgraded)?, time points measured (end of trial/during trial), factors impacting the outcome measure, person performing the outcome assessment, was outcome/tool validated?, statistical analysis of outcome data, imputation of missing data.

6. Comparison of interventions, outcome, subgroup (error domains), postintervention or change from baseline?, unit of analysis, statistical methods used and appropriateness of these methods.

7. Other information (key conclusions of study authors).

Appendix 4. Risk of bias assessment for non‐randomized studies


Data and analyses

Comparison 1, comparison 2, comparison 3, comparison 4, characteristics of studies, characteristics of included studies [ordered by study ID].

ARDS network: Acute Respiratory Distress Syndrome network; ChiLDReN: Childhood Liver Disease Research Network; CRA: clinical research associate; CRF: case report form; CTU: clinical trials unit; DM: data management; SAE: serious adverse event; SDV: source data verification.

Characteristics of excluded studies [ordered by study ID]

Differences between protocol and review.

We did not estimate the intracluster correlation and heterogeneity across sites within the ADAMON and OPTIMON studies, as planned in our review protocol (Klatte 2019), due to lack of information.

We planned in the protocol to assess the statistical heterogeneity of studies in meta‐analyses. Due to the small number of included studies per comparison, it was not reasonable to assess heterogeneity statistically.

Planned sensitivity analyses were also not performed because of the small number of included studies.

We removed characteristics of monitoring strategies from the list of secondary outcomes upon request of reviewers and included the information in the section on general characteristics of included studies. We changed the order of the secondary outcomes in an attempt to improve the logical flow of the Results section.

Contributions of authors

KK, CPM, and MB conceived the study and wrote the first draft of the protocol.

SL, MS, PB, NB, HE, PAJ, and MMB reviewed the protocol and suggested changes for improvement.

HE and KK developed the search strategy and conducted all searches.

KK, CPM, and MB screened titles and abstracts as well as full texts, and selected eligible studies.

KK and MMB extracted relevant data from included studies and assessed risk of bias.

KK conducted the statistical analyses and interpreted the results together with MB and CPM.

KK and MB assessed the certainty of the evidence according to GRADE and wrote the first draft of the review manuscript.

CPM, SL, MS, PB, NB, HE, PAJ, and MMB critically reviewed the manuscript and made suggestions for improvement.

Sources of support

Internal sources.

The Department of Clinical Research provided salaries for review contributors.

External sources

  • No sources of support provided

Declarations of interest

MS was a co‐investigator on an included study (TEMPER), but had no role in study selection, risk of bias, or certainty of evidence assessment for this review. He has no other relevant conflicts to declare.

References to studies included in this review

Brosteanu 2017b {published data only}

  • Brosteanu O, Houben P, Ihrig K, Ohmann C, Paulus U, Pfistner B, et al. Risk analysis and risk adapted on-site monitoring in noncommercial clinical trials . Clinical Trials 2009; 6 :585-96. [ PubMed ] [ Google Scholar ]
  • Brosteanu O, Schwarz G, Houben P, Paulus U, Strenge-Hesse A, Zettelmeyer U, et al. Risk-adapted monitoring is not inferior to extensive on-site monitoring: results of the ADAMON cluster-randomised study . Clinical Trials 2017; 14 :584-96. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Study protocol ("Prospektive cluster-randomisierte Untersuchung studienspezifisch adaptierter Strategien für das Monitoring vor Ort in Kombination mit zusätzlichen qualitätssichernden Maßnahmen") . www.tmf-ev.de/ADAMON/Downloads.aspx (accessed prior to 19 August 2021).

Fougerou‐Leurent 2019 {published and unpublished data}

  • Fougerou-Leurent C, Laviolle B, Bellissant E. Cost-effectiveness of full versus targeted monitoring of randomized controlled trials . Fundamental & Clinical Pharmacology 2018; 32 ( S1 ):49 (PM2-035). [ Google Scholar ]
  • Fougerou-Leurent C, Laviolle B, Tual C, Visseiche V, Veislinger A, Danjou H, et al. Impact of a targeted monitoring on data-quality and data-management workload of randomized controlled trials: a prospective comparative study . British Journal of Clinical Pharmacology 2019; 85 ( 12 ):2784-92. [DOI: 10.1111/bcp.14108] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Journot 2017 {published and unpublished data}

  • Journot V, Perusat-Villetorte S, Bouyssou C, Couffin-Cadiergues S, Tall A, Chene G. Remote preenrollment checking of consent forms to reduce nonconformity . Clinical Trials 2013; 10 :449-59. [ PubMed ] [ Google Scholar ]
  • Journot V, Pignon JP, Gaultier C, Daurat V, Bouxin-Metro A, Giraudeau B, et al. Validation of a risk-assessment scale and a risk-adapted monitoring plan for academic clinical research studies – the Pre-Optimon study . Contemporary Clinical Trials 2011; 32 :16-24. [ PubMed ] [ Google Scholar ]
  • Journot V. OPTIMON – first results of the French trial on optimisation of monitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/docs/Communications/2015-Montpellier/OPTIMON%20-%20EpiClin%20Montpellier%202015-05-20%20EN.pdf (accessed 2 October 2019).
  • Journot V. OPTIMON – the French trial on optimization of monitoring . SCT Annual Meeting; 2017 May 7-10; Liverpool, UK .
  • Study protocol: evaluation of the efficacy and cost of two monitoring strategies for public clinical research. OPTIMON study: OPTImisation of MONitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/DOCS/OPTIMON%20-%20Protocol%20v12.0%20EN%202008-04-21.pdf (accessed prior to 19 August 2021).

Knott 2015 {published and unpublished data}

  • Knott C, Valdes-Marquez E, Landray M, Armitage J, Hopewell J. Improving efficiency of on-site monitoring in multicentre clinical trials by targeting visits . Trials 2015; 16 ( Suppl 2 ):O49. [ Google Scholar ]

Liénard 2006 {published data only}

  • Liénard JL, Quinaux E, Fabre-Guillevin E, Piedbois P, Jouhaud A, Decoster G, et al. Impact of on-site initiation visits on patient recruitment and data quality in a randomized trial of adjuvant chemotherapy for breast cancer . Clinical Trials 2006; 3 ( 5 ):486-92. [DOI: 10.1177/1740774506070807] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Mealer 2013 {published data only}

  • Mealer M, Kittelson J, Thompson BT, Wheeler AP, Magee JC, Sokol RJ, et al. Remote source document verification in two national clinical trials networks: a pilot study . PloS One 2013; 8 ( 12 ):e81890. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Stenning 2018b {published data only}

  • Cragg WJ, Hurley C, Yorke-Edwards V, Stenning SP. Assessing the potential for prevention or earlier detection of on-site monitoring findings from randomised controlled trials: further analyses of findings from the prospective TEMPER triggered monitoring study . Clinical Trials 2021; 18 ( 1 ):115-26. [DOI: 10.1177/1740774520972650] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Diaz-Montana C, Choudhury R, Cragg W, Joffe N, Tappenden N, Sydes MR, et al. Managing our TEMPER: monitoring triggers and site matching algorithms for defining triggered and control sites in the temper study . Trials 2017; 18 :P149. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Diaz-Montana C, Cragg WJ, Choudhury R, Joffe N, Sydes MR, Stenning SP. Implementing monitoring triggers and matching of triggered and control sites in the TEMPER study: a description and evaluation of a triggered monitoring management system . Trials 2019; 20 :227. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Stenning SP, Cragg WJ, Joffe N, Diaz-Montana C, Choudhury R, Sydes MR, et al. Triggered or routine site monitoring visits for randomised controlled trials: results of TEMPER, a prospective, matched-pair study . Clinical Trials 2018; 15 :600-9. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Study protocol: TEMPER (TargetEd Monitoring: Prospective Evaluation and Refinement) prospective evaluation and refinement of a targeted on-site monitoring strategy for multicentre cancer clinical trials . journals.sagepub.com/doi/suppl/10.1177/1740774518793379/suppl_file/793379_supp_mat_2.pdf (accessed prior to 19 August 2021).

Wyman 2020 {published data only}

  • Hullsiek KH, Kagan JM, Engen N, Grarup J, Hudson F, Denning ET, et al. Investigating the efficacy of clinical trial monitoring strategies: design and implementation of the cluster randomized START monitoring substudy . Therapeutic Innovation and Regulatory Science 2015; 49 :225-33. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Wyman Engen N, Huppler Hullsiek K, Belloso WH, Finley E, Hudson F, Denning E, et al. A randomized evaluation of on-site monitoring nested in a multinational randomized trial . Clinical Trials 2020; 17 ( 1 ):3-14. [DOI: 10.1177/1740774519881616] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

References to studies excluded from this review

Agrafiotis 2018 {published data only}

  • Agrafiotis DK, Lobanov VS, Farnum MA, Yang E, Ciervo J, Walega M, et al. Risk-based monitoring of clinical trials: an integrative approach . Clinical Therapeutics 2018; 40 :1204-12. [ PubMed ] [ Google Scholar ]

Andersen 2015 {published data only}

  • Andersen JR, Byrjalsen I, Bihlet A, Kalakou F, Hoeck HC, Hansen G, et al. Impact of source data verification on data quality in clinical trials: an empirical post hoc analysis of three phase 3 randomized clinical trials . British Journal of Clinical Pharmacology 2015; 79 :660-8. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Bailey 2017 {published data only}

  • Bailey L, Straw FK, George SE. Implementing a risk based monitoring approach in the early phase myeloma portfolio at Leeds CTRU . Trials 2017; 18 :220. [ Google Scholar ]

Bakobaki 2011 {published data only}

  • Bakobaki J, Rauchenberger M, Kaganson N, McCormack S, Stenning S, Meredith S. The potential for central monitoring techniques to replace on-site monitoring in clinical trials: a review of monitoring findings from an international multi-centre clinical trial . Clinical Trials 2011; 8 :454-5. [ PubMed ] [ Google Scholar ]

Bakobaki 2012 {published data only}

  • Bakobaki JM, Rauchenberger M, Joffe N, McCormack S, Stenning S, Meredith S. The potential for central monitoring techniques to replace on-site monitoring: findings from an international multi-centre clinical trial . Clinical Trials 2012; 9 :257-64. [ PubMed ] [ Google Scholar ]

Biglan 2016 {published data only}

  • Biglan K, Brocht A, Raca P. Implementing risk-based monitoring (RBM) in STEADY-PD III, a phase III multi-site clinical drug trial for Parkinson disease . Movement Disorders 2016; 31 ( 9 ):E10. [ Google Scholar ]

Collett 2019 {published data only}

  • Collett L, Gidman E, Rogers C. Automation of clinical trial statistical monitoring . Trials 2019; 20 ( Suppl 1 ):P-251. [ Google Scholar ]

Cragg 2019 {published data only}

  • Cragg WJ, Cafferty F, Diaz-Montana C, James EC, Joffe J, Mascarenhas M, et al. Early warnings and repayment plans: novel trial management methods for monitoring and managing data return rates in a multi-centre phase III randomised controlled trial with paper case report forms . Trials 2019; 20 :241. [DOI: 10.1186/s13063-019-3343-2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Del Alamo 2018 {published data only}

  • Del Alamo M, Sanchez AI, Serrano ML, Aguilar M, Arcas M, Alvarez A, et al. Monitoring strategies for clinical trials in primary care: an independent clinical research perspective . Basic & Clinical Pharmacology & Toxicology 2018; 123 :25-6. [ Google Scholar ]

Diani 2017 {published data only}

  • Diani CA, Rock A, Moll P. An evaluation of the effectiveness of a risk-based monitoring approach implemented with clinical trials involving implantable cardiac medical devices . Clinical Trials 2017; 14 :575-83. [ PubMed ] [ Google Scholar ]

Diaz‐Montana 2019b {published data only}

  • Diaz-Montana C, Masters L, Love SB, Lensen S, Yorke-Edwards V, Sydes MR. Making performance metrics work: developing a triggered monitoring management system . Trials 2019; 20 ( Suppl 1 ):P-63. [ Google Scholar ]

Edwards 2014 {published data only}

  • Edwards P, Shakur H, Barnetson L, Prieto D, Evans S, Roberts I. Central and statistical data monitoring in the Clinical Randomisation of an Antifibrinolytic in Significant Haemorrhage (CRASH-2) trial . Clinical Trials 2014; 11 :336-43. [ PubMed ] [ Google Scholar ]

Elsa 2011 {published data only}

  • Elsa VM, Jemma HC, Martin L, Jane A. A key risk indicator approach to central statistical monitoring in multicentre clinical trials: method development in the context of an ongoing large-scale randomized trial . Trials 2011; 12 :A135. [ Google Scholar ]

Fu 2021 {published data only}

  • Fu ZY, Liu XH, Zhao SH, Yuan YN, Jiang M. A preliminary analysis of remote monitoring practice in clinical trials . Chinese Journal of New Drugs 2021; 30 ( 3 ):209-14. [ Google Scholar ]

Hatayama 2020 {published data only}

  • Hatayama T, Yasui S. Bayesian central statistical monitoring using finite mixture models in multicenter clinical trials . Contemporary Clinical Trials Communication 2020; 19 :100566. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Heels‐Ansdell 2010 {published data only}

  • Heels-Ansdell D, Walter S, Zytaruk N, Guyatt G, Crowther M, Warkentin T, et al. Central statistical monitoring of an international thromboprophylaxis trial . American Journal of Respiratory and Critical Care Medicine 2010; 181 :A6041. [ Google Scholar ]

Higa 2020 {published data only}

  • Higa A, Yagi M, Hayashi K, Kosako M, Akiho H. Risk-based monitoring approach to ensure the quality of clinical study data and enable effective monitoring . Therapeutic Innovation and Regulatory Science 2020; 54 ( 1 ):139-43. [ PubMed ] [ Google Scholar ]

Hirase 2016 {published data only}

  • Hirase K, Fukuda-Doi M, Okazaki S, Uotani M, Ohara H, Furukawa A, et al. Development of an efficient monitoring method for investigator-initiated clinical trials: lessons from the experience of ATACH-II trial . Japanese Pharmacology and Therapeutics 2016; 44 :s150-4. [ Google Scholar ]

Jones 2019 {published data only}

  • Jones L, Ogburn E, Yu LM, Begum N, Long A, Hobbs FD. On-site monitoring of primary outcomes is important in primary care clinical trials: Benefits of Aldosterone Receptor Antagonism in Chronic Kidney Disease (BARACK-D) trial – a case study . Trials 2019; 20 ( Suppl 1 ):P-272. [ Google Scholar ]

Jung 2020 {published data only}

  • Jung HY, Jeon Y, Seong SJ, Seo JJ, Choi JY, Cho JH, et al. Information and communication technology-based centralized monitoring system to increase adherence to immunosuppressive medication in kidney transplant recipients: a randomized controlled trial . Nephrology, Dialysis, Transplantation 2020; 35 ( Suppl 3 ):gfaa143.P1734. [DOI: 10.1093/ndt/gfaa143.P1734] [ CrossRef ] [ Google Scholar ]

Kim 2011 {published data only}

  • Kim J, Zhao W, Pauls K, Goddard T. Integration of site performance monitoring module in web-based CTMS for a global trial . Clinical Trials 2011; 8 :450. [ Google Scholar ]

Kim 2021 {published data only}

  • Kim S, Kim Y, Hong Y, Kim Y, Lim JS, Lee J, et al. Feasibility of a hybrid risk-adapted monitoring system in investigator-sponsored trials in cancer . Therapeutic Innovation and Regulatory Science 2021; 55 ( 1 ):180-9. [ PubMed ] [ Google Scholar ]

Lane 2013 {published data only}

  • Lane JA, Wade J, Down L, Bonnington S, Holding PN, Lennon T, et al. A Peer Review Intervention for Monitoring and Evaluating sites (PRIME) that improved randomized controlled trial conduct and performance . Journal of Clinical Epidemiology 2011; 64 :628-36. [ PubMed ] [ Google Scholar ]
  • Lane JA. Improving trial quality through a new site monitoring process: experience from the Protect Study . Clinical Trials 2008; 5 :404. [ Google Scholar ]
  • Lane JJ, Davis M, Down E, Macefield R, Neal D, Hamdy F, et al. Evaluation of source data verification in a multicentre cancer trial (PROTECT) . Trials 2013; 14 :83. [ Google Scholar ]

Lim 2017 {published data only}

  • Lim JY, Hackett M, Munoz-Venturelli P, Arima H, Middleton S, Olavarria VV, et al. Monitoring a large-scale international cluster stroke trial: lessons from head position in stroke trial . Stroke 2017; 48 :ATP371. [ Google Scholar ]

Lindley 2015 {published data only}

  • Lindley RI. Cost effective central monitoring of clinical trials . Neuroepidemiology 2015; 45 :303. [ Google Scholar ]

Miyamoto 2019 {published data only}

  • Miyamoto K, Nakamura K, Mizusawa J, Balincourt C, Fukuda H. Study risk assessment of Japan Clinical Oncology Group (JCOG) clinical trials using the European Organisation for Research and Treatment of Cancer (EORTC) study risk calculator . Japanese Journal of Clinical Oncology 2019; 49 ( 8 ):727-33. [ PubMed ] [ Google Scholar ]

Morales 2020 {published data only}

  • Morales A, Miropolsky L, Seagal I, Evans K, Romero H, Katz N. Case studies on the use of central statistical monitoring and interventions to optimize data quality in clinical trials . Osteoarthritis and Cartilage 2020; 28 :S460. [ Google Scholar ]

Murphy 2019 {published data only}

  • Murphy J, Durkina M, Jadav P, Kiru G. An assessment of feasibility and cost-effectiveness of remote monitoring on a multicentre observational study . Trials 2019; 20 ( Suppl 1 ):P-265. [ Google Scholar ]

Pei 2019 {published data only}

  • Pei XJ, Han L, Wang T. Enhancing the system of expedited reporting of safety data during clinical trials of drugs and strengthening the management of clinical trial risk monitoring . Chinese Journal of New Drugs 2019; 28 ( 17 ):2113-6. [ Google Scholar ]

Stock 2017 {published data only}

  • Stock E, Mi Z, Biswas K, Belitskaya-Levy I. Surveillance of clinical trial performance using centralized statistical monitoring . Trials 2017; 18 :200. [ Google Scholar ]

Sudo 2017 {published data only}

  • Sudo T, Sato A. Investigation of the factors affecting risk-based quality management of investigator-initiated investigational new-drug trials for unapproved anticancer drugs in Japan . Therapeutic Innovation and Regulatory Science 2017; 51 :589-96. [DOI: 10.1177/2168479017705155] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Thom 1996 {published data only}

  • Thom E, Das A, Mercer B, McNellis D. Clinical trial monitoring in the face of changing clinical practice. The NICHD MFMU Network . Controlled Clinical Trials 1996; 17 :58S-59S. [ Google Scholar ]

Tudur Smith 2012b {published data only}

  • Tudur Smith C, Stocken DD, Dunn J, Cox T, Ghaneh P, Cunningham D, et al. The value of source data verification in a cancer clinical trial . PloS One 2012; 7 ( 12 ):e51623. [ PMC free article ] [ PubMed ] [ Google Scholar ]

von Niederhäusern 2017 {published data only}

  • von Niederhäusern B, Orleth A, Schädelin S, Rawi N, Velkopolszky M, Becherer C, et al. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned . BMC Medical Research Methodology 2017; 17 :26. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Yamada 2021 {published data only}

  • Yamada O, Chiu SW, Takata M, Abe M, Shoji M, Kyotani E, et al. Clinical trial monitoring effectiveness: remote risk-based monitoring versus on-site monitoring with 100% source data verification . Clinical Trials (London, England) 2021; 18 ( 2 ):158-67. [DOI: 10.1177/1740774520971254] [PMID: ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Yorke‐Edwards 2019 {published data only}

  • Yorke-Edwards VE, Diaz-Montana C, Mavridou K, Lensen S, Sydes MR, Love SB. Risk-based trial monitoring: site performance metrics across time . Trials 2019; 20 ( Suppl 1 ):P-33. [ Google Scholar ]

Zhao 2013 {published data only}

  • Zhao W. Risk-based monitoring approach in practice-combination of real-time central monitoring and on-site source document verification . Clinical Trials 2013; 10 :S4. [ Google Scholar ]

Additional references

ADAMON study protocol 2008

  • ADAMON study protocol. Study protocol ("Prospektive cluster-randomisierte Untersuchung studienspezifisch adaptierter Strategien für das Monitoring vor Ort in Kombination mit zusätzlichen qualitätssichernden Maßnahmen" [prospective cluster-randomised study of study-specific adapted strategies for on-site monitoring combined with additional quality-assurance measures]) . www.tmf-ev.de/ADAMON/Downloads.aspx (accessed prior to 19 August 2021).

Anon 2012

  • Anon. Education section: Studies Within A Trial (SWAT) . Journal of Evidence-based Medicine 2012; 5 :44-5. [ PubMed ] [ Google Scholar ]

Baigent 2008

  • Baigent C, Harrell FE, Buyse M, Emberson JR, Altman DG. Ensuring trial validity by data quality assurance and diversification of monitoring methods . Clinical Trials 2008; 5 :49-55. [ PubMed ] [ Google Scholar ]

Bensaaud 2020

  • Bensaaud A, Gibson I, Jones J, Flaherty G, Sultan S, Tawfick W, et al. A telephone reminder to enhance adherence to interventions in cardiovascular randomized trials: a protocol for a Study Within A Trial (SWAT) . Journal of Evidence-based Medicine 2020; 13 ( 1 ):81-4. [DOI: 10.1111/jebm.12375] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Brosteanu 2009

Brosteanu 2017a

  • Buyse M, Trotta L, Saad ED, Sakamoto J. Central statistical monitoring of investigator-led clinical trials in oncology . International Journal of Clinical Oncology 2020; 25 ( 7 ):1207-14. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Chene G. Evaluation of the efficacy and cost of two monitoring strategies for public clinical research. OPTIMON study: OPTImisation of MONitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/DOCS/OPTIMON%20-%20Protocol%20v12.0%20EN%202008-04-21.pdf (accessed 2 October 2019).

Cragg 2021a

  • Cragg WJ, Hurley C, Yorke-Edwards V, Stenning SP. Assessing the potential for prevention or earlier detection of on-site monitoring findings from randomised controlled trials: further analyses of findings from the prospective TEMPER triggered monitoring study . Clinical Trials 2021; 18 ( 1 ):115-26. [DOI: 10.1177/1740774520972650] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Cragg 2021b

  • Cragg WJ, Hurley C, Yorke-Edwards V, Stenning SP. Dynamic methods for ongoing assessment of site-level risk in risk-based monitoring of clinical trials: a scoping review . Clinical Trials 2021; 18 ( 2 ):245-59. [DOI: 10.1177/1740774520976561] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

DerSimonian 1986

  • DerSimonian R, Laird N. Meta-analysis in clinical trials . Controlled Clinical Trials 1986; 7 ( 3 ):177-88. [ PubMed ] [ Google Scholar ]

Diaz‐Montana 2019a

  • Duley L, Antman K, Arena J, Avezum A, Blumenthal M, Bosch J, et al. Specific barriers to the conduct of randomised trials . Clinical Trials 2008; 5 :40-8. [ PubMed ] [ Google Scholar ]
  • European Commission. Risk proportionate approaches in clinical trials. Recommendations of the expert group on clinical trials for the implementation of Regulation (EU) No 536/2014 on clinical trials on medicinal products for human use . ec.europa.eu/health/sites/default/files/files/eudralex/vol-10/2017_04_25_risk_proportionate_approaches_in_ct.pdf (accessed 28 July 2021).
  • European Medicines Agency. Reflection paper on risk based quality management in clinical trials, 2013 . ema.europa.eu/docs/en_GB/document_library/Scientific_guidelines/2013/11/WC500155491.pdf (accessed 2 July 2021).
  • European Medicines Agency. Procedure for reporting of GCP inspections requested by the Committee for Medicinal Products for Human Use, 2017 . ema.europa.eu/en/documents/regulatory-procedural-guideline/ins-gcp-4-procedure-reporting-good-clinical-practice-inspections-requested-chmp_en.pdf (accessed 2 July 2021).
  • European Medicines Agency. Guidance on the management of clinical trials during the COVID-19 (coronavirus) pandemic, version 4 . ec.europa.eu/health/sites/default/files/files/eudralex/vol-10/guidanceclinicaltrials_covid19_en.pdf (accessed August 2021).

Embleton‐Thirsk 2019

  • Embleton-Thirsk A, Deane E, Townsend S, Farrelly L, Popoola B, Parker J, et al. Impact of retrospective data verification to prepare the ICON6 trial for use in a marketing authorization application . Clinical Trials 2019; 16 ( 5 ):502-11. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Effective Practice Organisation of Care. What study designs should be included in an EPOC review and what should they be called? EPOC resources for review authors, 2016 . epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/EPOC%20Study%20Designs%20About.pdf (accessed 2 July 2021).
  • Effective Practice Organisation of Care. Suggested risk of bias criteria for EPOC reviews. EPOC resources for review authors, 2017 . epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/Resources-for-authors2017/suggested_risk_of_bias_criteria_for_epoc_reviews.pdf (accessed 2 July 2021).
  • US Department of Health and Human Services Food and Drug Administration. Guidance for industry oversight of clinical investigations – a risk-based approach to monitoring . www.fda.gov/downloads/Drugs/Guidances/UCM269919.pdf (accessed 2 July 2021).
  • US Food and Drug Administration. FDA guidance on conduct of clinical trials of medical products during COVID-19 public health emergency: guidance for industry, investigators, and institutional review boards, 2020 . www.fda.gov/media/136238/download (accessed 19 August 2021).

Funning 2009

  • Funning S, Grahnén A, Eriksson K, Kettis-Linblad A. Quality assurance within the scope of good clinical practice (GCP) – what is the cost of GCP-related activities? A survey within the Swedish Association of the Pharmaceutical Industry (LIF)'s members . Quality Assurance Journal 2009; 12 ( 1 ):3-7. [DOI: 10.1002/qaj.433] [ CrossRef ] [ Google Scholar ]
  • Gaba P, Bhatt DL. The COVID-19 pandemic: a catalyst to improve clinical trials . Nature Reviews. Cardiology 2020; 17 :673-5. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Gough J, Wilson B, Zerola M. Defining a central monitoring capability: sharing the experience of TransCelerate BioPharma's approach, part 2 . Therapeutic Innovation and Regulatory Science 2016; 50 ( 1 ):8-14. [DOI: 10.1177/2168479015618696] [PMID: ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

GRADEpro GDT [Computer program]

  • GRADEpro GDT. Version accessed August 2021. Hamilton (ON): McMaster University (developed by Evidence Prime Inc), 2020. Available at gradepro.org.

Grignolo 2011

  • Grignolo A. The Clinical Trials Transformation Initiative (CTTI) . Annali dell'Istituto Superiore di Sanita 2011; 47 :14-8. [DOI: 10.4415/ANN_11_01_04] [PMID: ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Guyatt 2013a

  • Guyatt GH, Oxman AD, Santesso N, Helfand M, Vist G, Kunz R, et al. GRADE guidelines: 12. Preparing summary of findings tables – binary outcomes . Journal of Clinical Epidemiology 2013; 66 :158-72. [ PubMed ] [ Google Scholar ]

Guyatt 2013b

  • Guyatt GH, Thorlund K, Oxman AD, Walter SD, Patrick D, Furukawa TA, et al. GRADE guidelines: 13. Preparing summary of findings tables and evidence profiles – continuous outcomes . Journal of Clinical Epidemiology 2013; 66 :173-83. [ PubMed ] [ Google Scholar ]
  • Hearn J, Sullivan R. The impact of the 'Clinical Trials' directive on the cost and conduct of non-commercial cancer trials in the UK . European Journal of Cancer 2007; 43 :8-13. [ PubMed ] [ Google Scholar ]

Higgins 2016

  • Higgins JP, Lasserson T, Chandler J, Tovey D, Churchill R. Methodological Expectations of Cochrane Intervention Reviews . London (UK): Cochrane, 2016. [ Google Scholar ]

Higgins 2020

  • Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.1 (updated September 2020). Cochrane, 2020 . Available from handbook: training.cochrane.org/handbook/archive/v6.1 .

Horsley 2011

  • Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews . Cochrane Database of Systematic Reviews 2011, Issue 8 . Art. No: MR000026. [DOI: 10.1002/14651858.MR000026.pub2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Houghton 2020

  • Houghton C, Dowling M, Meskell P, Hunter A, Gardner H, Conway A, et al. Factors that impact on recruitment to randomised trials in health care: a qualitative evidence synthesis . Cochrane Database of Systematic Reviews 2020, Issue 10 . Art. No: MR000045. [DOI: 10.1002/14651858.MR000045.pub2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Hullsiek 2015

  • Hullsiek KH, Kagan JM, Engen N, Grarup J, Hudson F, Denning ET, et al. Investigating the efficacy of clinical trial monitoring strategies: design and implementation of the cluster randomized START monitoring substudy . Therapeutic Innovation and Regulatory Science 2015; 49 ( 2 ):225-33. [DOI: 10.1177/2168479014555912] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Hurley 2016

  • Hurley C, Shiely F, Power J, Clarke M, Eustace JA, Flanagan E, et al. Risk based monitoring (RBM) tools for clinical trials: a systematic review . Contemporary Clinical Trials 2016; 51 :15-27. [ PubMed ] [ Google Scholar ]
  • International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. ICH Harmonised Tripartite Guideline: guideline for good clinical practice E6 (R2) . www.ema.europa.eu/en/documents/scientific-guideline/ich-e-6-r2-guideline-good-clinical-practice-step-5_en.pdf (accessed 28 July 2021).
  • International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. Integrated Addendum to ICH E6(R1): guideline for good clinical practice E6R(2) . database.ich.org/sites/default/files/E6_R2_Addendum.pdf (accessed 2 July 2021).

Izmailova 2020

  • Izmailova ES, Ellis R, Benko C. Remote monitoring in clinical trials during the COVID-19 pandemic . Clinical and Translational Science 2020; 13 ( 5 ):838-41. [DOI: 10.1111/cts.12834] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Journot 2011

Journot 2013; Journot 2015

  • Journot V. OPTIMON – first results of the French trial on optimisation of monitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/docs/Communications/2015-Montpellier/OPTIMON%20-%20EpiClin%20Montpellier%202015-05-20%20EN.pdf (accessed 28 July 2021).

Landray 2012

  • Landray MJ, Grandinetti C, Kramer JM, Morrison BW, Ball L, Sherman RE. Clinical trials: rethinking how we ensure quality . Drug Information Journal 2012; 46 :657-60. [DOI: 10.1177/0092861512464372] [ CrossRef ] [ Google Scholar ]

Lefebvre 2011

  • Lefebvre C, Manheimer E, Glanville J. Chapter 6: Searching for studies. In: Higgins JP, Green S, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011 . Available from training.cochrane.org/handbook/archive/v5.1/ .
  • Love SB, Armstrong E, Bayliss C, Boulter M, Fox L, Grumett J, et al. Monitoring advances including consent: learning from COVID-19 trials and other trials running in UKCRC registered clinical trials units during the pandemic . Trials 2021; 22 :279. [ PMC free article ] [ PubMed ] [ Google Scholar ]

McDermott 2020

  • McDermott MM, Newman AB. Preserving clinical trial integrity during the coronavirus pandemic . JAMA 2020; 323 ( 21 ):2135-6. [ PubMed ] [ Google Scholar ]

McGowan 2016

  • McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 guideline statement . Journal of Clinical Epidemiology 2016; 75 :40-6. [DOI: 10.1016/j.jclinepi.2016.01.021] [PMID: ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Meredith 2011

  • Meredith S, Ward M, Booth G, Fisher A, Gamble C, House H, et al. Risk-adapted approaches to the management of clinical trials: guidance from the Department of Health (DH) / Medical Research Council (MRC)/Medicines and Healthcare Products Regulatory Agency (MHRA) Clinical Trials Working Group . Trials 2011; 12 :A39. [ Google Scholar ]
  • Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement . Journal of Clinical Epidemiology 2009; 62 :1006-12. [ PubMed ] [ Google Scholar ]

Morrison 2011

  • Morrison BW, Cochran CJ, White JG, Harley J, Kleppinger CF, Liu A, et al. Monitoring the quality of conduct of clinical trials: a survey of current practices . Clinical Trials 2011; 8 ( 3 ):342-9. [ PubMed ] [ Google Scholar ]
  • Organisation for Economic Co-operation and Development. OECD recommendation on the governance of clinical trials . oecd.org/sti/inno/oecdrecommendationonthegovernanceofclinicaltrials.htm (accessed 2 July 2021).
  • Olsen R, Bihlet AR, Kalakou F. The impact of clinical trial monitoring approaches on data integrity and cost – a review of current literature . European Journal of Clinical Pharmacology 2016; 72 :399-412. [ PubMed ] [ Google Scholar ]

OPTIMON study protocol 2008

  • OPTIMON study protocol. Study protocol: evaluation of the efficacy and cost of two monitoring strategies for public clinical research. OPTIMON study: OPTImisation of MONitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/DOCS/OPTIMON%20-%20Protocol%20v12.0%20EN%202008-04-21.pdf (accessed prior to 19 August 2021).
  • Oxman AD, Guyatt GH. A consumer's guide to subgroup analyses . Annals of Internal Medicine 1992; 116 :78-84. [ PubMed ] [ Google Scholar ]

Review Manager 2014 [Computer program]

  • Review Manager 5 (RevMan 5) . Version 5.3. Copenhagen: Nordic Cochrane Centre, The Cochrane Collaboration, 2014.
  • Monitoring Platform of the Swiss Clinical Trial Organisation (SCTO). Fact sheet: central data monitoring in clinical trials. V 1.0 . www.scto.ch/monitoring (accessed 2 July 2021).

Shiely 2021

  • Shiely F, Foley J, Stone A, Cobbe E, Browne S, Murphy E, et al. Managing clinical trials during COVID-19: experience from a clinical research facility . Trials 2021; 22 :62. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Stenning 2018a

  • Sun X, Briel M, Walter SD, Guyatt GH. Is a subgroup effect believable? Updating criteria to evaluate the credibility of subgroup analyses . BMJ 2010; 340 :c117. [ PubMed ] [ Google Scholar ]

Tantsyura 2015

  • Tantsyura V, Dunn IM, Fendt K. Risk-based monitoring: a closer statistical look at source document verification, queries, study size effects, and data quality . Therapeutic Innovation and Regulatory Science 2015; 49 :903-10. [ PubMed ] [ Google Scholar ]

Thomas 2010 [Computer program]

  • Thomas J, Brunton J, Graziosi S. EPPI-Reviewer: software for research synthesis. Version 4.0. EPPI-Centre software. London (UK): Social Science Research Unit, Institute of Education, University of London, 2010.

TransCelerate BioPharma Inc 2014

  • TransCelerate BioPharma Inc. Risk-based monitoring methodology . www.transceleratebiopharmainc.com/wp-content/uploads/2016/01/TransCelerate-RBM-Position-Paper-FINAL-30MAY2013.pdf (accessed 28 July 2021).

Treweek 2018a

  • Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)? Trials 2018; 19 :139. [DOI: 10.1186/s13063-018-2535-5] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Treweek 2018b

  • Treweek S, Pitkethly M, Cook J, Fraser C, Mitchell E, Sullivan F, et al. Strategies to improve recruitment to randomised trials . Cochrane Database of Systematic Reviews 2018, Issue 2 . Art. No: MR000013. [DOI: 10.1002/14651858.MR000013.pub6] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Tudur Smith 2012a

  • Tudur Smith C, Stocken DD, Dunn J, Cox T, Ghaneh P, Cunningham D, et al. The value of source data verification in a cancer clinical trial . PloS One 2012; 7 :e51623. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Tudur Smith 2014

  • Tudur Smith C, Williamson P, Jones A, Smyth A, Hewer SL, Gamble C. Risk-proportionate clinical trial monitoring: an example approach from a non-commercial trials unit . Trials 2014; 15 :127. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Valdés‐Márquez 2011

  • Valdés-Márquez E, Hopewell CJ, Landray M, Armitage J. A key risk indicator approach to central statistical monitoring in multicentre clinical trials: method development in the context of an ongoing large-scale randomized trial . Trials 2011; 12 ( Suppl 1 ):A135. [ Google Scholar ]
  • Venet D, Doffagne E, Burzykowski T, Beckers F, Tellier Y, Genevois-Marlin E, et al. A statistical approach to central monitoring of data quality in clinical trials . Clinical Trials 2012; 9 :705-13. [ PubMed ] [ Google Scholar ]

von Niederhausern 2017

  • von Niederhäusern B, Orleth A, Schädelin S, Rawi N, Velkopolszky M, Becherer C, et al. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned . BMC Medical Research Methodology 2017; 17 :26. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Wyman Engen 2020

  • Wyman Engen N, Huppler Hullsiek K, Belloso WH, Finley E, Hudson F, Denning E, et al. A randomized evaluation of on-site monitoring nested in a multinational randomized trial . Clinical Trials 2020; 17 ( 1 ):3-14. [DOI: 10.1177/1740774519881616] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Young T, Hopewell S. Methods for obtaining unpublished data . Cochrane Database of Systematic Reviews 2011, Issue 11 . Art. No: MR000027. [DOI: 10.1002/14651858.MR000027.pub2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

References to other published versions of this review

Klatte 2019

  • Klatte K, Pauli-Magnus C, Love S, Sydes M, Benkert P, Bruni N, et al. Monitoring strategies for clinical intervention studies . Cochrane Database of Systematic Reviews 2019, Issue 12 . Art. No: MR000051. [DOI: 10.1002/14651858.MR000051] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Learn About Clinical Studies

What is a clinical study?

A clinical study involves research using human volunteers (also called participants) that is intended to add to medical knowledge. There are two main types of clinical studies: clinical trials (also called interventional studies) and observational studies. ClinicalTrials.gov includes both interventional and observational studies.

Clinical trials

In a clinical trial, participants receive specific interventions according to the research plan or protocol created by the investigators. These interventions may be medical products, such as drugs or devices; procedures; or changes to participants' behavior, such as diet. Clinical trials may compare a new medical approach to a standard one that is already available, to a placebo that contains no active ingredients, or to no intervention. Some clinical trials compare interventions that are already available to each other. When a new product or approach is being studied, it is not usually known whether it will be helpful, harmful, or no different than available alternatives (including no intervention). The investigators try to determine the safety and efficacy of the intervention by measuring certain outcomes in the participants. For example, investigators may give a drug or treatment to participants who have high blood pressure to see whether their blood pressure decreases.
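
The blood-pressure example above comes down to comparing a measured outcome between study arms. A minimal sketch of that comparison, with invented numbers (the figures, arm sizes, and variable names are illustrative only, not from any real trial):

```python
from statistics import mean

# Hypothetical outcome data: change in systolic blood pressure (mmHg)
# per participant after 12 weeks. All numbers are invented.
treatment_arm = [-12, -9, -15, -8, -11]   # received the study drug
placebo_arm = [-3, -1, -4, 0, -2]         # received placebo

# Crude effect estimate: difference in mean change between arms.
# A real analysis would also quantify uncertainty (e.g. a confidence interval).
effect = mean(treatment_arm) - mean(placebo_arm)
print(round(effect, 1))  # → -9.0 (negative favors the treatment)
```

A negative difference here means blood pressure fell more in the treatment arm, which is the kind of outcome measurement the protocol pre-specifies.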

Clinical trials used in drug development are sometimes described by phase. These phases are defined by the Food and Drug Administration (FDA).

Some people who are not eligible to participate in a clinical trial may be able to get experimental drugs or devices outside of a clinical trial through expanded access. See more information on expanded access from the FDA .

Observational studies

In an observational study, investigators assess health outcomes in groups of participants according to a research plan or protocol. Participants may receive interventions (which can include medical products such as drugs or devices) or procedures as part of their routine medical care, but participants are not assigned to specific interventions by the investigator (as in a clinical trial). For example, investigators may observe a group of older adults to learn more about the effects of different lifestyles on cardiac health.

Who conducts clinical studies?

Every clinical study is led by a principal investigator, who is often a medical doctor. Clinical studies also have a research team that may include doctors, nurses, social workers, and other health care professionals.

Clinical studies can be sponsored, or funded, by pharmaceutical companies, academic medical centers, voluntary groups, and other organizations, in addition to Federal agencies such as the National Institutes of Health, the U.S. Department of Defense, and the U.S. Department of Veterans Affairs. Doctors, other health care providers, and other individuals can also sponsor clinical research.

Where are clinical studies conducted?

Clinical studies can take place in many locations, including hospitals, universities, doctors' offices, and community clinics. The location depends on who is conducting the study.

How long do clinical studies last?

The length of a clinical study varies, depending on what is being studied. Participants are told how long the study will last before they enroll.

Reasons for conducting clinical studies

In general, clinical studies are designed to add to medical knowledge related to the treatment, diagnosis, and prevention of diseases or conditions. Some common reasons for conducting clinical studies include:

  • Evaluating one or more interventions (for example, drugs, medical devices, approaches to surgery or radiation therapy) for treating a disease, syndrome, or condition
  • Finding ways to prevent the initial development or recurrence of a disease or condition. These can include medicines, vaccines, or lifestyle changes, among other approaches.
  • Evaluating one or more interventions aimed at identifying or diagnosing a particular disease or condition
  • Examining methods for identifying a condition or the risk factors for that condition
  • Exploring and measuring ways to improve the comfort and quality of life through supportive care for people with a chronic illness

Participating in Clinical Studies

A clinical study is conducted according to a research plan known as the protocol. The protocol is designed to answer specific research questions and safeguard the health of participants. It contains the following information:

  • The reason for conducting the study
  • Who may participate in the study (the eligibility criteria)
  • The number of participants needed
  • The schedule of tests, procedures, or drugs and their dosages
  • The length of the study
  • What information will be gathered about the participants
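
The protocol elements listed above can be thought of as one structured record. A hypothetical sketch of such a record (the class and field names are illustrative, not taken from any registry schema):

```python
from dataclasses import dataclass, field

@dataclass
class Protocol:
    # Illustrative model of the core elements a protocol defines;
    # field names are hypothetical, not a ClinicalTrials.gov schema.
    rationale: str                 # the reason for conducting the study
    eligibility_criteria: list     # who may participate
    target_enrollment: int         # number of participants needed
    schedule: dict                 # tests, procedures, drugs, and dosages
    duration_weeks: int            # length of the study
    collected_data: list = field(default_factory=list)  # information gathered

example = Protocol(
    rationale="Does drug X lower blood pressure more than placebo?",
    eligibility_criteria=["age 18-65", "diagnosed hypertension"],
    target_enrollment=200,
    schedule={"week 0": "baseline visit", "weeks 1-12": "drug X 10 mg daily"},
    duration_weeks=12,
    collected_data=["systolic blood pressure", "adverse events"],
)
print(example.target_enrollment)  # → 200
```

Every field answers one of the protocol questions above; a real protocol document elaborates each in far more detail.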

Who can participate in a clinical study?

Clinical studies have standards outlining who can participate. These standards are called eligibility criteria and are listed in the protocol. Some research studies seek participants who have the illnesses or conditions that will be studied; other studies look for healthy volunteers; and some studies are limited to a predetermined group of people who are asked by researchers to enroll.

Eligibility. The factors that allow someone to participate in a clinical study are called inclusion criteria, and the factors that disqualify someone from participating are called exclusion criteria. They are based on characteristics such as age, gender, the type and stage of a disease, previous treatment history, and other medical conditions.
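Conceptually, eligibility screening works like a pair of filters: every inclusion criterion must hold, and no exclusion criterion may apply. A hypothetical sketch of that logic (the criteria and the candidate record are invented for illustration):

```python
def is_eligible(candidate, inclusion, exclusion):
    """inclusion/exclusion are lists of predicates over a candidate record.
    Eligible means: all inclusion criteria hold AND no exclusion criterion applies."""
    meets_inclusion = all(rule(candidate) for rule in inclusion)
    hits_exclusion = any(rule(candidate) for rule in exclusion)
    return meets_inclusion and not hits_exclusion

# Illustrative criteria for a hypothetical hypertension trial
inclusion = [
    lambda c: 18 <= c["age"] <= 65,                 # required age range
    lambda c: c["diagnosis"] == "hypertension",     # condition under study
]
exclusion = [
    lambda c: c["pregnant"],                                     # safety exclusion
    lambda c: "anticoagulant" in c["current_medications"],       # drug interaction
]

candidate = {"age": 54, "diagnosis": "hypertension",
             "pregnant": False, "current_medications": ["statin"]}
print(is_eligible(candidate, inclusion, exclusion))  # → True
```

Changing any single field (say, age 70) would flip the result, which mirrors how one unmet criterion disqualifies a volunteer.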

How are participants protected?

Informed consent is a process used by researchers to provide potential and enrolled participants with information about a clinical study. This information helps people decide whether they want to enroll or continue to participate in the study. The informed consent process is intended to protect participants and should provide enough information for a person to understand the risks of, potential benefits of, and alternatives to the study. In addition to the informed consent document, the process may involve recruitment materials, verbal instructions, question-and-answer sessions, and activities to measure participant understanding. In general, a person must sign an informed consent document before joining a study to show that he or she was given information on the risks, potential benefits, and alternatives and that he or she understands it. Signing the document and providing consent is not a contract. Participants may withdraw from a study at any time, even if the study is not over. See the Questions to Ask section on this page for questions to ask a health care provider or researcher about participating in a clinical study.

Institutional review boards. Each federally supported or conducted clinical study and each study of a drug, biological product, or medical device regulated by FDA must be reviewed, approved, and monitored by an institutional review board (IRB). An IRB is made up of doctors, researchers, and members of the community. Its role is to make sure that the study is ethical and that the rights and welfare of participants are protected. This includes making sure that research risks are minimized and are reasonable in relation to any potential benefits, among other responsibilities. The IRB also reviews the informed consent document.

In addition to being monitored by an IRB, some clinical studies are also monitored by data monitoring committees (also called data safety and monitoring boards).

Various Federal agencies, including the Office for Human Research Protections (OHRP) and FDA, have the authority to determine whether sponsors of certain clinical studies are adequately protecting research participants.

Typically, participants continue to see their usual health care providers while enrolled in a clinical study. While most clinical studies provide participants with medical products or interventions related to the illness or condition being studied, they do not provide extended or complete health care. By having his or her usual health care provider work with the research team, a participant can make sure that the study protocol will not conflict with other medications or treatments that he or she receives.

Participating in a clinical study contributes to medical knowledge. The results of these studies can make a difference in the care of future patients by providing information about the benefits and risks of therapeutic, preventative, or diagnostic products or interventions.

Clinical trials provide the basis for the development and marketing of new drugs, biological products, and medical devices. Sometimes, the safety and the effectiveness of the experimental approach or use may not be fully known at the time of the trial. Some trials may provide participants with the prospect of receiving direct medical benefits, while others do not. Most trials involve some risk of harm or injury to the participant, although it may not be greater than the risks related to routine medical care or disease progression. (For trials approved by IRBs, the IRB has decided that the risks of participation have been minimized and are reasonable in relation to anticipated benefits.) Many trials require participants to undergo additional procedures, tests, and assessments based on the study protocol. These requirements will be described in the informed consent document. A potential participant should also discuss these issues with members of the research team and with his or her usual health care provider.

Anyone interested in participating in a clinical study should know as much as possible about the study and feel comfortable asking the research team questions about the study, the related procedures, and any expenses. The following questions may be helpful during such a discussion. Answers to some of these questions are provided in the informed consent document. Many of the questions are specific to clinical trials, but some also apply to observational studies.

  • What is being studied?
  • Why do researchers believe the intervention being tested might be effective? Why might it not be effective? Has it been tested before?
  • What are the possible interventions that I might receive during the trial?
  • How will it be determined which interventions I receive (for example, by chance)?
  • Who will know which intervention I receive during the trial? Will I know? Will members of the research team know?
  • How do the possible risks, side effects, and benefits of this trial compare with those of my current treatment?
  • What will I have to do?
  • What tests and procedures are involved?
  • How often will I have to visit the hospital or clinic?
  • Will hospitalization be required?
  • How long will the study last?
  • Who will pay for my participation?
  • Will I be reimbursed for other expenses?
  • What type of long-term follow-up care is part of this trial?
  • If I benefit from the intervention, will I be allowed to continue receiving it after the trial ends?
  • Will results of the study be provided to me?
  • Who will oversee my medical care while I am participating in the trial?
  • What are my options if I am injured during the study?

Tools for Resolving Monitoring Visit Findings

Applied Clinical Trials

By Moe Alsumidaie

Monitoring visits are daunting not only for monitors but also for study sites. When monitors review data for a new study at a site they have only recently become acquainted with, they spend significant effort investigating common study-related issues and how the site operates, and they uncover many findings about data quality and site operations. Initial monitoring reports therefore tend to be long, issuing numerous actions to sites for resolution. Unfortunately, study sites juggling several studies sometimes find it challenging to keep track of monitoring actions and resolve them. In this article, I will discuss how my study teams manage monitoring reports and offer a tracking tool to assist sites with the process.
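The article's tracking tool itself is not reproduced here. As a hedged illustration only, the idea can be sketched as a simple table of findings, each with an owner, a due date, and a resolution status, so a site can see at a glance what remains open. All class and field names below are hypothetical, not the author's actual tool.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: each finding from a monitoring report becomes a
# tracked action with an owner, a due date, and a resolution flag.
@dataclass
class MonitoringAction:
    visit_date: date    # date of the monitoring visit that raised the finding
    description: str    # what the monitor asked the site to fix
    owner: str          # site role responsible for resolving it
    due: date           # agreed resolution deadline
    resolved: bool = False

class ActionTracker:
    def __init__(self) -> None:
        self.actions: list[MonitoringAction] = []

    def add(self, action: MonitoringAction) -> None:
        self.actions.append(action)

    def open_actions(self) -> list[MonitoringAction]:
        """Unresolved actions, earliest deadline first."""
        return sorted((a for a in self.actions if not a.resolved),
                      key=lambda a: a.due)

    def overdue(self, as_of: date) -> list[MonitoringAction]:
        """Unresolved actions whose deadline has passed as of the given date."""
        return [a for a in self.open_actions() if a.due < as_of]
```

In practice, sites often keep such a tracker in a shared spreadsheet; the point is simply that each monitoring finding becomes a dated, owned line item rather than a sentence buried in a long report.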

Conduct Internal QCs Before Monitoring Visits



April 16, 2024

Your guide to understanding phases of cancer clinical trials


Clinical trials can be an important resource for patients at every stage of their cancer diagnosis, but understanding the scientific terms, study protocols and process can be intimidating to many patients.

That’s why UW Health | Carbone Cancer Center prioritizes educational resources, including a team of Clinical Trial Nurse Navigators and the new uwhealth.org/cancertrials webpage, to ensure patients have accurate information about how clinical trials work and how their care and safety will be prioritized at every step.

“Having a good, educational source of information about clinical trials is so important because there are a lot of myths and misconceptions out there,” said Sarah Kotila, Clinical Trials Navigation Team Manager at Carbone Cancer Center.

One of the most common questions patients have is what the different phases of a clinical trial mean. Read more about each step in the important process of approving new cancer treatments.

Phase 1

In this initial step of a clinical trial, the research staff looks at the safety and appropriate dosage of a new treatment. They also watch for side effects. The number of patients enrolled in phase 1 trials is small, typically fewer than 50.

Kotila often hears patients ask if phase 1 trials are safe. She explains that any cancer treatment comes with potential risks and benefits, whether it’s established clinical care or treatments being tested in clinical trials. With clinical trials, a team of experts is closely monitoring the patient’s care and frequently checking in to see how the patient is doing and feeling.

“Patients should know there can be risks and benefits to any cancer treatment they receive, and this is the same with clinical trials,” she said.

Kotila adds that before the Food and Drug Administration approves the start of a clinical trial, considerable laboratory and preclinical research has already been done to prepare for this important next step.

Phase 2

Once a study has cleared its phase 1 benchmarks, it can move into phase 2. Researchers at this stage continue to monitor safety and focus on whether the new treatment is effective for certain diagnoses.

“In phase II, they’re having more people enroll to see if the treatment is effective in specific types of cancer,” she said. “It’s still a smaller number, usually less than 100 people.”

Because researchers are measuring whether the approach is effective, phase 2 typically lasts several months to two years to capture changes over time. They also continue to monitor for side effects that were not seen in phase 1 with the smaller group.

If the treatment proves to be effective for certain types of cancer, it can advance to phase 3 status.

Phase 3

This is the final step of testing before a treatment can be approved by the FDA for standard clinical use. The new treatment is compared directly with existing standard-of-care treatments to determine whether it is as good as or better than current options. The patient pool is at least several hundred people, providing a widespread view of patient effects and validating the findings.

Placebos, which are inactive substances designed to look like the study medication, can be used in randomized studies to help preserve the integrity of results when evaluating a new treatment. Placebos are rarely used in cancer treatment clinical trials, and Kotila reassures patients who are afraid of placebos that they will still receive treatment when needed.

“It would be unethical to not treat a cancer patient that needs treatment. Patients in these phase III randomized drug trials may get standard of care plus a placebo or standard of care plus the new treatment being tested, but they will always be treated,” she said.

FDA approval

So what happens if you’re part of a clinical trial and the study medication receives FDA approval? Dr. Mark Burkard, a physician-scientist who leads several clinical trials at Carbone, said the study’s sponsor can choose to stop the trial or continue studying long-term effects.

“In most cases, the sponsor will continue the trial to collect additional information about how the drug works, allowing the patient to choose (if they continue),” Burkard said. “Most patients choose to continue the trial. Some choose to stop the trial, if for example, they live far away and can get the same medicine from an oncologist close to home.”

Learning more

Kotila said patients who are considering clinical trials and have more questions can contact the Clinical Trials Navigation Team at (608) 262-0439 or [email protected]. Patients who would like to schedule an appointment or be seen for a clinical trial at UW Carbone can contact the intake team at (608) 262-5223.

NIH, National Cancer Institute, Division of Cancer Treatment and Diagnosis (DCTD)


Screening Visit


A screening visit is a potential participant's chance to meet the DTC team and discuss their options, questions, and concerns with the study team. If you would like to participate in a study, you may also be asked to sign a Screening Consent form and complete screening/baseline examinations. These examinations, tests, or procedures are part of your regular cancer care and should be done by your health care team even if you do not join a study. If you have had them recently, they may not need to be repeated. This will be up to you and your study doctor. These tests/procedures may include:

  • Complete medical history, including prior hormone use
  • Physical examination, including height, weight, blood pressure, pulse, and temperature
  • Standard blood tests (requiring about 1 tablespoon of blood total), which include measurement of your white blood cells, red blood cells, platelets, blood sugar, and electrolytes, and how well your liver and kidneys work
  • Pregnancy test for women who are able to become pregnant
  • EKG to check your heart
  • CT scans of your chest, abdomen, and pelvis to measure your tumor(s); other imaging tests may be done as needed
  • Eye exam, including physical eye examination, history, and vision test
Open access. Published: 28 February 2023

Complex and alternate consent pathways in clinical trials: methodological and ethical challenges encountered by underserved groups and a call to action

Amy M. Russell, Victoria Shepherd, Kerry Woolfall, Bridget Young, Katie Gillies, Anna Volkmer, Mark Jayes, Richard Huxtable, Alexander Perkins, Nurulamin M. Noor, Beverley Nickolls & Julia Wade

Trials, volume 24, Article number: 151 (2023)


Informed consent is considered a fundamental requirement for participation in trials, yet obtaining consent is challenging in a number of populations and settings. This may be because participants have communication or other disabilities, because their capacity to consent fluctuates or they lack capacity, or because, in emergency situations, their medical condition or the urgent nature of the treatment precludes seeking consent from either the participant or a representative. These challenges, and the subsequent complexity of designing and conducting trials where alternative consent pathways are required, contribute to these populations being underserved in research. Recognising and addressing these challenges is essential to support trials involving these populations and ensure that they have an equitable opportunity to participate in, and benefit from, research. Given the complex nature of these challenges, which are encountered by both adults and children, a cross-disciplinary approach is required.

A UK-wide collaboration, a sub-group of the Trial Conduct Working Group in the MRC-NIHR Trial Methodology Research Partnership, was formed to collectively address these challenges. Members are drawn from disciplines including bioethics, qualitative research, trials methodology, healthcare professions, and social sciences. This commentary draws on our collective expertise to identify key populations where particular methodological and ethical challenges around consent are encountered, articulate the specific issues arising in each population, summarise ongoing and completed research, and identify targets for future research. Key populations include people with communication or other disabilities, people whose capacity to consent fluctuates, adults who lack the capacity to consent, and adults and children in emergency and urgent care settings. Work is ongoing by the sub-group to create a database of resources, to update NIHR guidance, and to develop proposals to address identified research gaps.

Collaboration across disciplines, sectors, organisations, and countries is essential if the ethical and methodological challenges surrounding trials involving complex and alternate consent pathways are to be addressed. Explicating these challenges, sharing resources, and identifying gaps for future research is an essential first step. We hope that doing so will serve as a call to action for others seeking ways to address the current consent-based exclusion of underserved populations from trials.


Informed consent is seen as a cornerstone in the ethical conduct of clinical trials. However, in populations or settings where there are challenges to seeking or providing consent, alternative consent arrangements may be required. These challenges may arise due to communication barriers, where a participant’s capacity to provide consent fluctuates over time, where capacity is lost during a trial, or they are deemed to lack the capacity to consent at the outset. These challenges may be particularly pronounced in emergency settings where the urgent nature of the condition and the need for immediate action preclude the ability to seek prior consent for either adults or children. Populations where consent may pose a challenge have historically been excluded from trials and are recognised as being underserved by research as a result [ 1 ]. For example, one in three patients with hip fractures have a concomitant cognitive impairment, yet eight out of ten hip fracture trials exclude this population despite evidence that those with cognitive impairment are likely to experience different outcomes [ 2 ]. Even trials in conditions associated with cognitive impairment frequently exclude people with impaired capacity to consent [ 3 ]. This exclusion of relevant subgroups of patients risks presenting biased estimates of treatment effects [ 4 , 5 ] and limits the ability to provide evidence-based care for these groups.

For many of these populations, research inequity contributes to the health disparities that they already encounter [ 6 ]. For example, adults with intellectual disabilities die on average 10–15 years earlier than those without intellectual disabilities in the UK and the USA [ 7 , 8 ], yet 90% of clinical trials are designed in a way that automatically excludes them from participating [ 9 ]. The importance of widening opportunities for the participation of underserved populations in research has received recognition both in the UK and beyond, resulting in national and international initiatives to improve inclusivity and diversity in the design, conduct, and reporting of clinical trials [ 1 , 10 , 11 , 12 ]. Research funders increasingly require researchers to address issues around inclusivity and representativeness in their funding applications [ 13 ]. However, the challenges of conducting trials where consent is complex, and where consent-based exclusion denies populations the opportunity to participate in and benefit from research, have received less attention [ 14 ].

The ethical and methodological issues surrounding trials involving complex and alternative consent pathways have led to the formation of a new UK multi-institutional collaboration to collectively address some of these challenges. This collaboration forms a sub-group of the Trial Conduct Working Group in the MRC-NIHR Trial Methodology Research Partnership, consisting of members from disciplines including trials methodology, qualitative research, healthcare, bioethics, and social sciences. This paper summarises and discusses contexts where researchers may encounter particular methodological and ethical challenges around consent. The focus is on trials where the process of consent is challenging and alternative consent pathways are required, rather than where the informational content required for consent to be valid is complex [ 15 ], or where the trial design is complex such as a multistage randomised controlled trial [ 16 ].

Drawing on our experiences as an interdisciplinary group of researchers with an interest in complex and alternate consent pathways in trials, we will focus on key populations where consent-based challenges contribute to their exclusion: adults with communication or other disabilities [ 17 ], adults who lack the capacity to consent [ 18 ], adults whose capacity to consent fluctuates or is lost during a trial [ 19 ], and adults and children requiring emergency and urgent care [ 20 ]. The question of alternative consent pathways for children in non-emergency research will not be addressed in this article as it requires specific attention [ 21 ]. For each population, we articulate the challenges around inclusion in trials, summarise current evidence and ongoing work, and identify areas for future research. We hope that this will serve as a cri de cœur for others seeking ways to address the consent-based exclusion of underserved populations from trials.

Trials involving adults with communication, hearing, and sight disabilities

Although most legislation delineating consent processes urges professionals to make adjustments for people with communication, hearing, and visual impairments, these groups may be excluded from research simply because obtaining informed consent is more challenging [ 22 ]. Communication disabilities can comprise a range of difficulties that impact a person’s ability to understand spoken or written information (sounds, words, or sentences) and express themselves verbally or non-verbally (articulate sounds/letters, select words, or use relevant grammar and sentence forms) in spoken, written, or picture form. Difficulty in accessing and comprehending information is one of the most common barriers in consent scenarios across several diagnoses including dementia [ 23 ], stroke [ 24 ], and brain injury [ 25 ], as well as developmental disorders such as autism and learning/intellectual disabilities [ 26 ]. Other difficulties that can impede a person’s ability to access spoken or written information include hearing or visual impairments, which may or may not be associated with an underlying condition. The use of British Sign Language interpreters or translation of written materials into other languages, including Braille, is extremely important for those with hearing or visual impairment [ 27 ]. Beyond this, the heterogeneity amongst people with communication disabilities requires adaptations to be tailored to individual needs based on knowledge of the person’s communication strengths and difficulties. People with stroke-related language impairments (aphasia), for example, may benefit from the information being presented using active language, shorter sentences, or written keywords [ 28 ].

The challenges

Making changes to support communication needs is complex. Some researchers find current guidance such as the Mental Capacity Act Code of Practice [ 27 ] and Health Research Authority guidance [ 29 ] difficult to interpret and implement [ 30 , 31 ]. Researchers acknowledge a lack of skills, knowledge, and confidence in being able to adapt their language and communication to meet the needs of people with communication disabilities [ 31 ]. Other barriers identified include the lack of specific training, tools, time and access to ethically approved materials [ 31 , 32 , 33 ].

There is limited evidence relating to the inclusion of people with communication disabilities in the informed consent process. This is in part because people with communication disabilities often have been excluded from study recruitment processes [ 17 , 30 , 31 , 33 , 34 , 35 ], and because studies that have included them have tended not to report the recruitment and consent methods used [ 32 ].

Current research and guidance

People with communication disabilities may not be included in the informed consent process for different reasons: this group is frequently defined as ineligible for inclusion in studies per se, solely due to their communication disabilities [ 31 ]; even where included, researchers may consult proxies (e.g. family members) because they assume that people with communication disabilities lack the mental capacity to provide informed consent [ 17 , 31 , 33 ]; researchers may find the consent process for this group too challenging and time-consuming [ 31 ]. Reluctance to include people with communication disabilities in the consent process may follow challenges involving people with significant communication disabilities in patient and public involvement and engagement activity, and current involvement guidance does not provide specific information about how to include this group [ 36 ]. Recent UK studies have helped to contextualise these findings, by examining the legal, policy, and governance frameworks that apply to the recruitment of people with communication disabilities [ 30 , 37 , 38 ]. Whilst not specific to trials, these frameworks provide guidance for facilitating the inclusion of this group in the informed consent process. This includes recommendations to co-produce information materials with people with communication disabilities and to adapt communication environments and processes to improve their accessibility. These recommendations are supported by research that has developed and tested communication methods to support decision-making during the informed consent process for people with post-stroke aphasia [ 22 , 32 , 39 ] and intellectual disability [ 33 , 40 ].

In recent examples, researchers have been able to create and use accessible consent materials and implement these within stroke trials [ 41 , 42 , 43 ] using practical, evidence-based resources [ 22 , 44 , 45 ]. These have been co-produced to ensure the language is accessible, readable, and accompanied by transparent visual representations and alternative mediums (video for example). Furthermore, the recent ASSENT [ 46 ] and CONSULT [ 47 ] projects have developed inclusive consent guidance and resources to aid researchers.

Future research

More research is required to explore the inclusion of people with communication disabilities in the informed consent process in trials, in terms of current practice and professional and participant experience. Most existing research appears to have focused on two main groups: people with post-stroke aphasia and people with intellectual disabilities. Future research should explore the experiences and needs of people with different types of communication disabilities, for example, people living with dementia or with other progressive neurological conditions.

Further research is required to develop and evaluate additional tools, resources, and training interventions to support researchers to work with people with communication disabilities more easily and effectively during the informed consent process [ 37 ]. Evaluation should include the exploration of usability, acceptability to professionals and participants, and cost-effectiveness. In addition, studies should explore how researchers can form successful and equitable collaborations with people with communication disabilities as part of trial public involvement and engagement activity in order to co-produce inclusive consent processes and materials [ 48 ].

Trials involving adults whose capacity fluctuates or is lost during a trial

Informed consent can only be obtained from individuals who have the capacity to give consent. Fluctuating capacity can refer to situations where a person’s condition is cyclical (moving from an acute phase to a recovery phase) [ 49 ] or where their capacity is influenced by other factors including but not limited to health or environment [ 50 , 51 ]. It can also relate to capacity that is task-specific, where an individual may have the capacity to consent to certain aspects of a trial but may struggle to give informed consent to all aspects or understand long-term follow-up processes.

Fluctuating capacity raises three main challenges: (1) the potential exclusion of those believed to have fluctuating capacity where no clear assessment process is in place, (2) the need for a process of consent-taking at each data collection time point, and (3) the need to incorporate planning for a loss of capacity, temporary or otherwise, when creating trial processes, patient information and consent materials. Without forward planning, unanticipated lost capacity during data collection may lead to withdrawal and/or missing data and the unnecessary exclusion of participants [ 52 ].

Capacity is often framed (and commonly understood and implemented by recruiting staff) in binary terms as something a person has or does not have [ 53 , 54 ], which has been critiqued in certain populations and cultural contexts [ 55 ]. In England and Wales, the Mental Capacity Act 2005 makes it clear that capacity is task-specific. Once assessed, capacity is not an end point but an ongoing process of engagement with a participant.

An intention to carry out capacity assessments is often alluded to in trial protocols without further detail on why certain individuals will be assessed, who will conduct assessments, and what criteria they will use [ 9 ]. The Mental Capacity Act 2005 and the Code of Practice (2007) exist to protect individuals, not to impede their right to participate in research, something researchers should acknowledge. However, the lack of practical guidance in these documents results in uncertainty about how researchers should best assess capacity, which can lead to inconsistent approaches to assessment. Capacity should be assumed, and capacity assessments should take place only after the individual has been given clear information appropriate to their needs and a question has been raised about their ability to provide informed consent. This again raises challenges for trials where standard information is required that can be complex, lengthy, and difficult to adapt to the needs of different groups (for example, people with communication disabilities) [ 56 , 57 ].

Suggestions for alternative forms of consent that may support those whose capacity fluctuates have been developed by researchers working with specific populations [ 58 ], including process consent in dementia research [ 59 ]. These distinguish between time- and task-specific capacity and the capacity to take a longitudinal view, implying an understanding of future risks and benefits [ 49 ]. However, research to date often focuses on distinct populations, e.g. people receiving palliative care [ 60 ], people living with dementia, and stroke survivors. Attention to managing fluctuations in capacity is less often seen in population-wide trials. To reduce blanket exclusions of certain populations, and the misuse of lack of capacity as an exclusion criterion, further research leading to clear guidance is required.

Standardised tools for capacity assessment have been developed, but there is no gold standard for the assessment of capacity in clinical practice or in research, nor is there agreement that any one tool can sufficiently capture the complexity of capacity assessment [ 61 ]. Current Mental Capacity Act-compliant tools remain difficult to adapt to the heterogeneity of the populations whose capacity fluctuates [ 62 , 63 , 64 ]. Capacity assessment processes are also often employed only in trials that anticipate that their target population will require them.

Consent needs to be understood as task- and time-specific and as requiring accessible information. Research is needed to generate guidance on what to do if capacity is lost during follow-up, grounded in a defined process for establishing the wishes of participants at the initial consent stage. More evidence is required on the best methods for capacity assessment and on how to support researchers in assessing capacity. Trials need protocols that prevent the exclusion of those whose capacity to consent may fluctuate and that set out how to manage data collection when it does.

Trials involving adults who lack the capacity to consent

Even with support, some people will be unable to provide their own consent to take part in a trial. The exclusion of adults who lack the capacity to consent has been widely documented [ 18 , 65 , 66 ] and is due to a range of intersecting methodological and systemic barriers to their inclusion [ 34 ]. Specific consent-based challenges include the complexity of the patchwork of legal frameworks that govern trials involving adults lacking capacity both within the UK [ 67 ] and internationally [ 68 ], and the uncertainties of applying them in practice [ 69 ]. In the UK, clinical trials of an investigational medicinal product involving adults lacking capacity are governed by the Medicines for Human Use (Clinical Trials) Regulations [ 70 ], with other types of trials covered by mental capacity legislation such as the Mental Capacity Act in England and Wales [ 71 ]. In both cases, there are provisions for an alternative decision-maker to be involved in enrolment decisions, usually a family member or close friend, or someone acting in a professional capacity who is not involved in the research if no one is able or willing to act in a personal capacity [ 70 , 71 ]. For clinical trials, the alternative decision-maker is termed a legal representative and provides consent based on the person’s presumed will [ 70 ], and for other types of research, they act as a consultee and are asked to provide advice about participation based on the person’s wishes and preferences [ 29 ]. However, little guidance is available to families and health and social care professionals about their role in making decisions about trial participation, nor the legal basis for their decision [ 72 ].

In part because of this legal complexity, a lack of knowledge about research involving adults who lack capacity, and generally paternalistic attitudes, researchers and health and social care professionals may engage in gatekeeping practices towards this population [ 73 , 74 ]. Involving health and social care professionals as consultees or legal representatives relies on their having the time and willingness to be involved. Some may be concerned about being unable to determine or represent the wishes and preferences a person may hold and so may decline to become involved [ 38 ]. Further challenges arise from the difficulties of identifying and contacting consultees and legal representatives [ 75 ]. Even when they have been identified, family members are less likely to agree to research participation on the person’s behalf than patients themselves [ 76 ]. This may be due to families’ difficulty in knowing what the person’s wishes and preferences about participation would be [ 77 ]. People rarely discuss their research preferences in the event that they might lose capacity, and there is currently no mechanism in the UK for prospectively appointing a consultee or legal representative to make decisions about research [ 78 ].

Procedures for identifying and approaching consultees and legal representatives are among the issues that research ethics committees (RECs) consider when reviewing applications for trials involving adults who lack capacity, alongside arrangements for assessing capacity to consent where required [ 29 ]. However, RECs’ resistance to the inclusion of adults who lack capacity in a trial, and their scrutiny of whether there is sufficient justification for doing so, is cited as one of the greatest barriers to inclusion [ 79 , 80 ]. RECs do not interpret the legal frameworks consistently or, at times, correctly, with inaccurate terminology and requirements being cited [ 37 , 72 , 81 ]. There have been calls for greater explicitness and accuracy when applications for ethical review of these studies are submitted and reviewed [ 72 , 81 ], and for incorporating more adaptations and accommodations into the recruitment process, such as ensuring information is cognitively accessible [ 37 ].

Recent research has identified a number of barriers and facilitators to involving adults lacking capacity to consent in trials [ 19 , 34 ], leading to the creation of guidance, for example on recruiting adults with impaired mental capacity into research at the end of life [ 19 ]. Recent initiatives to address the inclusion of underserved groups in research more broadly, such as the NIHR INCLUDE project [ 1 ], have led to the development of the INCLUDE Impaired Capacity to Consent Framework, a tool to help researchers design and conduct trials that are more inclusive of people with impaired capacity to consent [ 82 ].

Other studies have focused on the role of personal consultees and legal representatives. This includes a study that found that making ethically complex decisions about research on behalf of someone else can be challenging for many family members, with some experiencing a decisional and emotional burden as a result [ 83 ]. Current work includes the development of the first decision aid for families making decisions about research on behalf of someone who lacks the capacity to consent [ 84 ] which is currently being evaluated as a ‘Study Within a Trial’ (or ‘SWAT’) (CONSULT) [ 85 ] and the development of resources to help researchers [ 47 ].

Despite this ongoing work, a more sustained effort is needed to ensure that these groups have an equitable opportunity to participate in trials. More research is needed into how researchers can design more inclusive trials, into the involvement of health and social care professionals as nominated consultees, and into the use of professional legal representatives when necessary. Unlike questions about why other underserved groups have been excluded from research, the legal position regarding people who lack capacity is that their inclusion requires justification [ 29 ]. Clearer guidance is required on how this justification is understood and interpreted.

A number of recommendations for further research at the policy and legislative level have previously been made, including proposals by the Nuffield Council on Bioethics [ 86 ] that consideration be given to extending the role of the welfare attorney in England and Wales to include decisions about research, both within the Mental Capacity Act [ 71 ] and the Clinical Trials Regulations [ 70 ]. There is also uncertainty about the role of Lasting Power of Attorney in decisions about research participation [ 78 ], with families wanting greater support and guidance when making such decisions [ 83 ].

Adult and paediatric emergency and urgent care trials

Trials involving adults in emergency situations may encounter additional complexities. The challenges of obtaining consent from patients who are suddenly unable to communicate or convey their own wishes are encountered in trial contexts ranging from intrapartum [ 87 ] and acute coronary syndrome [ 88 ] to acute stroke where it has been described as the rate-limiting step in treatment RCTs [ 89 ]. Emergency and urgent care trials are conducted in a range of settings including prehospital [ 90 ] and critical care [ 91 ].

Historically, children have not received evidence-based healthcare in emergency and critical care settings due to their exclusion from trials arising from similar practical and ethical issues to those encountered in adult trials in these time-critical settings [ 92 ]. In order to increase the chances of saving a child’s life, treatments need to be given without delay, so there is no time to seek informed consent from parents or legal representatives. Even if there is a brief window of opportunity for recruitment discussions, parents may not be present or may be highly distressed and lack the capacity to make an informed decision about the use of their child’s information and potential ongoing involvement [ 93 ].

Emergency research takes place when treatment needs to be given urgently [ 94 ] and recruitment cannot be delayed until the patient either regains capacity or a consultee or legal representative can be found [ 95 ]. In such circumstances, research without prior consent (RWPC, also referred to as ‘deferred consent’) is permissible in many jurisdictions, including the USA, Canada, parts of Australasia, and the UK, through both the Mental Capacity Act [ 71 ] and the 2006 Amendment to the 2004 EU Clinical Trials Regulations [ 96 ]. However, the provisions for RWPC in emergency research vary both between and within countries [ 97 ]. Within the UK, for example, the law in Scotland provides no ‘exemptions’ or alternatives for the involvement of adults unable to consent for themselves in clinical trials in emergency situations [ 94 ]. This meant that trials such as RECOVERY-RS [ 98 ], which compared respiratory strategies for patients with COVID-19 respiratory failure, could not recruit Scottish patients. Similarly, the UK-REBOA trial in life-threatening torso haemorrhage was unable to recruit in Scotland despite being coordinated from there [ 99 ].

In recognition of the need to conduct these vital trials with children, various legal frameworks for paediatric trials have also been amended nationally and internationally, enabling research to be conducted without prior consent. In 2008, UK legislation was amended to allow research without prior consent in such circumstances [ 100 ], yet there was a lack of knowledge about how and when research teams should broach these research discussions with parents in a way that avoided further burdening families. There was also a need for guidance to inform what should happen when a child dies after trial enrolment without parents’ prior knowledge or consent. Despite the 2008 legislation that enabled much-needed research on emergency treatments for children, there was hesitancy amongst clinical and research communities about conducting trials involving critically ill children [ 101 ].

The use of RWPC in both adult and child populations is ethically complex, with diverse views about the acceptability of enrolling acutely ill patients without consent [ 102 ]. There are particular challenges around gaining ethical approval for the use of RWPC in borderline or ‘middle ground’ cases where a patient may be conscious or coherent, yet their condition or the lack of time limits the possibility of informed consent [ 103 ]. These, and other challenges [ 34 ], can lead to consent-based recruitment bias which means that patients enrolled in RCTs may not necessarily be representative of critically ill patients in clinical practice [ 20 , 104 ]. This has the potential to cause harm by obscuring any treatment effect [ 105 ].

A recent study in the UK (Perspectives Study) explored consent and recruitment in adult critical care research [ 106 ] and identified strategies to enhance consent and recruitment processes. This led to the development of good practice guidance and other resources including an accessible animation for members of the public [ 107 ]. An animation aimed at adults enrolled in emergency care research which describes RWPC was developed by another research team (CoMMiTED Study) [ 108 ]. Systematic reviews have explored stakeholders’ views about the acceptability of RWPC [ 109 ], including ethnic minority populations’ views [ 110 ]. Such studies have found that RWPC is generally acceptable to patients, families, and practitioners but highlighted the importance of contextual factors.

The CATheter infections in Children Trial (CATCH) was the first UK trial to include research without prior consent when comparing the effectiveness of different types of central venous catheters to prevent bloodstream infections in children. An embedded study (called CONNECT [ 111 ]) explored parent and practitioner views and experiences of recruitment and consent and found that parents were momentarily shocked or surprised when they were informed that their child had already been entered into CATCH without their consent [ 101 ]. However, initial concerns were often quickly addressed by practitioner explanations about why it had not been possible to seek consent before enrolment and how the trial interventions were already used in clinical care. To prevent burden and assist decision-making, parents stated it was important for the research staff to assess the appropriate timing of research discussions after a child’s enrolment in a trial. They suggested that the researcher should consult with the bedside nurse about appropriate timing and only approach parents after the initial emergency situation has passed, when a child’s condition has stabilised [ 101 ]. The CONNECT study used these findings alongside wider research, involving practitioners, families [ 112 ], and children [ 113 ] with experience in emergency care, to develop guidance for future paediatric and neonatal trials [ 114 ]. Since its publication in 2015, CONNECT guidance has informed the successful conduct of five studies. This includes the first clinical trial of a drug for long-lasting seizures (EcLiPSE trial), which successfully recruited to time and target with a 93% consent rate and led to changes in clinical guidelines for children in status epilepticus [ 115 ].

Research into consent in emergency settings is high on the trials methodological research agenda and was identified as a research priority by Clinical Trials Units in a UK survey [ 116 ]. Areas for future research involving adults identified by the Perspectives Study included the need for evidence-based guidance on the procedures for professionals acting as a consultee or legal representative and identifying strategies to communicate with relatives of critically ill patients about research, including where a participant enrolled without prior consent subsequently dies [ 106 ]. The NIHR RfPB-funded study ‘ENHANCE’ will begin in 2023 and aims to address this gap in knowledge through the involvement of bereaved families and other key stakeholders.

Ongoing work in paediatric populations aims to assess and refine CONNECT guidance in low- and middle-income countries. Further work is needed to explore views on research without prior consent in underserved populations, such as parents who do not speak English and who are often excluded from qualitative studies and guidance development.

The need for more guidance for RECs reviewing emergency and urgent care trials, and for support with consent processes for patients and members of the public who join research teams and advise on studies, has also been highlighted [ 106 , 109 , 117 ].

Conclusions

The need for alternative consent processes that address the inadvertent exclusion of certain populations has been detailed in this article. Drives for trial efficiency and a lack of funding or time for adaptation often result in the exclusion of certain populations. However, health research will continue to exacerbate inequities in health outcomes until trials become more inclusive of underserved populations. Alongside methodological innovation, further research is required to establish good practice, develop evidence-based guidance, and support skill acquisition in the global research workforce. Our key recommendations for future research are summarised in Table 1. Importantly, this work should be done in collaboration with people with lived experience and those who care for them.

The populations detailed above are not the only areas where consent is complex or alternative pathways are required. Some trials have complex consent processes, not because of their recruited population, but due to an innovative treatment or trial design, such as cluster RCTs and Trials within Cohorts (TwiCs) [ 118 ]. As we progress with the innovation of trial design, we must progress methodological innovation in consent at the same pace or risk leaving certain populations behind. Many of the methodological lessons learnt and proposed adjustments, such as the routine provision of accessible information, could also benefit other underserved groups including those with lower literacy levels and English language proficiency, as well as the wider population of potential research participants.

The TMRP Complex and Alternate Consent Pathways group is driving forward this research agenda in the UK and is open to new members to share methodological learning. We have updated the NIHR Clinical Trials Toolkit [ 119 ] to reflect the most up-to-date research in this area. However, as this commentary has shown, current guidance remains limited in its utility and requires greater clarity and practical applicability for researchers, participants, family members, and ethical review committees. We are keen to use the momentum of the group to identify others with an interest in this area in order to collaboratively develop the research agenda and address the consent-based ethical and methodological challenges in trials. Many of these issues are not restricted to the UK but are encountered internationally, which raises additional challenges when conducting multi-national trials [ 58 , 97 , 120 ]. We encourage researchers from other regions and jurisdictions to share their experiences and ongoing research programmes and to contribute to developing an international research agenda to address these global challenges.

Availability of data and materials

Not applicable as no dataset was generated.

References

Witham MD, Anderson E, Carroll C, Dark PM, Down K, Hall AS, et al. Developing a roadmap to improve trial delivery for under-served groups: results from a UK multi-stakeholder process. Trials. 2020;21:694.

Mundi S, Chaudhry H, Bhandari M. Systematic review on the inclusion of patients with cognitive impairment in hip fracture trials: a missed opportunity? Can J Surg. 2014;57:E141–5.

Taylor JS, DeMers SM, Vig EK, Borson S. The disappearing subject: exclusion of people with cognitive impairment and dementia from geriatrics research. J Am Geriatr Soc. 2012;60:413–9.

Jensen JS, Reiter-Theil S, Celio DA, Jakob M, Vach W, Saxer FJ. Handling of informed consent and patient inclusion in research with geriatric trauma patients – a matter of protection or disrespect? Clin Interv Aging. 2019;14:321–34.

Thomalla G, Boutitie F, Fiebach JB, Simonsen CZ, Nighoghossian N, Pedraza S, et al. Effect of informed consent on patient characteristics in a stroke thrombolysis trial. Neurology. 2017;89:1400–7.

Vassallo M. Research and reducing inequity in healthcare. Age Ageing. 2019;48:474–5.

Landes SD, Stevens JD, Turk MA. Cause of death in adults with intellectual disability in the United States. J Intellect Disabil Res. 2021;65 Part 1:47–59.

Learning Disabilities Mortality Review (LeDeR) report 2020. Bristol: University of Bristol Norah Fry Centre for Disability Studies; 2020. https://www.bristol.ac.uk/media-library/sites/sps/leder/LeDeR%20programme%20annual%20report%2013.05.2021%20FINAL.pdf .

Feldman MA, Bosett J, Collet C, Burnham-Riosa P. Where are persons with intellectual disabilities in medical research? A survey of published clinical trials. J Intellect Disabil Res. 2014;58:800–9.

Striving for diversity in research studies. N Engl J Med. 2021;385:1429–30.

Spong CY, Bianchi DW. Improving public health requires inclusion of underrepresented populations in research. JAMA. 2018;319:337.

Center for Drug Evaluation and Research. Enhancing the diversity of clinical trial populations — eligibility criteria, enrollment practices, and trial designs: guidance for industry. FDA; 2020. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/enhancing-diversity-clinical-trial-populations-eligibility-criteria-enrollment-practices-and-trial .

National Institute for Health Research. Best research for best health: the next chapter. 2021.

Treweek S, Pitkethly M, Cook J, Fraser C, Mitchell E, Sullivan F, et al. Strategies to improve recruitment to randomised trials. Cochrane Database Syst Rev. 2018. https://doi.org/10.1002/14651858.MR000013.pub6 .

Pietrzykowski T, Smilowska K. The reality of informed consent: empirical studies on patient comprehension—systematic review. Trials. 2021;22:57.

Nathe JM, Krakow EF. The challenges of informed consent in high-stakes, randomized oncology trials: a systematic review. MDM Policy Pract. 2019;4:2381468319840322.

Brady MC, Fredrick A, Williams B. People with aphasia: capacity to consent, research participation and intervention inequalities. Int J Stroke. 2013;8:193–6.

Shepherd V, Wood F, Griffith R, Sheehan M, Hood K. Protection by exclusion? The (lack of) inclusion of adults who lack capacity to consent to research in clinical trials in the UK. Trials. 2019. https://doi.org/10.1186/s13063-019-3603-1 .

Evans CJ, Yorganci E, Lewis P, Koffman J, Stone K, Tunnard I, et al. Processes of consent in research for adults with impaired mental capacity nearing the end of life: systematic review and transparent expert consultation (MORECare_Capacity statement). BMC Med. 2020;18:221.

Holcomb JB, Weiskopf R, Champion H, Gould SA, Sauer RM, Brasel K, et al. Challenges to effective research in acute trauma resuscitation: consent and endpoints. Shock. 2011;35:107–13.

Ward RM, Benjamin DK, Davis JM, Gorman RL, Kauffman R, Kearns GL, et al. The need for pediatric drug development. J Pediatr. 2018;192:13–21.

Jayes M, Palmer R. Initial evaluation of the Consent Support Tool: a structured procedure to facilitate the inclusion and engagement of people with aphasia in the informed consent process. Int J Speech Lang Pathol. 2014;16:159–68.

Moye J, Marson DC. Assessment of decision-making capacity in older adults: an emerging area of practice and research. J Gerontol Series B. 2007;62:P3–11.

Kagan A, Kimelman MDZ. Informed consent in aphasia research: myth or reality. Clin Aphasiol. 1995;23:65–75.

Triebel KL, Martin RC, Novack TA, Dreer L, Turner C, Pritchard PR, et al. Treatment consent capacity in patients with traumatic brain injury across a range of injury severity. Neurology. 2012;78:1472–8.

Hamilton J, Ingham B, McKinnon I, Parr JR, Tam LY-C, Couteur AL. Mental capacity to consent to research? Experiences of consenting adults with intellectual disabilities and/or autism to research. Br J Learn Disabil. 2017;45:230–7.

Department of Constitutional Affairs. Mental Capacity Act 2005: code of practice. London: The Stationery Office; 2007. https://doi.org/10.1108/eb003163 .

Zuscak SJ, Peisah C, Ferguson A. A collaborative approach to supporting communication in the assessment of decision-making capacity. Disabil Rehabil. 2016;38:1107–14.

Health Research Authority. Health Research Authority: Mental Capacity Act. Health Research Authority. https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/mental-capacity-act/ . Accessed 16 Jul 2021.

Ryan H, Heywood R, Jimoh O, Killett A, Langdon PE, Shiggins C, et al. Inclusion under the Mental Capacity Act (2005): a review of research policy guidance and governance structures in England and Wales. Health Expect. 2021;24:152–64.

Jayes MJ, Palmer RL. Stroke research staff’s experiences of seeking consent from people with communication difficulties: results of a national online survey. Top Stroke Rehabil. 2014;21:443–51.

Penn C, Frankel T, Watermeyer J, Müller M. Informed consent and aphasia: evidence of pitfalls in the process. Aphasiology. 2009;23:3–32.

Taua C, Neville C, Hepworth J. Research participation by people with intellectual disability and mental health issues: an examination of the processes of consent. Int J Ment Health Nurs. 2014;23:513–24.

Shepherd V. An under-represented and underserved population in trials: methodological, structural, and systemic barriers to the inclusion of adults lacking capacity to consent. Trials. 2020;21:445.

Townend E, Brady M, McLaughlan K. A systematic evaluation of the adaptation of depression diagnostic methods for stroke survivors who have aphasia. Stroke. 2007;38:3076–83.

Jayes M, Moulam L, Meredith S, Whittle H, Lynch Y, Goldbart J, et al. Making public involvement in research more inclusive of people with complex speech and motor disorders: the I-ASC Project. Qual Health Res. 2021;31:1260–74.

Jimoh OF, Ryan H, Killett A, Shiggins C, Langdon PE, Heywood R, et al. A systematic review and narrative synthesis of the research provisions under the Mental Capacity Act (2005) in England and Wales: recruitment of adults with capacity and communication difficulties. PLoS One. 2021;16:e0256697.

Heywood R, Ryan H, Killett A, Langdon P, Plenderleith Y, Shiggins C, et al. Lost voices in research: exposing the gaps in the Mental Capacity Act 2005. Med Law Int. 2019;19:81–112.

Stein J, Brady Wagner LC. Is informed consent a “yes or no” response? Enhancing the shared decision-making process for persons with aphasia. Top Stroke Rehabil. 2006;13:42–6.

Cameron L, Murphy J. Obtaining consent to participate in research: the issues involved in including people with a range of learning and communication disabilities. Br J Learn Disabil. 2007;35:113–20.

Thomas SA, Drummond AE, Lincoln NB, Palmer RL, das Nair R, Latimer NR, et al. Behavioural activation therapy for post-stroke depression: the BEADS feasibility RCT. Health Technol Assess. 2019;23:1–176.

Hilari K, Behn N, Marshall J, Simpson A, Thomas S, Northcott S, et al. Adjustment with aphasia after stroke: study protocol for a pilot feasibility randomised controlled trial for SUpporting wellbeing through PEeR Befriending (SUPERB). Pilot Feasibility Stud. 2019;5:14.

Palmer R, Dimairo M, Cooper C, Enderby P, Brady M, Bowen A, et al. Self-managed, computerised speech and language therapy for patients with chronic aphasia post-stroke compared with usual care or attention control (Big CACTUS): a multicentre, single-blinded, randomised controlled trial. Lancet Neurol. 2019;18:821–33.

Consent Support Tool, J&R Press. https://www.jr-press.co.uk/consent-support-tool.html . Accessed 13 May 2022.

Pearl G, Cruice M. Facilitating the involvement of people with aphasia in stroke research by developing communicatively accessible research resources. Top Lang Disord. 2017;37:67–84.

ASSENT project. https://www.uea.ac.uk/groups-and-centres/assent . Accessed 13 May 2022.

Capacity and consent to research. CONSULT. https://www.capacityconsentresearch.com/ . Accessed 27 Sep 2021.

Volkmer A, Broomfield K. Seldom heard voices in service user involvement: J&R Press; 2022.

Griffith R. Assessing capacity in cases of fluctuating decision-making ability. Br J Nurs. 2020;29. https://doi.org/10.12968/bjon.2020.29.15.908 .

Stroup S, Appelbaum P. The subject advocate: protecting the interests of participants with fluctuating decisionmaking capacity. IRB. 2003;25:9–11.

Bracken-Roche D, Bell E, Racine E. The “vulnerability” of psychiatric research participants: why this research ethics concept needs to be revisited. Can J Psychiatr. 2016;61:335–9.

Crow G, Wiles R, Heath S, Charles V. Research ethics and data quality: the implications of informed consent. Int J Soc Res Methodol. 2006;9:83–95.

Clough B. What about us? A case for legal recognition of interdependence in informal care relationships. J Soc Welf Fam Law. 2014;36:129–48.

Clough B. Disability and vulnerability: challenging the capacity/incapacity binary. Soc Policy Soc. 2017;16:469–81.

Akpa-Inyang F, Chima SC. South African traditional values and beliefs regarding informed consent and limitations of the principle of respect for autonomy in African communities: a cross-cultural qualitative study. BMC Med Ethics. 2021;22:111.

Isaacs T, Murdoch J, Demjén Z, Stevenson F. Examining the language demands of informed consent documents in patient recruitment to cancer trials using tools from corpus and computational linguistics. Health (London). 2020. https://doi.org/10.1177/1363459320963431 .

Duong Q, Mandrekar SJ, Winham SJ, Cook K, Jatoi A, Le-Rademacher JG. Understanding verbosity: funding source and the length of consent forms for cancer clinical trials. J Cancer Educ. 2021;36:1248–52.

Lindley RI, Kane I, Cohen G, Sandercock PA. Factors influencing the use of different methods of consent in a randomized acute stroke trial: the Third International Stroke Trial (IST-3). Int J Stroke. 2021. https://doi.org/10.1177/17474930211037123 .

Dewing J. Participatory research: a method for process consent with persons who have dementia. Dementia. 2007;6:11–25.

Casarett DJ, Karlawish JH. Are special ethical guidelines needed for palliative care research? J Pain Symptom Manag. 2000;20:130–9.

Pennington C, Davey K, ter Meulen R, Coulthard E, Kehoe PG. Tools for testing decision-making capacity in dementia. Age Ageing. 2018;47:778–84.

Kim SYH, Caine ED, Currier GW, Leibovici A, Ryan JM. Assessing the competence of persons with Alzheimer’s disease in providing informed consent for participation in research. AJP. 2001;158:712–7.

Casarett DJ. Assessing decision-making capacity in the setting of palliative care research. J Pain Symptom Manag. 2003;25:S6–13.

Dunn LB, Nowrangi MA, Be M, Palmer BW, Jeste DV, Saks ER. Assessing decisional capacity for clinical research or treatment: a review of instruments. Am J Psychiatry. 2006;163(8):1323–34. https://doi.org/10.1176/ajp.2006.163.8.1323 .

Trivedi RB, Humphreys K. Participant exclusion criteria in treatment research on neurological disorders: are unrepresentative study samples problematic? Contemp Clin Trials. 2015;44:20–5.

Sheehan KJ, Fitzgerald L, Hatherley S, Potter C, Ayis S, Martin FC, et al. Inequity in rehabilitation interventions after hip fracture: a systematic review. Age Ageing. 2019;48:489–97.

Shepherd V. Research involving adults lacking capacity to consent: the impact of research regulation on “evidence biased” medicine. BMC Med Ethics. 2016;17:8.

Tridente A, Holloway PAH, Hutton P, Gordon AC, Mills GH, et al. Methodological challenges in European ethics approvals for a genetic epidemiology study in critically ill patients: the GenOSept experience. BMC Med Ethics. 2019;20. https://doi.org/10.1186/s12910-019-0370-1 .

Fletcher JR, Lee K, Snowden S. Uncertainties when applying the Mental Capacity Act in dementia research: a call for researcher experiences. Ethics Soc Welfare. 2019;13:183–97.

The Medicines for Human Use (Clinical Trials) Regulations 2004. SI No. 1031. https://www.legislation.gov.uk/uksi/2004/1031/contents/ .

Mental Capacity Act 2005. London: HMSO. https://www.legislation.gov.uk/ukpga/2005/9/contents .

Shepherd V, Wood F, Griffith R, Sheehan M, Hood K. Research involving adults lacking capacity to consent: a content analysis of participant information sheets for consultees and legal representatives in England and Wales. Trials. 2019;20:233.

Bravo G, Wildeman S, Dubois M-FF, Kim SY, Cohen C, Graham J, et al. Substitute consent practices in the face of uncertainty: a survey of Canadian researchers in aging. Int Psychogeriatr. 2013;25:1821–30.

Shepherd V, Griffith R, Sheehan M, Wood F, Hood K. Healthcare professionals’ understanding of the legislation governing research involving adults lacking mental capacity in England and Wales: a national survey. J Med Ethics. 2018. https://doi.org/10.1136/medethics-2017-104722 .

Shepherd V, Davies J. Conducting a randomized controlled trial in care homes: the challenges of recruiting residents who lack capacity to consent. SAGE Research Methods Cases: Medicine and Health. 2020. https://doi.org/10.4135/9781529726626 .

Mason S, Barrow H, Phillips A, Eddison G, Nelson A, Cullum N, et al. Brief report on the experience of using proxy consent for incapacitated adults. J Med Ethics. 2006;32:61–2.

Shepherd V, Sheehan M, Hood K, Griffith R, Wood F. Constructing authentic decisions: proxy decision-making for research involving adults who lack capacity to consent. J Med Ethics. 2020. https://doi.org/10.1136/medethics-2019-106042 .

Shepherd V, Griffith R, Hood K, Sheehan M, Wood F. “There’s more to life than money and health”: family caregivers’ views on the role of Power of Attorney in proxy decisions about research participation for people living with dementia. Dementia (London). 2019. https://doi.org/10.1177/1471301219884426 .

Head MG, Walker SL, Nalabanda A, Bostock J, Cassell JA. Researching scabies outbreaks among people in residential care and lacking capacity to consent: a case study. Public Health Ethics. 2015;10:90–5.

Griffiths S, Manger L, Chapman R, Weston L, Sherriff I, Quinn C, et al. Letter on “Protection by exclusion? The (lack of) inclusion of adults who lack capacity to consent to research in clinical trials in the UK”. Trials. 2020;21:104. https://doi.org/10.1186/s13063-020-4054-4 .

Dixon-Woods M, Angell EL. Research involving adults who lack capacity: how have research ethics committees interpreted the requirements? J Med Ethics. 2009;35:377–81.

Implementation of the ‘INCLUDE Impaired Capacity to Consent Framework’ for researchers. Cardiff University. https://www.cardiff.ac.uk/centre-for-trials-research/research/studies-and-trials/view/implementation-of-the-include-impaired-capacity-to-consent-framework-for-researchers . Accessed 9 Aug 2022.

Shepherd V, Hood K, Sheehan M, Griffith R, Wood F. ‘It’s a tough decision’: a qualitative study of proxy decision-making for research involving adults who lack capacity to consent in UK. Age and Ageing. 2019;48(6):903–9. https://doi.org/10.1093/ageing/afz115 .

Shepherd V, Wood F, Griffith R, Sheehan M, Hood K. Development of a decision support intervention for family members of adults who lack capacity to consent to trials. BMC Med Inform Decis Mak. 2021;21:30.

Shepherd V, Wood F, Gillies K, Martin A, O’Connell A, Hood K. Feasibility, effectiveness and costs of a decision support intervention for consultees and legal representatives of adults lacking capacity to consent (CONSULT): protocol for a randomised study within a trial. 2022;23:957. https://doi.org/10.1186/s13063-022-06887-5 .

Nuffield Council on Bioethics. Dementia: ethical issues. 2009:172. https://www.nuffieldbioethics.org/wp-content/uploads/2014/07/Dementia-short-guide.pdf .

Vernon G, Alfirevic Z, Weeks A. Issues of informed consent for intrapartum trials: a suggested consent pathway from the experience of the Release trial [ISRCTN13204258]. Trials. 2006;7:13.

Iwanowski P, Budaj A, Członkowska A, Wąsek W, Kozłowska-Boszko B, Olędzka U, et al. Informed consent for clinical trials in acute coronary syndromes and stroke following the European Clinical Trials Directive: investigators’ experiences and attitudes. Trials. 2008;9:45.

Rose D, Kasner S. Informed consent: the rate-limiting step in acute stroke trials. Front Neurol. 2011;2. https://doi.org/10.3389/fneur.2011.00065 .

Armstrong S, Langlois A, Siriwardena N, Quinn T. Ethical considerations in prehospital ambulance based research: qualitative interview study of expert informants. BMC Med Ethics. 2019;20:88.

Ecarnot F, Quenot J-P, Besch G, Piton G. Ethical challenges involved in obtaining consent for research from patients hospitalized in the intensive care unit. Ann Transl Med. 2017;5(Suppl 4):S41.

Kanthimathinathan HK, Scholefield BR. Dilemmas in undertaking research in paediatric intensive care. Arch Dis Child. 2014;99:1043–9.

Maitland K, Molyneux S, Boga M, Kiguli S, Lang T. Use of deferred consent for severely ill children in a multi-centre phase III trial. Trials. 2011;12:90.

Research in emergency settings. Health Research Authority. https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/research-emergency-settings/ . Accessed 17 May 2022.

Medical Research Council. MRC Ethics Guide 2007: Medical research involving adults who cannot consent. MRC; 2007. https://www.ukri.org/wp%20content/uploads/2021/08/MRC%020208212-Medical-research-involving-adults-who-cannot-consent.pdf .

The Medicines for Human Use (Clinical Trials) Amendment (No.2) Regulations 2006. https://www.legislation.gov.uk/uksi/2006/1928/contents/made . 

Kompanje EJO, Maas AIR, Menon DK, Kesecioglu J. Medical research in emergency research in the European Union member states: tensions between theory and practice. Intensive Care Med. 2014;40:496–503.

Perkins GD, Ji C, Connolly BA, Couper K, Lall R, Baillie JK, et al. Effect of noninvasive respiratory strategies on intubation or mortality among patients with acute hypoxemic respiratory failure and COVID-19: the RECOVERY-RS randomized clinical trial. JAMA. 2022;327:546–58.

Jansen JO, Cochran C, Boyers D, Gillies K, Lendrum R, Sadek S, et al. The effectiveness and cost-effectiveness of resuscitative endovascular balloon occlusion of the aorta (REBOA) for trauma patients with uncontrolled torso haemorrhage: study protocol for a randomised clinical trial (the UK-REBOA trial). Trials. 2022;23:384.

The Medicines for Human Use (Clinical Trials) and Blood Safety and Quality (Amendment) Regulations 2008. https://www.legislation.gov.uk/uksi/2008/941/contents/made . Accessed 13 May 2022.

Woolfall K, Frith L, Gamble C, Gilbert R, Mok Q, Young B, et al. How parents and practitioners experience research without prior consent (deferred consent) for emergency research involving children with life threatening conditions: a mixed method study. BMJ Open. 2015;5:e008522.

van der Graaf R, Hoogerwerf M-A, de Vries MC. The ethics of deferred consent in times of pandemics. Nat Med. 2020;26:1328–30.

Berger BJ. Minimum risk and HEAT-PPCI: innovative ideas for informed consent in emergency medical research. Ann Emerg Med. 2014;64:A17–9.

Zimmermann JB, Horscht JJ, Weigand MA, Bruckner T, Martin EO, Hoppe-Tichy T, et al. Patients enrolled in randomised clinical trials are not representative of critically ill patients in clinical practice: observational study focus on tigecycline. Int J Antimicrob Agents. 2013;42:436–42.

Roberts I, Prieto-Merino D, Shakur H, Chalmers I, Nicholl J. Effect of consent rituals on mortality in emergency care research. Lancet. 2011;377:1071–2.

Paddock K, Woolfall K, Frith L, Watkins M, Gamble C, Welters I, et al. Strategies to enhance recruitment and consent to intensive care studies: a qualitative study with researchers and patient–public involvement contributors. BMJ Open. 2021;11:e048193.

Perspectives Study - Institute of Population Health - University of Liverpool. https://www.liverpool.ac.uk/population-health/research/groups/perspectives/ . Accessed 19 May 2022.

Deferred consent in Emergency Research-a patient video. 2022. https://www.youtube.com/watch?v=P--SEfQOd3w .

Fitzpatrick A, Wood F, Shepherd V. Trials using deferred consent in the emergency setting: a systematic review and narrative synthesis of stakeholders’ attitudes. Trials. 2022;23:411.

Raven-Gregg T, Shepherd V. Exploring the inclusion of under-served groups in trials methodology research: an example from ethnic minority populations’ views on deferred consent. Trials. 2021;22:589.

University of Liverpool. CONNECT - consent methods in paediatric emergency and urgent care trials. https://www.liverpool.ac.uk/population-health-sciences/research/connect/ . Accessed 20 Mar 2020.

Woolfall K, Young B, Frith L, Appleton R, Iyer A, Messahel S, et al. Doing challenging research studies in a patient-centred way: a qualitative study to inform a randomised controlled trial in the paediatric emergency care setting. BMJ Open. 2014;4:e005045.

Roper L, Sherratt FC, Young B, McNamara P, Dawson A, Appleton R, et al. Children’s views on research without prior consent in emergency situations: a UK qualitative study. BMJ Open. 2018;8:e022894.

Woolfall K, Frith L, Dawson A, Gamble C, Lyttle MD, Group the C advisory, et al. Fifteen-minute consultation: an evidence-based approach to research without prior consent (deferred consent) in neonatal and paediatric critical care trials. Arch Dis Childhood Educ Pract. 2016;101:49–53.

Lyttle MD, Rainford NEA, Gamble C, Messahel S, Humphreys A, Hickey H, et al. Levetiracetam versus phenytoin for second-line treatment of paediatric convulsive status epilepticus (EcLiPSE): a multicentre, open-label, randomised trial. Lancet. 2019;393:2125–34.

Tudur Smith C, Hickey H, Clarke M, Blazeby J, Williamson P. The trials methodological research agenda: results from a priority setting exercise. Trials. 2014;15:32.

Shepherd V, Hood K, Wood F. Unpacking the ‘Black Box of Horrendousness’: a qualitative exploration of the barriers and facilitators to conducting trials involving adults lacking capacity to consent. Trials. 2022;23(471). https://doi.org/10.1186/s13063-022-06422-6 .

Young-Afat DA, Verkooijen HAM, van Gils CH, van der Velden JM, Burbach JP, Elias SG, et al. Brief report: staged-informed consent in the cohort multiple randomized controlled trial design. Epidemiology. 2016;27:389–92.

NIHR. NIHR Clinical Trials Toolkit. https://www.ct-toolkit.ac.uk/ . Accessed 12 Sep 2022.

Shepherd V. Advances and challenges in conducting ethical trials involving populations lacking capacity to consent: a decade in review. Contemp Clin Trials. 2020;95:106054.

Download references

Acknowledgements

We would like to thank the wider contributors to the Complex and Alternate Consent Pathways group and the MRC-NIHR Trials Methodology Research Partnership who have participated in the discussions at various stages of this work. JW would like to acknowledge the support of the QuinteT research group, University of Bristol.

No funding was received for this work. VS is supported by a National Institute for Health Research Advanced Fellowship (CONSULT) funded by the Welsh government through Health and Care Research Wales (NIHR-FS(A)-2021). AMR is supported by a Wellcome Trust Fellowship (Capacity, Consent and Autonomy https://capacityconsent.leeds.ac.uk/ ) (219754/Z/19/Z). AV is supported by a National Institute for Health Research Advanced Fellowship (NIHR302240). KG is supported by funding from the Chief Scientist Office of the Scottish Government’s Health and Social Care Directorate (CZU/3/3). This work was supported by the MRC-NIHR Trials Methodology Research Partnership (MR/S014357/1). RH is supported in part by the Wellcome Trust (209841/Z/17/Z and 223290/Z/21/Z), EPSRC (EP/T020792/1), and the NIHR Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol. RH also serves on various local, regional, and national ethics committees and related groups. None of the organisations played a role in the drafting of this article, and the opinions stated are those of the authors.

Author information

Amy M. Russell and Victoria Shepherd are joint first authors.

Authors and Affiliations

Leeds Institute of Health Sciences, University of Leeds, Leeds, UK

Amy M. Russell

Centre for Trials Research, Cardiff University, 4th floor Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Victoria Shepherd

Department of Public Health, Policy and Systems, Institute of Population Health, University of Liverpool, Liverpool, UK

Kerry Woolfall & Bridget Young

Health Services Research Unit, University of Aberdeen, Aberdeen, UK

Katie Gillies

Department of Psychology and Language Sciences, University College London, London, UK

Anna Volkmer

Department of Health Professions, Manchester Metropolitan University, Manchester, UK

Centre for Ethics in Medicine, Population Health Science, Bristol Medical School, University of Bristol, Bristol, UK

Richard Huxtable

Department of Non-communicable Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK

Alexander Perkins

Medical Research Council Clinical Trials Unit at University College London (MRC CTU at UCL), Institute of Clinical Trials and Methodology, University College London, London, UK

Nurulamin M. Noor

Centre for Evaluation and Methods, Wolfson Institute of Population Health, Queen Mary University London, London, UK

Beverley Nickolls

Population Health Science, Bristol Medical School, University of Bristol, Bristol, UK


Contributions

The original idea for this Complex and Alternate Consent Pathways group (C&ACP) arose from the discussions in the Trial Methodology Research Partnership (TMRP) Qualitative Research group and the Inclusivity subgroup of the Trial Conduct Working Group and was led by JW. All authors are members of the C&ACP group and contributed to the iterative discussion of the content and structure of the manuscript. AR and VS wrote the first draft of the paper, and all authors contributed to the revision of it. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Victoria Shepherd .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Russell, A.M., Shepherd, V., Woolfall, K. et al. Complex and alternate consent pathways in clinical trials: methodological and ethical challenges encountered by underserved groups and a call to action. Trials 24, 151 (2023). https://doi.org/10.1186/s13063-023-07159-6


Received: 01 October 2022

Accepted: 09 February 2023

Published: 28 February 2023

DOI: https://doi.org/10.1186/s13063-023-07159-6


  • Informed consent
  • Clinical trials
  • Underserved populations

ISSN: 1745-6215



Understanding the 3 Types of Clinical Trial Monitoring

Clinical trials continue to evolve, and so do the methodologies used to provide the vital monitoring necessary to protect patient safety. The methods, techniques, and strategies used in our field change with the technology at hand, new regulations, and other challenges. In 2020, the biggest influence on the way our industry operates has been COVID-19. While the pandemic has limited the industry’s ability to access investigative sites and perform onsite monitoring, it has also accelerated the rise of remote monitoring and centralized monitoring.

The 3 Types of Clinical Monitoring

The concept of monitoring patients and subjects is not new to a veteran researcher like yourself. However, as with many concepts, the types of clinical trial monitoring can get muddled, and the confusion most often centers on the distinction between remote monitoring and centralized monitoring.

In a recent DIA webinar on COVID-19, Alyson Karesh of the Food and Drug Administration (FDA) outlined the types of clinical monitoring described in a discussion guide from the FDA and Duke-Margolis workshop held in July 2019. Please note that these are Karesh’s definitions and do not necessarily reflect the official position of the FDA.

At the time of the webinar (July 28, 2020), Karesh was Director of the Division of Clinical Trial Quality, Office of Medical Policy, FDA. Her definitions are as follows:

On-site Monitoring

On-site monitoring involves in-person evaluation carried out by sponsor personnel or representatives at the investigation site.

Remote Monitoring

Remote monitoring involves off-site evaluation performed by the monitor away from the site at which the clinical investigation is being conducted.

Centralized Monitoring

Centralized monitoring involves analytical evaluation carried out by sponsor personnel or representatives at a central location other than the site at which the clinical investigation is being conducted.

The Added Nuances of Risk-Based Monitoring

One cannot talk about remote and centralized monitoring without bringing up risk-based monitoring (RBM), a critical part of the lexicon of modern monitoring. Karesh defines RBM as “monitoring that focuses resources and oversight on:

  • important and likely risks to investigation quality
  • risks that may be less likely to occur but that could have a significant impact on the overall quality of the investigation.”

By Karesh’s definition, risk-based monitoring is designed to identify “risks to human subject protections and data integrity,” and it is part of any “risk-based quality management system.”  

Risk-Based Monitoring

Taking a risk-based approach to study quality and the monitoring of any clinical investigation has a simple focus:

  • Identify potential threats
  • Design a plan to monitor those activities
  • Adjust monitoring methodology as needed

However, RBM is a nuanced approach. Any monitoring technique needs to be tailored to the risks identified by the clinician, CRO, and/or sponsor. While critical risks could be monitored in any number of ways, including the types outlined above (on-site, remote, and centralized), the end result is more often a combination of these methodologies. All three clinical trial monitoring methods can be used in concert for effective oversight and the protection of data integrity and patient safety.
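The identify-plan-adjust cycle above can be sketched as a simple risk register. The risk names, scoring scale, and thresholds below are hypothetical illustrations only, not regulatory guidance; a real monitoring plan is defined per protocol by the sponsor or CRO.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """A hypothetical entry in a study risk register."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (critical)

def assign_monitoring(risk: Risk) -> str:
    """Map a risk's score to an illustrative monitoring method.

    High-impact risks get on-site review even when unlikely, echoing
    the RBM principle of covering low-likelihood/high-impact risks.
    """
    score = risk.likelihood * risk.impact
    if risk.impact >= 4 or score >= 15:
        return "on-site"
    if score >= 6:
        return "remote"
    return "centralized"

plan = {r.name: assign_monitoring(r) for r in [
    Risk("informed consent errors", likelihood=2, impact=5),
    Risk("visit window deviations", likelihood=4, impact=2),
    Risk("data entry typos", likelihood=3, impact=1),
]}
print(plan)
# → {'informed consent errors': 'on-site',
#    'visit window deviations': 'remote',
#    'data entry typos': 'centralized'}
```

The "adjust as needed" step would correspond to revisiting the likelihood and impact values as study data accrues and re-deriving the plan.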

FDA Guidance on RBM

The FDA has issued three guidances on the use of risk-based monitoring in recent years. The first, “Final Guidance: Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring,” was released in 2013. This guidance centers on sponsor oversight and study conduct, with the aim of improving participant protections and data integrity.

In 2019, the FDA released “Draft Guidance: A Risk-Based Approach to Monitoring – Questions and Answers.” As the title suggests, this document expands on the recommendations in the 2013 guidance, covering how sponsors, CROs, and researchers can develop a monitoring approach, including how to build monitoring plans and communicate results.

The most recent FDA guidance on risk-based monitoring came out in March 2020: “FDA Guidance on Conduct of Clinical Trials of Medical Products During COVID-19 Public Health Emergency,” which has gone through several updates since its initial release. This guidance helps sponsors and CROs navigate trial risks and patient safety during the COVID-19 pandemic.

EMA Guidance on Risk-Based Monitoring

In recent years, the European Medicines Agency (EMA) has also issued a series of guidances and papers on risk-based quality management. In 2013, the agency released its “Reflection paper on risk based quality management in clinical trials.” In that document, the EMA treats monitoring as a function of Good Clinical Practice (GCP) and describes quality controls as a potential part of a centralized approach to verifying that submitted documents and clinical data are in order. This document is broadly equivalent to the FDA’s position on risk-based monitoring.

In April 2020, the EMA published “Guidance on the Management of Clinical Trials During the COVID-19 (Coronavirus) Pandemic.” The document explains that the EMA and the Heads of Medicines Agencies (HMA) will consider remote source data verification (SDV), but only for trials related to COVID-19, or pivotal trials treating serious illnesses for which an unmet medical need exists within that indication or community.

Monitoring During COVID

The COVID-19 pandemic limited the ability of patients, sponsors, CROs, and researchers to access sites. The EMA’s April 2020 document and the FDA’s 2020 guidance each speak to this reality. Remote access became critical for continuing clinical trials already in progress and preserving subject safety throughout this difficult time. Centralized monitoring has helped analyze study data from afar, while remote monitoring has helped replace on-site visits when they were impossible. Once COVID-19 is mitigated through the vaccines and treatments being developed, we fully expect remote and centralized monitoring to remain ever-present components of clinical trial conduct. As always, the way our industry conducts clinical trials is ever evolving.

More Articles in Our Clinical Trial Monitoring Series

Over the coming weeks, Allucent will take a deeper dive into the different clinical trial monitoring techniques and the technologies used to support them. As we release these updates, we will add links here for ease of navigation. We hope you find these articles beneficial to your efforts to protect data quality and patient safety in your clinical trials. Allucent, a global award-winning CRO, is dedicated to creating a healthier world for all. Stay tuned for future updates in this series!

Dartmouth researchers look to meld therapy apps with modern AI 

An experimental, artificial intelligence-powered therapeutic app that its creators hope will drastically improve access to mental health care began its first clinical trial last month.

Therabot, a text-based AI app in development at Dartmouth College, launched in a clinical trial in March with 210 participants. In its conversations with users, the app uses generative AI, the same technology that powers OpenAI’s ChatGPT, to come up with answers and responses. The app also uses a form of AI that learns patterns and has been designed to enable Therabot to get to know and remember a user and provide personalized advice or recommendations based on what it has learned.

There are already a handful of script-based therapy apps and broader “wellness” apps that use AI, but Therabot’s creators say theirs would be the first clinically tested app powered entirely by generative AI that has been specifically designed for digital therapy. 

Woebot, a mental health app that says it has served 1.5 million people worldwide, launched in 2017 in collaboration with interventional scientists and clinicians. Wysa, another popular AI therapy app, in 2022 received a Food and Drug Administration Breakthrough Device designation, a voluntary program designed to speed up the development, assessment and review of a new technology. But these apps generally rely on rules-based AI with preapproved scripts.

Nicholas Jacobson, an assistant professor at Dartmouth College and a clinically trained psychologist, spearheaded the development of Therabot. His team has been building and finessing the AI program for nearly five years, working to ensure responses are safe and responsible. 

Therabot uses generative AI to engage with users dealing with anxiety or depression as well as users predisposed to eating disorders.

“We had to develop something that really is trained in the broad repertoire that a real therapist would be, which is a lot of different content areas. Thinking about all of the common mental health problems that folks might manifest and be ready to treat those,” Jacobson said. “That is why it took so long. There are a lot of things people experience.”

The team first trained Therabot on data derived from online peer support forums, such as cancer support pages. But Therabot initially replied by reinforcing the difficulty of daily life. They then turned to traditional psychotherapist training videos and scripts. Based on that data, Therabot’s replies leaned heavily on stereotypical therapy tropes like “go on” and “mhmm.” 

The team ultimately pivoted to a more creative approach: writing their own hypothetical therapy transcripts that reflected productive therapy sessions, and training the model on that in-house data. 

Jacobson estimated that more than 95% of Therabot’s replies now match that “gold standard,” but the team has spent the better part of two years finessing deviant responses.

“It could say anything. It really could, and we want it to say certain things and we’ve trained it to act in certain ways. But there’s ways that this could certainly go off the rails,” Jacobson said. “We’ve been essentially patching all of the holes that we’ve been systematically trying to probe for. Once we got to the point where we were not seeing any more major holes, that’s when we finally felt like it was ready for a release within a randomized controlled trial.”

The dangers of digital therapeutic apps have been subject to intense debate in recent years, especially because of those edge cases. AI-based apps in particular have been scrutinized.

Last year, the National Eating Disorders Association pulled Tessa, an AI-powered chatbot designed to provide support for people with eating disorders. Although the app was designed to be rules-based, users reported receiving advice from the chatbot on how to count calories and restrict their diets.

“If [users] get the wrong messages, that could lead to even more mental health problems and disability in the future,” said Vaile Wright, senior director of the Office of Health Care Innovation at the American Psychological Association. “That frightens me as a provider.”

With recruitment for Therabot’s trial now complete, the research team is reviewing every one of the chatbot’s replies, monitoring for deviant responses. The replies are stored on servers compliant with health privacy laws. Jacobson said his team has been impressed with the results so far.

“We’ve heard ‘I love you, Therabot’ multiple times already,” Jacobson said. “People are engaging with it at times that I would never respond if I were engaging with clients. They’re engaging with it at 3 a.m. when they can’t sleep, and it responds immediately.”

In that sense, the team behind Therabot says, the app could expand access and availability rather than replacing human therapists.

Jacobson believes that generative AI apps like Therabot could play a role in combating the mental health crisis in the United States. The nonprofit Mental Health America estimates that more than 28 million Americans have a mental health condition but do not receive treatment, and 122 million people in the U.S. live in federally designated mental health shortage areas, according to the Health Resources and Services Administration.

“No matter what we do, we will never have a sufficient workforce to meet the demand for mental health care,” Wright said. 

“There needs to be multiple solutions, and one of those is clearly going to be technology,” she added.

During a demonstration for NBC News, Therabot validated feelings of anxiety and nervousness before a hypothetical big exam, then offered techniques to mitigate that anxiety, customized to the user’s worries about the test. In another case, when asked for advice on combating pre-party nerves, Therabot encouraged the user to try imaginal exposure, a technique to alleviate anxiety that involves envisioning participating in an activity before doing it in real life. Jacobson noted this is a common therapeutic treatment for anxiety.

Other responses were mixed. When asked for advice about a breakup, Therabot warned that crying and eating chocolate might provide temporary comfort but would “weaken you in the long run.”

With eight weeks left in the clinical trial, Jacobson said that the smartphone app could be poised for additional trials soon and then broader open enrollment by the end of the year if all goes well. Beyond other apps essentially repurposing ChatGPT, Jacobson believes this would be a first-of-its-kind generative AI digital therapeutic tool. The team ultimately hopes to gain FDA approval. The FDA said in an email that it has not approved any generative AI app or device. 

With the explosion of ChatGPT’s popularity, some people online have taken to testing the generative AI app’s therapeutic skills, even though it was not designed to provide that support. 

Daniel Toker, a neuroscience student at UCLA, has been using ChatGPT to supplement his regular therapy sessions for more than a year. He said his initial experiences with traditional therapy AI chatbots were less helpful.

“It seems to know what I need to hear sometimes. If I have a challenging thing that I’m going through or a challenging emotion, it knows what words to say to validate how I’m feeling,” Toker said. “And it does it in a way that an intelligent human would,” he added.

He posted on Instagram in February about his experiences and said he was surprised by the number of responses.

On message forums like Reddit, users also offer advice on how to use ChatGPT as a therapist. One safety employee at OpenAI, the company behind ChatGPT, posted on X last year about how impressed she was by the generative AI tool’s warmth and listening skills.

“For these particularly vulnerable interactions, we trained the AI system to provide general guidance to the user to seek help. ChatGPT is not a replacement for mental health treatment, and we encourage users to seek support from professionals,” OpenAI said in a statement to NBC News.

Experts warn that ChatGPT could provide inaccurate information or bad advice when treated like a therapist. Generative AI tools like ChatGPT are not regulated by the FDA since they are not therapeutic tools.

“The fact that consumers don’t understand that this isn’t a good replacement is part of the problem and why we need more regulation,” Wright said. “Nobody can track what they’re saying or what they’re doing and if they’re making false claims or if they’re selling your data without your knowledge.”

Toker said the personal benefits of his experience with ChatGPT outweigh the risks.

“If some employee at OpenAI happens to read about my random anxieties, that doesn’t bother me,” Toker said. “It’s been helpful for me.”

Andy Weir is an associate producer for NBC News.

Erin McLaughlin is an NBC News correspondent.


Shanshan Dong is a producer for NBC News in Los Angeles. 

COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK


Clinical Research Coordinator

  • Ophthalmology
  • Columbia University Medical Center
  • Opening on: Apr 23 2024
  • Job Type: Officer of Administration
  • Regular/Temporary: Regular
  • Hours Per Week: 35
  • Salary Range: $62,400 - $65,000

Position Summary

Under the direction of the Director of the Clinical Trials Unit (CTU) and Principal Investigators, the Clinical Research Coordinator will conduct clinical research studies (industry-sponsored and investigator-initiated) within the Columbia University Irving Medical Center (CUIMC) Department of Ophthalmology in adherence with assigned study protocols and manuals of operation and in accordance with clinical research principles.

Responsibilities

  • Serve as the contact person for those interested in study participation and assist with recruitment activities including pre-screening electronic medical records for eligibility, contacting potential subjects, explaining all study procedures, and consenting eligible subjects or assenting parents or guardians for children enrolled in research studies.
  • Coordinate day-to-day aspects of study related procedures, including, but not limited to scheduling visits and procedures, data entry, preparing for research visits, research visit documentation, maintenance of regulatory binders and study files, creation and/or maintenance of source documentation, preparation for monitoring visits, site initiation/closeout visits and audits as needed.
  • Coordinate and perform research testing and imaging for clinical research studies, including but not limited to visual acuity, refraction, dark adaptation, visual field, microperimetry, fluorescein angiography, fundus photography, optical coherence tomography (OCT), ICG angiography, slit lamp photography, MP1, corneal mapping, specular biomicroscopy including confocal imaging, HRT Analyzer (glaucoma), and ERGs.
  • Administer surveys such as the National Eye Institute Vision Function Questionnaire (NEI-VFQ-25), EuroQOL-5 Dimension, Reading Speed, and Health Utilities Index.
  • Work with the research team and ocular photography department to ensure that all required eye exams and ocular testing are scheduled and completed according to protocol.
  • Obtain and maintain study certifications for ETDRS, OCT, and photography for clinical trials.
  • Obtain access to sponsors’ electronic data capture (EDC) systems, complete EDC trainings, and enter data into the EDC within 5 days of seeing the study patient.
  • Maintain and organize study-related documentation and records using the EDC platforms, including capturing adverse events and serious adverse events and preparing for monitoring visits.
  • Respond to all sponsor-related queries in a timely manner.
  • Ensure that all aspects of Good Clinical Practice are followed at all times by developing and ensuring adherence with Standard Operating Procedure (SOP) for clinical studies being conducted in the Ophthalmology Clinical Trials Unit.
  • Work with the Regulatory Manager to gain CUIMC Institutional Review Board (IRB) approval in a timely manner by creating informed consent forms using sponsors’ templates, responding to IRB correspondents, submitting amendments, renewals, modifications, and other regulatory documents required by the sponsor and FDA, including progress reports.
  • Ensure that all appropriate Institutional, State, and Federal regulations are followed throughout the course of the study according to study-related protocols and manuals.
  • Work directly with sponsors’ designated Clinical Research Organizations (CRO) to complete all required study start-up documents including FDA 1572 forms, investigator signatures, CVs, medical licenses, Conflict of Interest, HIPAA, and Human Subjects Trainings in a timely manner.
  • Complete feasibility forms requested by sponsors in a timely manner to assess ophthalmic equipment and examination rooms to conduct the studies.

Minimum Qualifications

  • Bachelor’s degree or equivalent in education and experience, plus a minimum of 1 to 2 years of related experience.
  • Conform to all applicable HIPAA, billing compliance and safety requirements.
  • Must be able to work effectively with minimal supervision.
  • Prior research experience to include recruiting study participants, conducting standardized protocol visits and data entry.
  • Excellent verbal and written communication skills and attention to detail required.
  • Computer skills (Word, Excel) required.
  • Excellent interpersonal skills.
  • Willingness to travel to different sites.

Preferred Qualifications

  • Working knowledge of Spanish
  • Phlebotomy license
  • Prior experience in ophthalmology

Equal Opportunity Employer / Disability / Veteran

Columbia University is committed to the hiring of qualified local residents.

Commitment to Diversity 

Columbia University is dedicated to increasing diversity in its workforce, its student body, and its educational programs. Achieving continued academic excellence and creating a vibrant university community require nothing less. In fulfilling its mission to advance diversity at the University, Columbia seeks to hire, retain, and promote exceptionally talented individuals from diverse backgrounds.



U.S. Food and Drug Administration


CDER Center for Clinical Trial Innovation (C3TI)

Streamlined Trials Embedded in clinical Practice (STEP) Demonstration Project

C3TI aims to promote the adoption of pragmatic design elements that integrate randomized trials into clinical practice and to improve coordination and collaboration between CDER and sponsors to effectively support these innovative trials. To this end, CDER will partner with sponsors on trials that include limited procedures outside of routine clinical care, decentralization of procedures that can be done outside of designated research sites, the use of real-world data to obtain outcomes, and, where appropriate, integration into point-of-care practice. These types of trials are advantageous because they can be more resource-efficient, attract broader study populations, and be completed more rapidly, while still robustly assessing study objectives.

With the Streamlined Trials Embedded in clinical Practice (STEP) demonstration project, C3TI seeks to partner with sponsors planning pragmatic, point-of-care trials, providing an opportunity to address and resolve issues around trial design and conduct (e.g., statistical analyses, incorporation of real-world data and evidence, trial endpoint selection, inspectional approaches). Lessons learned will be made broadly available and used to inform updates to relevant CDER guidance.

Benefits of participating

Participating sponsors receive additional CDER engagement on trial design and implementation from leaders across several CDER offices (e.g., Office of Medical Policy, Office of New Drugs, Office of Translational Sciences). Engagement may include additional coordination support from CDER subject matter experts and an inspection process that is fit-for-purpose for the innovative design (i.e., focused on a quality-by-design approach).

Eligibility Criteria for STEP Demonstration Project Proposals

  • The sponsor has an active pre-Investigational New Drug (IND) or IND for the product(s) included in the proposal.
  • The trial incorporates pragmatic design elements that are reflective of routine clinical practice to improve trial efficiency and enhance patient centricity while maintaining patient safety and data integrity. Examples of pragmatic design elements include (but are not limited to) broad eligibility criteria, limited visits and procedures outside of what might occur in routine care, including incorporation of decentralized procedures, and limited safety data collection consistent with ICH E19.
  • The trial occurs later in pre-market development, when the safety profile is reasonably well-defined.
  • Post-approval trials (either sponsor-initiated or conducted in response to a post-marketing requirement) are eligible when the population, trial procedures, and endpoints can all be appropriately incorporated into a large simple trial.

Of note, trials with narrow entry criteria, complex procedures, complex drug administration, or challenging endpoint collection will likely NOT be appropriate.

  • Sponsors participating in demonstration projects will be expected to share select details of their clinical trials and the implementation of clinical trial innovations as they progress, starting as early as the finalization of study design. This sharing may include updates, lessons learned, and relevant insights gathered during the trial. It is understood that these shared details will reflect general principles and innovative aspects, while maintaining the necessary confidentiality of proprietary or sensitive information.

Instructions on how to submit a proposal can be found on the C3TI Demonstration Program Proposal Submission webpage.
