The gold standard of scientific evidence

Will randomised controlled trials become a thing of the past?


Randomised controlled trials (RCTs) are regarded as the gold standard of scientific evidence, and for good reason. By randomising a treatment across study arms, RCTs eliminate patient-treatment selection bias, resulting in reliable causal inference. In contrast, in the real world patients who are sicker may be more (or less) likely to receive certain treatments. Because treatments are given selectively, the true causal effect of a treatment on patient health in most real-world studies cannot be determined. Because of the perceived quality of clinical trials data, RCT data is used in most health technology assessment (HTA) appraisals around the world; in 2015, 90% of CADTH appraisals, 80% of PBAC appraisals, 64% of NICE appraisals and all IQWiG appraisals relied on RCT-based evidence.

Nevertheless, in recent years there has been a call to increase the use of real-world evidence (RWE). In the United States last summer, the FDA released a guidance document on the use of RWE to support regulatory decision-making for medical devices. The FDA already uses real-world data to measure pharmaceutical safety as part of its Sentinel Initiative. Further, the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) recommends the use of RWE to support coverage and payment decisions.

Limitations of randomised controlled trials

Although RCTs are scientifically credible and feasible to conduct in the majority of cases, they have at least three appreciable shortcomings: scientific limitations, practicality and cost. Scientific limitations start with the RCT patient population itself, which may be restrictive and not representative of the patient population likely to use a treatment in the real world. In addition, healthcare practices participating in RCTs treat patients according to detailed protocols, making the care received in clinical trials unlikely to reflect what would be received in the real world. Furthermore, the outcomes of interest in an RCT may not be those of interest to patients. For instance, oncology trials often use surrogate outcomes, such as progression-free survival (PFS), rather than overall survival to measure efficacy. Research has shown that real-world survival gains are generally 16% smaller than those predicted from surrogate outcomes such as PFS in clinical trials. Traditionally, few RCTs have collected patient-reported outcomes, although that is beginning to change.

Even if these scientific limitations can be addressed, in some cases use of an RCT is simply not practical. For ultra-rare diseases, for instance, it may be infeasible to recruit a sufficient number of patients to randomise across multiple treatment arms. Consider also the case of a new digital medicine formulated as a chip-embedded pill to monitor adherence. A potential RCT could compare adherence levels and outcomes for patients using this digital medicine against standard of care (SOC). Within a clinical trial setting, however, the mandated additional physician visits and structured clinical protocols would likely result in the SOC arm appearing to be more effective than is the case in the real world; thus, the benefits of remote monitoring of adherence would likely be underestimated within an RCT setting.

RCTs are, of course, costly. The capital needed to bring a drug to market is now estimated at $2.6bn, with a large share of this cost related to RCTs. Costs have risen over time: according to the Independent Institute, the typical drug in 1980 underwent 30 clinical trials involving about 1,500 patients; by the mid-1990s, the typical drug was subjected to more than 60 clinical trials involving nearly 5,000 patients. In addition, bringing a drug through development and regulatory approval can take more than a decade. Given limited healthcare budgets, it is not feasible to conduct an RCT for every clinical scenario.

The promise and limitations of real-world evidence

While conventional wisdom holds that RWE is not appropriate to measure treatment efficacy for an initial indication for the vast majority of treatments, there are clear cases where RWE would be highly useful. In cases of ultra-rare diseases, RWE can play a pivotal role in approval. Additionally, phase IV trials using RWE can help stakeholders understand a treatment’s long-term efficacy and safety. RWE potentially could also be used to evaluate effectiveness and safety of an already approved treatment for a new indication. More than one in five prescriptions in the United States are for an off-label use; providing additional evidence to better understand treatment efficacy and safety is crucial.

A key limitation of any study using RWE is the inability to statistically identify whether the treatment of interest was the sole factor affecting patient outcomes. However, randomisation can be incorporated into real-world study designs. For instance, pragmatic trial designs blend randomisation with real-world treatment practices. Another valid approach to assessing treatment efficacy and safety in a real-world setting is cluster randomisation, which randomises provider sites between the intervention and the current SOC.

A proposal from Anirban Basu would even use RWE to measure treatment efficacy for granting an initial indication. Dr Basu supports using phase II clinical trials to determine drug safety and then replacing phase III clinical trials with only-in-research (OIR) labelling over a specified period (eg, two years), using RWE to measure efficacy. During the proposed OIR period, only half of adults would be eligible to receive the treatment. Manufacturers would then prepare a protocol to conduct real-world studies on the incremental effectiveness of the use of the treatment relative to the standard of care. By randomising access to the treatment, researchers would be able to measure efficacy using robust statistical methods. While intriguing, the real-world based OIR process is not currently under regulatory consideration.

A path forward

Real-world evidence is already in use around the world. In the UK, NICE tends to support economic arguments in the post-regulatory setting. France uses RWE as part of post-marketing evaluations for innovative drugs requiring five-year re-evaluation of pricing and reimbursement. The FDA uses RWE to monitor drug safety as part of the Sentinel project. Additionally, RWE is being used to inform creative drug pricing. When seeking coverage for Kymriah, a new CAR-T therapy, Novartis reached an agreement with the Centers for Medicare & Medicaid Services to adjust the price of the treatment based on the health outcomes of real-world patients.

Scientific standards, particularly around transparency, need to be put in place to assure the rigour of any RWE study. For starters, researchers should make study protocols available for all research using real-world data. Posting study protocols online can help prevent data fishing, a phenomenon whereby researchers conduct multiple data analyses and publish only what they determine to be favourable results. Additionally, to the greatest extent possible, study code should be made publicly available online. Use of RWE often requires advanced statistical techniques, such as propensity score matching, instrumental variables and regression discontinuity. Because of the complexity of these approaches, making the underlying real-world data analysis public will be critical for building the credibility of RWE.
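To illustrate why such techniques are needed, consider a minimal propensity-score-matching sketch on simulated observational data. Everything here is invented for illustration: the confounder, the treatment-assignment rule and the true effect size of 1.0 are assumptions, not estimates from any real study. The sketch shows how a naive treated-versus-untreated comparison is biased when sicker patients are more likely to be treated, and how matching on estimated propensity scores recovers something close to the true effect.

```python
# Hedged sketch: nearest-neighbour propensity score matching on
# simulated observational data. All variables are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# A confounder (e.g. disease severity) drives both treatment assignment
# and the outcome -- the selection bias that randomisation would remove.
severity = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-severity))      # sicker patients more likely treated
treated = rng.random(n) < p_treat
outcome = 1.0 * treated - 2.0 * severity + rng.normal(size=n)  # true effect: 1.0

# Naive comparison is biased because treated patients are sicker.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Step 1: estimate propensity scores with a logistic fit
# (plain gradient descent, to keep the sketch dependency-free).
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(w * severity + b)))
    w -= 0.5 * ((p - treated) * severity).mean()
    b -= 0.5 * (p - treated).mean()
scores = 1 / (1 + np.exp(-(w * severity + b)))

# Step 2: match each treated patient to the untreated patient with the
# closest propensity score, then average the outcome differences.
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(scores[c_idx][None, :]
                       - scores[t_idx][:, None]).argmin(axis=1)]
att = (outcome[t_idx] - outcome[matches]).mean()

print(f"naive estimate:   {naive:.2f}")   # badly biased by selection
print(f"matched estimate: {att:.2f}")     # close to the true effect of 1.0
```

This is exactly the kind of analysis whose code is worth publishing alongside a protocol: the estimate depends on modelling choices (the propensity model, the matching rule) that readers cannot audit from a results table alone.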

Real-world data analysis is only useful in the presence of high-quality real-world data. Policymakers, industry, patient advocates and others should work together to develop new sources of high-quality real-world data. The creation of robust, detailed patient registries is one attractive data source. Health insurance claims and electronic medical records represent other high-quality sources of data. As healthcare becomes increasingly integrated into patients’ mobile phones, data from remote monitoring can also be useful. Even social media data can be used to measure real-world treatment safety and effectiveness.

In short, RWE presents a fantastic opportunity to accelerate drug approval, monitor treatment safety and appropriately price medications. Realising the opportunities, however, will require investment in new data sources as well as coordination across stakeholders to make RWE study methods more transparent.

Article by Jason Shafrin, a director of health economics at Precision Health Economics and the director of research at the Innovation and Value Initiative.

18th January 2018

