Editors' Choice: Evidence Base

Meta-Analyze This


Science Translational Medicine  27 Aug 2014:
Vol. 6, Issue 251, pp. 251ec147
DOI: 10.1126/scitranslmed.3010127

A meta-analysis of well-conducted, randomized controlled trials (RCTs) is considered by many to be the strongest evidence with which to justify implementation of new biomedical science into clinical practice (that is, level 1a evidence). Although a double-blind, placebo-controlled RCT remains the gold standard, it is a “quantized unit” of evidence (that is, level 1b) and thus confronts inescapable biases that arise, for example, from specific traits of trial design or patient population selection. Although meta-analyses help overcome single-trial limitations, they themselves face the challenge of deciding which trials to include, and even a good meta-analysis of bad data will likely be flawed. To get the best estimate of an authentic treatment response, is it best to consider all trials, however small, biased, and imprecise? Or is it better to consider only the largest trials, those with the lowest outcome variances, or those that are the least biased? Now, Dechartres et al. take on this open question, performing a meta-analysis of meta-analyses.

Drawing from high-impact medical journals and the Cochrane Database of Systematic Reviews, the investigators demonstrated by example the fickleness of meta-analyses and the need for more robust approaches. To this end, the authors considered 163 meta-analyses drawing from 1243 distinct RCTs. Each RCT described the benefits of a given medical treatment versus a control, with outcomes expressed as an odds ratio (OR). By standardizing and collating ORs for all RCTs, the authors re–meta-analyzed the trials, constructing sets of meta-analyses from five distinct RCT selection criteria: all trials, the largest 25% of trials, the single most precise trial, the trial with the least bias, or a relatively new limit meta-analysis approach that seeks to estimate the effect of an infinitely large trial. Each analysis yielded a pooled treatment OR, and the pooled ORs were then used to compute ratios of ORs (RORs) comparing the five trial selection criteria. Using these RORs, the authors demonstrated that even when drawing from RCTs of putative quality, significant and systematic errors were introduced on the basis of trial selection criteria—an instability that resulted in major misinterpretations in as many as 30 to 67% of cases. The work is punctuated with examples of new treatments being perceived as effective or not—for example, a coronary stenting strategy that went from an OR of 0.77, suggesting benefit, to 1.37, suggesting harm.
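To make the pooling and ROR steps concrete, the sketch below implements a minimal fixed-effect, inverse-variance meta-analysis of log odds ratios and compares two trial selection strategies. All trial ORs, variances, and the "largest 25%" subset are invented for illustration; the actual study's statistical methods were more elaborate than this.

```python
import math

def pooled_or(odds_ratios, log_or_variances):
    """Fixed-effect inverse-variance pooled OR.

    Each trial's log(OR) is weighted by the inverse of its variance;
    the weighted mean is exponentiated back to the OR scale.
    """
    weights = [1.0 / v for v in log_or_variances]
    log_ors = [math.log(o) for o in odds_ratios]
    pooled_log = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    return math.exp(pooled_log)

# Hypothetical trials: (OR, variance of log OR). Smaller variance ~ larger trial.
all_trials = ([0.8, 0.7, 1.1, 0.6, 0.9], [0.05, 0.20, 0.40, 0.50, 0.10])
largest_trials = ([0.8, 0.9], [0.05, 0.10])  # stand-in for the "largest 25%" criterion

or_all = pooled_or(*all_trials)
or_largest = pooled_or(*largest_trials)

# Ratio of ORs (ROR): how much the pooled estimate shifts when the
# selection criterion changes. An ROR far from 1 signals instability.
ror = or_all / or_largest
print(f"all trials: {or_all:.3f}, largest only: {or_largest:.3f}, ROR: {ror:.3f}")
```

Here the two strategies happen to agree (ROR near 1), but with trial sets that disagree in direction—as in the stenting example above—the pooled OR can cross 1.0 and flip the clinical interpretation.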

Translation is challenged on one side by the rigors of fundamental research, on which its foundation lies, and on the other side by the gauntlet of clinical investigations through which biomedical science must pass before it has the potential to improve clinical medicine. As innovations are shepherded from the lab to the clinic, it is essential to appreciate the role of evidence-based medicine in informing us of how a treatment will perform in people, thus justifying the therapy’s use. However, it is also humbling to recognize that this evidence, however close to the truth, is not true. In fact, it is sometimes quite far off.

A. Dechartres et al., Association between analytic strategy and estimates of treatment outcomes in meta-analyses. JAMA 312, 623–630 (2014).
