Meta-analysis is a systematic pooling of the results of 2 or more studies to address a question of interest or hypothesis. According to Moher and Olkin,13
[Meta-analyses] provide a systematic and explicit method for synthesizing evidence, a quantitative overall estimate (and confidence intervals) derived from the individual studies, and early evidence as to the effectiveness of treatments, thus reducing the need for continued study. They also can address questions in specific subgroups that individual studies may not have examined.
A meta-analysis quantitatively summarizes the evidence regarding a treatment, procedure, or association. It provides a more statistically powerful test of the null hypothesis than any of the separate studies because the combined sample size is substantially larger than that of any individual study. However, a number of issues make meta-analysis a much-debated form of analysis.14-19 To help standardize the presentation of meta-analyses, JAMA recommends use of the QUOROM flow diagram and checklist (http://www.consort-statement.org/QUOROM.pdf) for reporting meta-analyses of RCTs and the MOOSE checklist (http://www.consort-statement.org/Initiatives/MOOSE/moose.pdf) for reporting meta-analyses of observational studies.
To ensure that the meta-analysis accurately reflects the available evidence, the methods of identifying possible studies for inclusion should be explicitly stated (eg, literature search, reference search, and contacting authors regarding other or unpublished work). Authors should state the dates that their search covered and the search terms used. A search strategy that includes several approaches to identifying articles is preferable to a single database search.20 Authors should make every attempt to include results of non–English-language articles.
Publication bias, or the tendency of authors and journals to publish articles with positive results, is a potential limitation of any systematic review of the literature.21 Unpublished studies may be included in a meta-analysis if they meet predefined inclusion criteria. One approach to addressing whether publication bias might affect the result is to determine the number of negative studies that would be needed to change the results of a meta-analysis from positive to negative (sometimes called the fail-safe N). Authors may also provide funnel plots, in which each study's effect size is plotted against a measure of its precision; asymmetry in the plot suggests publication bias.
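The fail-safe N described above can be computed with Rosenthal's formula, which asks how many unpublished null-result studies would be needed to render the pooled result nonsignificant. The sketch below is illustrative only; the z scores and the one-tailed α = .05 critical value are hypothetical assumptions, not data from any cited study.

```python
import math

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: the number of unpublished studies
    averaging z = 0 needed to make the combined result nonsignificant.
    z_alpha defaults to the one-tailed .05 critical value (assumption)."""
    sum_z = sum(z_scores)
    k = len(z_scores)
    # Combined z stays significant while sum_z / sqrt(k + n) >= z_alpha
    n = (sum_z ** 2) / (z_alpha ** 2) - k
    return max(0, math.ceil(n))

# Hypothetical z scores from 5 published studies
n_needed = fail_safe_n([2.1, 1.8, 2.5, 1.2, 2.9])
```

A large fail-safe N relative to the number of included studies suggests the pooled result is robust to publication bias; a small one suggests fragility.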
To address the problem of publication bias, the ICMJE now requires, as a condition of publication, that a clinical trial be registered in a public trials registry.9 The ICMJE policy applies to any clinical trial starting enrollment after July 1, 2005. For trials that began enrollment prior to this date, the ICMJE member journals required registration by September 13, 2005. The policy defines a clinical trial as “any research project that prospectively assigns human subjects to intervention or comparison groups to study the cause-and-effect relationship between a medical intervention and a health outcome.”
Other controversial issues include which study designs are acceptable for inclusion, whether and how studies should be rated for quality,22 and whether and how to combine results from studies with disparate study characteristics. While few would disagree that meta-analysis of RCTs is most appropriate when possible, many topics include too few randomized trials to permit meta-analysis or cannot be studied in a trial.
Gerbarg and Horwitz23 have suggested that criteria for combining studies should be similar to those for multicenter trials; for example, the studies should share similar prognostic factors to justify pooling. Whether studies can appropriately be combined can be assessed statistically by analyzing the degree of heterogeneity (ie, the variability in outcomes across studies). Assessment of heterogeneity includes examining the effect size, the sample size in each group, and whether the effect sizes from different studies are homogeneous. If statistically significant heterogeneity is found, combining the studies into a single analysis may not be valid.24 Another concern is the influence that a small number of large trials may have on the results; large trials in a small pool of studies can dominate the analysis, and the meta-analysis may reflect little more than the results of those individual large trials. In such cases, it may be appropriate to perform sensitivity analyses comparing results with and without inclusion of the large trial(s).
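The statistical assessment of heterogeneity described above is commonly carried out with Cochran's Q test and the derived I² statistic, which expresses the percentage of variability across studies attributable to heterogeneity rather than chance. The sketch below assumes per-study effect sizes and within-study variances; the example numbers are hypothetical.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic for between-study heterogeneity.
    effects: per-study effect sizes; variances: their within-study variances.
    Illustrative sketch; input data below are hypothetical."""
    weights = [1.0 / v for v in variances]           # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted squared deviations of each study from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of Q exceeding its expectation under homogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical effect sizes and variances from 4 studies
q, i2 = heterogeneity([0.3, 0.5, 0.1, 0.9], [0.04, 0.09, 0.05, 0.08])
```

Q is compared against a chi-square distribution with k − 1 degrees of freedom; a significant Q (or a large I²) signals that pooling into a single fixed-effect estimate may not be valid.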
Meta-analyses are often analyzed by means of both fixed-effects and random-effects models to determine how different assumptions affect the results. The more conservative random-effects model, which incorporates between-study variability and therefore yields wider confidence intervals, is generally preferred. An example of how results of a meta-analysis may be depicted graphically is shown in 4.2.2, Visual Presentation of Data, Figures, Diagrams (Example F13).
A meta-analysis is useful only as long as it reflects the current literature. Thus, a shared concern of meta-analysts and clinicians is that meta-analyses be updated as new studies are published. One international effort, the Cochrane Collaboration, publishes and frequently updates a large number of systematic reviews and meta-analyses on a variety of topics.25