Meta-Analysis Literature Review: A Complete Guide

Meta-analysis methodology from protocol to pooled effect size: PRISMA 2020, Cochrane Handbook, heterogeneity (I²), forest plots, and sensitivity analysis.

By Angel Reyes

What is a meta-analysis?

A meta-analysis is a quantitative synthesis of the results of multiple studies addressing the same research question. It extracts an effect size from each study — a mean difference, a risk ratio, an odds ratio, a correlation, a standardized mean difference — and pools them into a weighted summary estimate, typically using a fixed-effect or random-effects model. The result is a single estimate with a confidence interval, a forest plot that displays each study's contribution, and a heterogeneity statistic (most commonly I²) that tells the reader how much the true effect varies across studies.
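The weighting and heterogeneity machinery described above is compact enough to sketch directly. Below is a minimal, illustrative Python implementation of inverse-variance fixed-effect pooling with Cochran's Q and I²; the function name and numbers are hypothetical, and a real analysis should use dedicated tools such as RevMan or R's metafor:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I-squared.

    effects   : per-study effect sizes (e.g. log risk ratios)
    variances : per-study sampling variances
    """
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))              # standard error of the pooled estimate
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # 95% confidence interval
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    # I-squared: percentage of total variability due to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, q, i2
```

Each study's weight is the reciprocal of its variance, so larger, more precise studies pull the pooled estimate harder; this is exactly the quantity a forest plot visualizes with box sizes.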

The meta-analytic method was developed across the twentieth century by Karl Pearson, Ronald Fisher, William Cochran, and — in the form most researchers use today — Gene Glass, who coined the term in 1976. Larry Hedges, Ingram Olkin, and later Julian Higgins refined the statistical machinery. The canonical contemporary reference is the Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al., current edition), which governs meta-analyses of intervention effects across Cochrane reviews. For methodological depth, Borenstein, Hedges, Higgins, and Rothstein's Introduction to Meta-Analysis (2nd ed., 2021) is the standard textbook.

Almost every meta-analysis is embedded in a systematic review: you cannot pool effects without first running a PRISMA-compliant search, screening, and extraction process. Reporting therefore follows PRISMA 2020 with meta-analysis-specific items, and where applicable the PRISMA-NMA extension for network meta-analyses.

When to use a meta-analysis

A meta-analysis is the right choice when:

  1. You have a focused effect question. "Does intervention A reduce outcome B compared with comparator C in population D?" is a meta-analytic question. "What is known about X?" is not.
  2. Multiple studies report comparable effect sizes. Pooling requires that studies measure the outcome on the same (or transformable) scale. If outcomes are reported inconsistently, meta-analysis may not be possible.
  3. Study designs are broadly similar. Pooling randomized trials with observational studies is legitimate only with careful subgroup or sensitivity analyses.
  4. Heterogeneity is moderate. With very high heterogeneity (I² > 75%), a single pooled estimate may mislead. Consider subgroup meta-analyses or narrative synthesis instead.
  5. You have the statistical capacity. Meta-analysis requires comfort with effect size metrics, weighting schemes, publication-bias diagnostics, and software — RevMan, R (metafor), Stata, or CMA. The team at statisticsforresearch.com provides detailed tutorials on effect sizes and heterogeneity.

Contrast with a systematic review without pooling (narrative synthesis of study-level findings), a scoping review (maps the landscape), an umbrella review (synthesizes prior meta-analyses), and a rapid review (which may include simplified meta-analyses under time pressure).

Step-by-step process

A meta-analysis extends the systematic review pipeline with a quantitative analysis phase:

  1. Protocol with analysis plan. Register on PROSPERO. Specify the effect size metric (RR, OR, SMD, MD, HR), the model (fixed effect vs. random effects — usually random, via DerSimonian-Laird or REML), subgroup and sensitivity analyses, and publication-bias diagnostics.
  2. Exhaustive search strategy. As for a systematic review — MEDLINE, Embase, CINAHL, and topic-specific databases, plus trial registries (ClinicalTrials.gov, WHO ICTRP) to mitigate publication bias.
  3. Dual screening and risk-of-bias assessment. Screen in duplicate. Assess risk of bias with Cochrane RoB 2 (randomized) or ROBINS-I (non-randomized). Bias ratings feed into sensitivity analyses later.
  4. Quantitative data extraction. Extract effect size inputs in duplicate: means, standard deviations, sample sizes, event counts, or reported effect sizes with variances. When studies report incompletely, contact authors or use standard conversion formulas from the Cochrane Handbook.
  5. Synthesis and reporting. Pool using a random-effects model unless clinical and methodological homogeneity justify a fixed-effect model. Report the pooled estimate, 95% confidence interval, I², τ², prediction interval, forest plot, and funnel plot with Egger's test for publication bias. Run pre-specified subgroup and sensitivity analyses. Assess certainty with GRADE.
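The random-effects pooling in step 5 can be sketched with the DerSimonian-Laird estimator. This stdlib-only Python sketch is illustrative only (function name and inputs are hypothetical, and the prediction interval uses a normal quantile, whereas a t quantile on k − 2 degrees of freedom is recommended when few studies are pooled); in practice use metafor's REML or RevMan:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    w = [1.0 / v for v in variances]
    sw, sw2 = sum(w), sum(wi * wi for wi in w)
    # Fixed-effect estimate and Cochran's Q, needed for the tau^2 estimator
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    tau2 = max(0.0, (q - df) / (sw - sw2 / sw))     # between-study variance
    # Random-effects weights add tau^2 to each study's variance
    wr = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # 95% prediction interval: where a *new* study's true effect may fall
    # (normal approximation; use a t quantile with few studies)
    pi_half = 1.96 * math.sqrt(tau2 + se ** 2)
    pi = (pooled - pi_half, pooled + pi_half)
    return pooled, ci, tau2, pi
```

The prediction interval is always at least as wide as the confidence interval, because it adds the between-study variance τ² on top of the pooled estimate's own uncertainty; when heterogeneity is high, the difference is the headline finding.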

Reporting standards

Follow PRISMA 2020 with full reporting of the analytic decisions: effect metric, model, weighting scheme, heterogeneity estimator, handling of missing data, and software/version. The Cochrane Handbook chapters 9 and 10 are authoritative. For network meta-analyses, add PRISMA-NMA (Hutton et al., 2015). Publication-bias diagnostics (funnel plot, Egger's test, trim-and-fill) should be reported when ≥ 10 studies are pooled, per Cochrane guidance. See the reporting standards overview.
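Egger's test mentioned above amounts to regressing each study's standardized effect on its precision and inspecting the intercept: under no small-study effects the regression passes through the origin. A minimal sketch under that framing (simple OLS only; a full test also reports the intercept's standard error and p-value, which metafor's implementation provides):

```python
import math

def egger_regression(effects, variances):
    """Egger's regression sketch: standardized effect on precision.

    An intercept far from zero suggests funnel-plot asymmetry
    (small-study effects, possibly publication bias).
    """
    se = [math.sqrt(v) for v in variances]
    y = [e / s for e, s in zip(effects, se)]   # standardized effects
    x = [1.0 / s for s in se]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return intercept, slope
```

With identical true effects across studies of different sizes, the points fall on a line through the origin, so the intercept is near zero and the slope recovers the common effect.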

Common pitfalls

  • Pooling incomparable studies. "Apples and oranges" meta-analysis — combining studies with different populations, interventions, or outcomes — yields a number that has no clinical meaning. Prefer sub-grouping or narrative synthesis.
  • Ignoring heterogeneity. A pooled estimate with I² = 85% needs a prediction interval and a serious discussion of sources of variability, not a confident point estimate.
  • Publication bias unassessed. With fewer than 10 studies, formal tests are underpowered, but the risk remains. Search trial registries and grey literature.
  • Overfitting subgroup analyses. Every unplanned subgroup inflates type I error. Pre-specify subgroups in the protocol.
  • Black-box software reliance. RevMan or metafor will produce output regardless of whether the underlying data are sensible. Always cross-check extracted data and replicate at least one effect calculation by hand.
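Replicating one effect calculation by hand, as the last pitfall recommends, is straightforward for a standardized mean difference. Here is an illustrative sketch of Hedges' g with its small-sample correction, following the standard formulas in Borenstein et al.; the function name and example inputs are hypothetical:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                   / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # correction factor J
    g = j * d                                # Hedges' g
    # Approximate sampling variance of g
    var_g = j ** 2 * ((n1 + n2) / (n1 * n2) + d ** 2 / (2.0 * (n1 + n2)))
    return g, var_g
```

Comparing one such hand-computed g (and its variance) against the software's extracted row is a cheap audit that catches sign flips, swapped groups, and SD-versus-SE confusion before they reach the forest plot.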

Tools & templates

Use the Data Extraction Form Template configured for quantitative outcomes (effect size, variance, sample size) and the Literature Review Matrix for study-level characteristics. The PRISMA Flow Diagram Template tracks inclusion. For analysis, RevMan (Cochrane), R (metafor, meta), and Stata (metan) are standard. All templates are available in the templates library.

Next steps

Meta-analysis is as much a data-management problem as a statistical one. The effect sizes are only as trustworthy as the extracted numbers feeding them, and extraction errors propagate straight into the forest plot. Lock your extraction matrix early, extract in duplicate, and audit a random 10% before you model. The Subthesis Literature Matrix gives you structured columns for means, standard deviations, sample sizes, and effect estimates — ready for export into RevMan or R.