1. Summing it all up: Synthesis of clinical research

1.2. Meta-Analysis

To estimate the clinical and economic impact of a new medicine, a proper HTA must combine the different pieces of information once they have been identified. A meta-analysis includes data from several studies on a particular outcome, or combines estimates of effects from several studies. By doing so, it produces more valid results than a single study, because the increased sample size reduces random error. It also allows heterogeneity in results to be explored, such as differences that may arise because trials were conducted using different methods.
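The basic idea of pooling effect estimates can be illustrated with a fixed-effect, inverse-variance meta-analysis: each study is weighted by the inverse of its variance, so larger, more precise studies count more. The sketch below uses purely hypothetical effect sizes (e.g. log odds ratios) and standard errors, and also computes Cochran's Q as a simple check for heterogeneity across studies.

```python
import math

# Hypothetical per-study treatment effects (e.g. log odds ratios) and standard errors.
effects = [0.30, 0.45, 0.25]
ses = [0.15, 0.20, 0.10]

# Inverse-variance weights: precise studies (small SE) get large weights.
weights = [1 / se ** 2 for se in ses]

# Fixed-effect pooled estimate and its standard error.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q statistic: a large Q (relative to k - 1 studies) suggests
# heterogeneity, i.e. the studies may not share one underlying effect.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
```

Note that the pooled estimate ends up closest to the third study, which has the smallest standard error and hence the largest weight; in practice a random-effects model is often used instead when heterogeneity is present.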

Some HTA bodies ask the marketing authorisation holder (MAH) to provide a meta-analysis, as MAHs have access to more data. HTA bodies that do not receive an individual patient-level meta-analysis may need to synthesise the information from study reports, using already analysed aggregate (population-level) data. Patient-level data are very powerful, as they provide more opportunity for interrogation and a better understanding of the effect of a medicine in specific subgroups.

Meta-analyses are often used to combine the results of randomised controlled trials, based on the assumption that these data share an underlying relationship that allows them to be combined mathematically. However, it becomes more difficult to perform a meta-analysis when combining the results of randomised trials with those from other types of study design, such as non-experimental studies. In some cases, a meta-analysis is entirely inappropriate. Cost-effectiveness analyses, for example, may include studies that do not share a consistent structure and therefore cannot be combined through meta-analysis.

Meta-analysis can be useful when comparing outcomes or treatments for which multiple studies are available. However, what if none of the studies identified actually compare the two alternatives that a decision maker is interested in comparing? Or, what if a decision maker is interested in understanding the best of many competing alternatives?

New methods called ‘indirect’ and ‘mixed treatment’ comparisons have been developed to provide information about how different interventions compare, even when no study directly comparing the treatment alternatives is available. For example, if a decision maker is interested in how new medicines A and B compare, but only has randomised trials comparing A with placebo and B with placebo, these methods can be employed to understand how A compares with B.
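The simplest version of this is the Bucher adjusted indirect comparison: the A-versus-B effect is estimated as the difference between the A-versus-placebo and B-versus-placebo effects, with the uncertainties of both trials added together. A minimal sketch, using hypothetical log odds ratios:

```python
import math

# Hypothetical trial results on the log odds ratio scale (illustrative only).
d_a_placebo, se_a = -0.50, 0.12   # A vs placebo
d_b_placebo, se_b = -0.30, 0.15   # B vs placebo

# Bucher adjusted indirect comparison of A vs B via the common placebo arm.
d_a_b = d_a_placebo - d_b_placebo
# Variances add, so the indirect estimate is always less precise
# than either of the direct trial estimates it is built from.
se_a_b = math.sqrt(se_a ** 2 + se_b ** 2)
```

Here the indirect estimate favours A (a more negative log odds ratio), but its standard error is larger than that of either trial, reflecting the extra uncertainty introduced by comparing through a common comparator.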

Mixed treatment methods can also strengthen our knowledge of how A compares with B by combining studies that directly compare the two treatments with studies that provide an indirect comparison.
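In the simplest case, the direct and indirect estimates of A versus B can themselves be pooled with inverse-variance weights, yielding a mixed estimate that is more precise than either source alone. The numbers below are hypothetical:

```python
import math

# Hypothetical direct and indirect estimates of A vs B (log odds ratios).
direct, se_direct = -0.25, 0.20
indirect, se_indirect = -0.20, 0.19

# Inverse-variance pooling of the two sources of evidence.
w_dir, w_ind = 1 / se_direct ** 2, 1 / se_indirect ** 2
mixed = (w_dir * direct + w_ind * indirect) / (w_dir + w_ind)
mixed_se = math.sqrt(1 / (w_dir + w_ind))
```

The pooled standard error is smaller than either input standard error, which is the sense in which combining direct and indirect evidence "strengthens" the comparison. Full mixed treatment comparison (network meta-analysis) models generalise this idea to networks of many treatments, and also check that the direct and indirect evidence are consistent before pooling them.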

Reporting checklists for systematic reviews and meta-analyses have been developed to help readers interpret the findings of an analysis. Most recently, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was devised to guide analysts conducting reviews towards proper reporting (3). It consists of 27 items to be addressed when reporting a systematic review, and it encourages the use of a flow chart so that readers can understand the process that led to the selection of studies. This helps users of systematic reviews to understand when and where the information came from, and how it was synthesised.