5. Summing it all up: Synthesis of clinical research


1. Summing it all up: Synthesis of clinical research


The clinical effectiveness assessment at the core of HTA relies on the collection and synthesis of data. It is important for decision-makers that all relevant clinical evidence is identified in a transparent fashion. To this end, the methods described in the following sections are used to carefully gather, synthesise, and assess the implications of all relevant clinical evidence.


1.1. Systematic reviews

A systematic review is a thorough, comprehensive, and explicit way of interrogating the medical literature. It typically involves several steps, including:

1. asking an answerable question (often the most difficult step),

2. identifying one or more databases to search,

3. developing an explicit search strategy (a sketch of such a search appears after this list),

4. selecting titles, abstracts, and manuscripts based on explicit inclusion and exclusion criteria, and

5. abstracting data in a standardised format.
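
As an illustration of steps 2 and 3, here is a minimal sketch of an explicit, reproducible search run against PubMed through the NCBI E-utilities API. The query terms ("type 2 diabetes", "drug X") are hypothetical placeholders, not a real review's strategy; an actual systematic review would document and justify the full strategy for every database searched.

```python
# Illustrative sketch only: an explicit, reproducible PubMed search via the
# NCBI E-utilities esearch endpoint. The query terms are hypothetical.
import json
import urllib.parse
import urllib.request

# Hypothetical PICO-style query: population AND intervention AND design filter
query = (
    '("type 2 diabetes"[Title/Abstract]) AND '
    '("drug X"[Title/Abstract]) AND '
    '(randomized controlled trial[Publication Type])'
)

url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
    + urllib.parse.urlencode({"db": "pubmed", "term": query,
                              "retmax": 20, "retmode": "json"})
)

with urllib.request.urlopen(url) as response:
    result = json.load(response)

print("Records found:", result["esearchresult"]["count"])
print("First PMIDs:", result["esearchresult"]["idlist"])
```

Recording the exact query string and the date it was run is what makes the search explicit and repeatable by others.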

Systematic reviews are key inputs to the HTA process and are driven by reproducible methods, which provide a structured, comprehensive overview of the published scientific evidence. Decision-makers may still differ – potentially quite broadly – in how they use the results of a systematic review or what weight they give to those results in relation to other factors. However, the systematic nature of these reviews gives decision-makers the best chance of maximising the role that scientific information can play in their decisions.

Systematic reviews can still result in a biased estimate of clinical impact if information from important studies is not identified, or if information regarding important outcomes from published studies is not reported in a review. Also, the quality of the information must be considered before authors draw conclusions. Identifying that 15 out of 20 studies reported positive outcomes is not helpful if those studies were prone to error by design. As such, systematic reviews should be critically assessed.

When interpreting a systematic review or meta-analysis, the reader or an HTA body cannot always rely on the available evidence alone and sometimes has to make a 'leap of faith': a smaller leap for systematic reviews that report appropriate approaches to the items in a reporting checklist (such as PRISMA, discussed in the next section), and a bigger leap for those providing less or no detail.


1.2. Meta-analysis

To estimate the clinical and economic impact of a new medicine, it is helpful for a proper HTA to combine the different pieces of information once they have been identified. A meta-analysis includes data from several studies about a particular outcome, or combines estimates of effects from several studies. By doing so, it produces more valid results than a single study, because it reduces random error through increased sample size. It also allows heterogeneity in results to be explored, such as differences that may occur because trials were conducted using different methods.
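
As a concrete illustration, the sketch below pools three hypothetical study results (log risk ratios with their standard errors) using fixed-effect, inverse-variance weighting, one common meta-analytic approach; all numbers are invented for illustration only.

```python
# A minimal sketch of fixed-effect, inverse-variance meta-analysis pooling
# hypothetical log risk ratios from three studies. Weights are 1/variance,
# so larger, more precise studies contribute more - this is how pooling
# reduces random error relative to any single study.
import math

# Hypothetical study results: (log risk ratio, standard error)
studies = [(-0.22, 0.10), (-0.15, 0.08), (-0.30, 0.15)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval on the log scale, then back-transformed
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled risk ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Note that the pooled confidence interval is narrower than that of any single study, reflecting the increased effective sample size.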

Some HTA bodies ask the marketing authorisation holder (MAH) to provide a meta-analysis, as MAHs have access to more data. HTA bodies that do not receive an individual patient-level meta-analysis may need to synthesise such information from study reports, using already-analysed aggregate (population-level) data. Patient-level data are very powerful, as they provide more opportunity for interrogation and a better understanding of the effect of a medicine in specific subgroups.

Meta-analyses are often used to combine the results of randomised controlled trials, based on the assumption that these data share an underlying relationship that allows them to be combined mathematically. However, it becomes more difficult to perform a meta-analysis when combining the results of randomised trials with those from other types of study design, such as non-experimental studies. In some cases, a meta-analysis is entirely inappropriate. Cost-effectiveness analyses, for example, may include studies that do not share a consistent structure and cannot be combined through meta-analysis.

Meta-analysis can be useful when comparing outcomes or treatments for which multiple studies are available. However, what if none of the studies identified actually compare the two alternatives that a decision maker is interested in comparing? Or, what if a decision maker is interested in understanding the best of many competing alternatives?

Newer methods called 'indirect' and 'mixed treatment' comparisons have been developed to provide information about how different interventions compare, even when no study directly comparing the treatment alternatives is available. For example, if a decision maker is interested in how new medicines A and B compare, and only randomised trials comparing A with placebo and B with placebo exist, these methods can be employed to understand how A compares with B.

Mixed treatment methods can also strengthen our knowledge of how A compares with B by combining studies that directly compare these with studies that give an indirect comparison.
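
To make the indirect case concrete, the sketch below applies one widely used approach, the Bucher adjusted indirect comparison, to hypothetical A-versus-placebo and B-versus-placebo results; all numbers are invented for illustration.

```python
# A minimal sketch of an adjusted indirect comparison (Bucher method).
# With only A-vs-placebo and B-vs-placebo trials, the A-vs-B effect is
# estimated as the difference of the two log odds ratios, and the
# uncertainty of the indirect estimate combines both trials' uncertainty.
import math

# Hypothetical pooled results on the log odds ratio scale
log_or_a_vs_placebo, se_a = -0.40, 0.12   # medicine A vs placebo
log_or_b_vs_placebo, se_b = -0.25, 0.10   # medicine B vs placebo

# Indirect estimate of A vs B via the common placebo comparator
log_or_a_vs_b = log_or_a_vs_placebo - log_or_b_vs_placebo
se_a_vs_b = math.sqrt(se_a**2 + se_b**2)  # variances add for a difference

lo = log_or_a_vs_b - 1.96 * se_a_vs_b
hi = log_or_a_vs_b + 1.96 * se_a_vs_b
print(f"Indirect OR, A vs B: {math.exp(log_or_a_vs_b):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Because the variances add, an indirect estimate is always less precise than a direct head-to-head trial of the same size, which is why mixed treatment methods that also incorporate any available direct evidence can strengthen the comparison.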

Reporting checklists for systematic reviews and meta-analyses have been developed to help readers interpret the findings of an analysis. Most recently, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was devised to guide analysts conducting reviews toward proper reporting (3). It consists of 27 items to be addressed when reporting a systematic review. It encourages the use of a flow chart, so that readers can understand the process that led to the selection of studies. This helps users of systematic reviews to understand when and where the information came from, and how it was synthesised.


1.3. Modelling

Systematic reviews coupled with meta-analysis can provide more precise estimates of the relative impact of various interventions. But how can this information be used to support decision-making? Simulation modelling is a technique that allows different types of information (clinical, epidemiological, economic, etc.) to be combined into an overall picture of the relative costs and effectiveness of medical treatment.

Simulation models are not constrained to a single type of information or a single outcome. These models are useful, particularly when a lengthy wait for more evidence is not feasible. At their core, simulation models combine probabilities to obtain estimates of expected values. These values can be either clinical outcomes, or costs, making simulation modelling useful for economic evaluation.

These models require assumptions to be made about various inputs, which ultimately influence the endpoints calculated by the model. The accuracy and reliability of these assumptions will vary according to the treatment in question and the quantity and quality of evidence that supports the value chosen for each 'assumption'. Many HTA guidelines provide specific guidance on selecting and supporting those model 'inputs', to assist reviewers in making judgements about the overall quality and 'probability' of the model's predictions.
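
To show the core idea of combining probabilities into expected values, here is a minimal two-arm decision-tree sketch. Every input (response probabilities, costs, QALYs) is a hypothetical placeholder; a real HTA model would justify each one with evidence.

```python
# A minimal sketch of a two-arm decision-tree model with hypothetical
# inputs. Each arm has a probability of treatment response, with costs
# and QALYs attached to responding or not; expected values are simply
# probability-weighted averages of the possible outcomes.

# Hypothetical model inputs for each treatment arm
arms = {
    "new medicine": dict(p_resp=0.65, cost_resp=12_000, cost_no=18_000,
                         qaly_resp=1.40, qaly_no=0.90),
    "comparator":   dict(p_resp=0.50, cost_resp=6_000, cost_no=11_000,
                         qaly_resp=1.35, qaly_no=0.90),
}

results = {}
for name, a in arms.items():
    exp_cost = a["p_resp"] * a["cost_resp"] + (1 - a["p_resp"]) * a["cost_no"]
    exp_qaly = a["p_resp"] * a["qaly_resp"] + (1 - a["p_resp"]) * a["qaly_no"]
    results[name] = (exp_cost, exp_qaly)
    print(f"{name}: expected cost {exp_cost:,.0f}, expected QALYs {exp_qaly:.3f}")

# Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY
d_cost = results["new medicine"][0] - results["comparator"][0]
d_qaly = results["new medicine"][1] - results["comparator"][1]
print(f"ICER: {d_cost / d_qaly:,.0f} per QALY gained")
```

Changing any single input changes the expected values, which is exactly why HTA reviewers scrutinise how each model input was selected and supported.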


2. Conclusion

Synthesising information is a key feature of clinical effectiveness assessment and HTA. The tools for synthesis – systematic reviews, meta-analysis, and simulation modelling – are essential components of this work. In evaluating the potential gains from meta-analysis, attention to the decision-making context is essential, as is a clear recognition of the limitations of these tools.