Critical Reading


1. Introduction

It is estimated that every year some 20,000 biomedical journals publish around six million articles, supplemented by about 17,000 biomedical books. The reader therefore needs to be able to interpret trial results critically and to evaluate the quality of the design of the clinical trials published in the scientific literature or elsewhere.

The three most common sources of error in publications are:
  1. Misuse and misinterpretation of statistical tests and their outcomes, due to confusion about the meaning of the numbers (estimates) and the interpretation of hypothesis tests (p-values, power).

  2. Data dredging (data mining), or testing large numbers of hypotheses in a single data set in the search for a positive effect. When numerous hypotheses are tested, it is virtually certain that some will falsely appear statistically significant, because almost every data set with any degree of randomness is likely to contain some coincidental correlations. If they are not cautious, researchers using data mining techniques can easily be misled by these apparently significant results (a short simulation illustrating this appears after this list).

  3. Bias. In research, bias occurs when systematic error is introduced into data sampling or hypothesis testing by selecting or encouraging one outcome or answer over others. Of note, bias is not always introduced intentionally; for example, calibration errors or unknown confounding variables may also introduce bias. Bias affects the results of a clinical trial when the estimated effect of interest deviates from its true value: estimates of association can be systematically larger or smaller than the true association. In extreme cases, bias can produce a perceived association that is directly opposite to the true association. Bias may also take the form of systematic favouritism in the way results are reported or in the way they are interpreted in the discussion and conclusions on clinical trial results.
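
To make the data-dredging problem concrete, the short simulation below is an illustrative sketch (not taken from the EUPATI material): it repeatedly compares two groups drawn from exactly the same distribution, so no real effect exists in any comparison, yet roughly 5% of the tests still come out "significant" at the conventional p < 0.05 threshold.

```python
import math
import random

random.seed(1)

def two_sample_z_test(a, b):
    """Approximate two-sided p-value for a difference in means (large samples)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(var_a / na + var_b / nb)
    z = (mean_a - mean_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_tests, false_positives = 100, 0
for _ in range(n_tests):
    control = [random.gauss(0, 1) for _ in range(100)]  # no true effect:
    treated = [random.gauss(0, 1) for _ in range(100)]  # both arms are identical
    if two_sample_z_test(control, treated) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons reached p < 0.05 by chance alone")
```

Findings that emerge from testing many hypotheses in one data set should therefore be treated with caution unless the analyses were pre-specified or corrected for multiple testing.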

2. How to Perform Critical Reading?

The reader must take into account relevant information from the best available sources. The reader should search the literature to identify relevant articles using the available tools, e.g. PubMed (https://pubmed.ncbi.nlm.nih.gov/) or PubMed Central (https://www.ncbi.nlm.nih.gov/pmc/). The reader could also consider texts published by reputable organisations aiming to inform patients and lay people. However, the reader will have to critically appraise any publication for its quality and usefulness.

The reader should address the following questions:

Critical reading checklist

1. Is the trial relevant to the reader’s needs for information?
  • Are the objectives and the hypotheses clear?
  • Can the results of the trial be generalised to the broader population? The reader needs to consider to whom the results of the trial can be applied. The characteristics of the recruited population sample need to be described.
  • Are all treatments used in the trial clearly detailed, and would the treatments under investigation be relevant to the reader?
  • What are the patients’ likely benefits and harms from the treatment?
  • Does a conflict of interest exist? Consider whether the authenticity and the objectivity of the research can be relied upon.

2. Is the trial methodology appropriate to assess the stated hypothesis?

  • Is the control treatment a fair comparator that corresponds to current practice (placebo, available therapy or best supportive care, or a historical control group)?
  • Is the trial population clearly defined? Has the whole population or only a sub-set been studied, and is there any possible selection bias? Consider participants who dropped out of the trial, the reasons for dropping out, and their relevance for the results and conclusions of the research.
  • Was the control group well matched? Are the exclusion criteria valid?
  • Are the trial endpoints well defined and meaningful?
  • Is it clear how the trial was powered for the primary endpoint?
  • Was the trial long enough for the outcome measure to occur and to capture enough events?
3. Are the results convincing?

  • The results should be clearly and objectively presented in sufficient detail.
For example, are the results broken down appropriately? Are the statistics used appropriate? Are there any alternative explanations for the results?

  • Identify the rate of loss to follow-up and how non-responders have been dealt with.
For example, have they been considered as treatment failures or included separately in the analysis? (A small worked example follows this list.)

  • Check for any bias. Consider the possibility of any confounding variables. 
For example, age, social class, smoking, disease duration, co-morbidity. Assess whether the researchers controlled or reduced this risk.
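
As a concrete illustration of the loss-to-follow-up point above, the small worked example below uses hypothetical numbers (they are not from the EUPATI material) to show how the apparent response rate changes depending on whether dropouts are counted as treatment failures or simply excluded from the analysis.

```python
# Hypothetical trial arm: 100 patients randomised, 60 respond, 25 lost to follow-up.
randomised = 100
responders = 60
lost_to_follow_up = 25

# Analysing only patients who completed follow-up inflates the response rate.
completers_rate = responders / (randomised - lost_to_follow_up)  # 60 / 75 = 0.80

# Counting dropouts as treatment failures (a conservative, intention-to-treat
# style approach) gives a lower, more cautious estimate.
failures_imputed_rate = responders / randomised                  # 60 / 100 = 0.60

print(f"Completers-only response rate: {completers_rate:.0%}")
print(f"Dropouts counted as failures:  {failures_imputed_rate:.0%}")
```

Neither figure is automatically the "right" one; the point is that the reader should be able to see which approach the authors used and how sensitive the conclusions are to that choice.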

4. Is the discussion section convincing?

  • The discussion should include all the results of the trial and not just those that have supported the initial hypothesis.
  • Have the initial objectives been met, and has the research question been answered?
  • Have the authors taken into account possible bias and acknowledged the possible limitations of the trial?
  • Check whether any incorrect generalisations have been made by inappropriately applying the trial results to a different type of population.
  • Check whether the discussion fits with existing knowledge and opinion (always look for other publications on the same topic).
5. Is the demonstrated effect clinically significant?

Critically assess whether the claimed effects are clinically relevant, i.e. whether they have a meaningful effect on the health of the patient. For instance, a statistically significant effect may be of such low magnitude that it is not clinically relevant for the patient. Of note, the larger the size of the trial, the smaller the magnitude of the effect that can be detected. As such, a statistically significant but not clinically relevant effect could be the result of an over-sized or over-powered clinical trial.
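
The sketch below illustrates this point with assumed numbers (an illustration, not taken from the EUPATI text): a difference in systolic blood pressure of 0.5 mmHg is too small to matter to a patient, yet with enough participants it becomes highly statistically significant.

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a standard normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

sd, true_diff = 15.0, 0.5  # mmHg; illustrative assumptions, clinically trivial difference
for n_per_arm in (100, 1_000, 50_000):
    se = sd * math.sqrt(2 / n_per_arm)  # standard error of the difference in means
    z = true_diff / se                  # expected test statistic for that trial size
    print(n_per_arm, round(two_sided_p_from_z(z), 4))

# With 100 patients per arm the 0.5 mmHg difference is nowhere near significant;
# with 50,000 per arm the same trivial difference gives p < 0.001.
```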

On the other hand, the absence of evidence does not mean evidence of absence of any effect. Indeed, when a statistically significant difference is not found between the trial arms, it does not mean that the compared treatments are equivalent. The statistical test measures the strength of evidence against the null hypothesis of no difference, not the evidence for it. Even if the treatments truly differ in efficacy, a statistical test may be non-significant due to chance (type II error) or because of an insufficient amount of available information (small trial size, lack of power).
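
To illustrate the lack-of-power point, the sketch below uses assumed numbers (not taken from the EUPATI text) to approximate the power of a two-arm trial to detect a genuine 5 mmHg difference in blood pressure: with only 20 patients per arm, the trial would miss the real effect most of the time.

```python
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_arm(true_diff, sd, n_per_arm):
    """Approximate power of a two-sided z-test (alpha = 0.05) for a difference in means."""
    se = sd * math.sqrt(2 / n_per_arm)               # standard error of the difference
    z_alpha = 1.96                                   # critical value, two-sided alpha = 0.05
    return 1 - normal_cdf(z_alpha - true_diff / se)  # probability of detecting the effect

# Assume a genuine, clinically meaningful difference of 5 mmHg (SD 15 mmHg).
for n in (20, 50, 145):
    print(n, round(power_two_arm(5, 15, n), 2))

# With 20 patients per arm the power is only about 0.18, so more than 80% of such
# trials would miss the true effect (a type II error). Roughly 145 patients per arm
# would be needed for 80% power.
```

In other words, a "negative" result from a small, under-powered trial says very little about whether the treatments are actually equivalent.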

6. Are the conclusions valid?

The conclusions provided by the author should be supported by the available data. Check that the conclusions relate to the stated aims and objectives of the trial.

For all of the above questions it is important that the reader is sufficiently knowledgeable and equipped to critically appraise and review the data, so as to avoid drawing erroneous conclusions. Knowledge of (the basics of) the R&D process and methodology (including statistics), as well as of legislative requirements, is of paramount importance.

3. Further Reading

  • Greenhalgh T. How to read a paper: getting your bearings (deciding what the paper is about). BMJ. 1997;315(7102):243-6.

  • Greenhalgh T. Assessing the methodological quality of published papers. BMJ. 1997;315(7103):305-8.

  • Ioannidis JPA. Contradicted and Initially Stronger Effects in Highly Cited Clinical Research. JAMA. 2005;294(2):218-228.

  • Montori VM, et al. Users' guide to detecting misleading claims in clinical research reports. BMJ. 2004;329:1093-1096.

  • Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomised Trials. PLoS Medicine. 2010;7(3).

  • Straus SE, Richardson WS, Glasziou P, Haynes RB. Evidence-based Medicine: How to Practice and Teach EBM. 3rd ed. Edinburgh: Elsevier Churchill Livingstone; 2005.