It is estimated that every year, some 20,000 biomedical journals publish around six million articles, supplemented by about 17,000 biomedical books. Readers must therefore be able to interpret trial results critically and to evaluate the design quality of the clinical trials published in the scientific literature or elsewhere.
The three most common sources of errors in publication are:
- The risk of misuse and misinterpretation of statistical tests and their outcomes, due to confusion about the meaning of the reported numbers (estimates) and the interpretation of hypothesis tests (p-values, power).
- Data dredging (data mining), or testing large numbers of hypotheses in a single data set in the search for a positive effect. When numerous hypotheses are tested, it is virtually certain that some will falsely appear statistically significant, because almost every data set with any degree of randomness is likely to contain some coincidental correlations. If not cautious, researchers using data-mining techniques can easily be misled by these apparently significant results.
- Bias. In research, bias occurs when systematic error is introduced into data sampling or hypothesis testing by selecting or encouraging one outcome or answer over others. Of note, bias is not always introduced intentionally; for example, calibration error or unknown confounding variables may introduce bias. Bias affects the results of a clinical trial when the estimated effect of interest deviates from its true value: estimates of association can be systematically larger or smaller than the true association. In extreme cases, bias can cause a perceived association that is directly opposite to the true association. Bias may also take the form of systematic favouritism in the way results are reported or in the way they are interpreted in the discussion and conclusions on clinical trial results.
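The data-dredging problem can be made concrete with a small simulation. Under a true null hypothesis a p-value is uniformly distributed on [0, 1], so each test carries a 5% false-positive risk at the conventional threshold, and the probability that at least one of m independent tests is "significant" is 1 − 0.95^m. The sketch below (plain Python; the constants and variable names are illustrative, not from any specific trial) shows that with 100 hypotheses and no real effect at all, nearly every simulated "study" produces at least one spurious finding:

```python
import random

random.seed(0)

ALPHA = 0.05          # conventional significance threshold
N_HYPOTHESES = 100    # hypotheses tested in a single data set
N_SIMULATIONS = 2000  # repeated "studies", all with NO real effect

# Under the null, each p-value is uniform on [0, 1], so a false
# positive occurs with probability ALPHA per test.
false_positive_studies = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_HYPOTHESES)]
    if min(p_values) < ALPHA:
        false_positive_studies += 1

observed = false_positive_studies / N_SIMULATIONS
expected = 1 - (1 - ALPHA) ** N_HYPOTHESES  # analytic probability

print(f"expected: {expected:.3f}, observed: {observed:.3f}")
```

For 100 tests the analytic probability is about 0.994, and the simulation closely matches it. This is why multiplicity corrections (e.g. the Bonferroni adjustment, which divides the threshold by the number of tests) or pre-specified hypotheses are needed when many comparisons are made in one data set.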