Statistical significance is a fundamental aspect of hypothesis testing. In any experiment that uses a sample drawn from a population (for instance, a sample of patients with a particular disease), there is the possibility that an observed effect is due to differences between the sample and the whole population (sampling error) rather than to the treatment under study. A test result is called statistically significant if it is judged unlikely to have occurred by sampling error alone, according to a threshold probability: the significance level.
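As a minimal sketch of this idea (the data, group sizes, and the conventional 0.05 significance level are assumptions, not taken from the text above), a two-sample t-test compares a hypothetical treatment group with a control group and checks the resulting p-value against the significance level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical measurements for a control group and a treatment group.
control = rng.normal(loc=100.0, scale=15.0, size=50)
treatment = rng.normal(loc=108.0, scale=15.0, size=50)

# Two-sample t-test: the p-value estimates how likely a difference at
# least this large would be if sampling error alone were at work
# (i.e., if there were no real treatment effect).
result = stats.ttest_ind(treatment, control)

alpha = 0.05  # significance level (threshold probability)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < alpha:
    print("Statistically significant at the 0.05 level")
else:
    print("Not statistically significant at the 0.05 level")
```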
Statistical significance does not imply importance or practical significance. For example, the term clinical significance refers to the practical importance of a treatment effect. Researchers who focus solely on whether their results are statistically significant might report findings that are not relevant in practice. It is therefore prudent to report an effect size along with p-values. An effect size measure quantifies the strength of an effect and makes it easier to draw conclusions about its practical implications.
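The following sketch illustrates this distinction using Cohen's d, one common effect size measure, reported alongside the p-value for the same kind of two-group comparison; the data and group labels are hypothetical, and the very large samples are chosen deliberately so that a tiny effect still reaches statistical significance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Very large hypothetical samples with a tiny true difference in means.
control = rng.normal(loc=100.0, scale=15.0, size=10_000)
treatment = rng.normal(loc=100.8, scale=15.0, size=10_000)

def cohens_d(a, b):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * np.var(a, ddof=1) + (n2 - 1) * np.var(b, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

result = stats.ttest_ind(treatment, control)
d = cohens_d(treatment, control)

# With samples this large, even a negligible effect can yield a small
# p-value; the effect size shows how little the difference matters in practice.
print(f"p = {result.pvalue:.4f}, Cohen's d = {d:.3f}")
```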