
The Blight of the Type II Error: When No Difference Does Not Mean No Difference

Abstract

Much focus in research has been given to minimizing type I errors, where we incorrectly conclude that there is a difference between 2 treatments or populations. In contrast, our standard scientific method and power analysis allow for a much greater rate of type II errors, in which we fail to show a difference when, in fact, one exists (≥20% rate of type II errors vs ≤5% rate of type I errors). Additional factors that can cause type II errors may push their prevalence well beyond 20%. Failure to reject the null hypothesis may be a tolerable outcome in a certain proportion of studies. However, type II errors may become dangerous when the conclusions of a study overreach, incorrectly stating that there is no difference when, in fact, a difference exists. Type II errors resulting in overreaching conclusions may impede incremental advances in our field, as the advantages of small improvements may go undetected. To avert this danger in studies that fail to reach statistical significance, we as researchers must be precise in our conclusions, stating simply that the null hypothesis could not be rejected.
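To make the asymmetry concrete, the sketch below (not from the article) works through a conventional power analysis for a two-sample comparison under the usual α = 0.05 and power = 0.80 (β = 0.20) conventions, using a standard normal approximation; the effect size d = 0.5 and the underpowered sample size of 30 per group are hypothetical illustrative values.

```python
# A minimal sketch of the alpha/beta asymmetry the abstract describes:
# designs conventionally cap the type I error at 5% but tolerate a
# type II error of 20% (power = 0.80). Effect size d = 0.5 and the
# underpowered n = 30 below are assumed values for illustration.
from scipy.stats import norm

alpha = 0.05   # tolerated type I error rate (two-sided)
beta = 0.20    # tolerated type II error rate -> power = 0.80
d = 0.5        # assumed standardized effect size (Cohen's d)

# Normal-approximation sample size per group for a two-sample test:
# n ~ 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(1 - beta)
n_per_group = 2 * (z_alpha + z_beta) ** 2 / d ** 2
print(f"n per group for 80% power: {n_per_group:.0f}")  # ~63

# An underpowered study inflates the type II error well past 20%:
n = 30
power = norm.cdf(d * (n / 2) ** 0.5 - z_alpha)
print(f"power at n=30: {power:.2f}, type II error: {1 - power:.2f}")
# ~0.49 power, i.e., a type II error rate near 50%
```

Under these assumptions, halving the planned sample size roughly doubles the tolerated type II error rate, which is one route by which the true prevalence of type II errors can exceed the nominal 20%.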
