When more than one statistical test is performed in analysing the data from a clinical study, some statisticians and journal editors demand that a more stringent criterion be used for "statistical significance" than the conventional P<0.05.1 Many well-meaning researchers, eager for methodological rigour, comply without fully grasping what is at stake. Recently, adjustments for multiple tests (or Bonferroni adjustments) have found their way into introductory texts on medical statistics, which has increased their apparent legitimacy. This paper advances the view, widely held by epidemiologists, that Bonferroni adjustments are, at best, unnecessary and, at worst, deleterious to sound statistical inference.
Summary points
Adjusting statistical significance for the number of tests that have been performed on study data-the Bonferroni method-creates more problems than it solves
The Bonferroni method is concerned with the general null hypothesis (that all null hypotheses are true simultaneously), which is rarely of interest or use to researchers
The main weakness is that the interpretation of a finding depends on the number of other tests performed
The likelihood of type II errors is also increased, so that truly important differences are deemed non-significant
Simply describing what tests of significance have been performed, and why, is generally the best way of dealing with multiple comparisons
Adjustment for multiple tests
Bonferroni adjustments are based on the following reasoning.1-3 If a null hypothesis is true (for instance, two treatment groups in a randomised trial do not differ in terms of cure rates), a significant difference (P<0.05) will be observed by chance once in 20 trials. This is the type I error, or α. When 20 independent tests are performed (for example, study groups are compared with regard to 20 unrelated variables) and the null hypothesis holds for all 20 comparisons, the chance of at least one test being significant is no longer 0.05, but 0.64. The formula for the error rate across the study is 1−(1−α)^n, where n is the number of tests performed. The Bonferroni adjustment deflates the α applied to each test, so that the study-wide error rate remains at 0.05. The adjusted significance level is 1−(1−α)^(1/n) (in this case 0.00256), often approximated by the simpler formula α/n (here 0.0025). What is wrong with this statistical approach?
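The arithmetic above can be verified directly. The following sketch (my own illustration, not code from the paper) computes the study-wide error rate and the two adjusted significance levels for the worked example of 20 independent tests:

```python
def familywise_error_rate(alpha, n):
    """Chance of at least one significant result among n independent tests
    when every null hypothesis is true: 1 - (1 - alpha)**n."""
    return 1 - (1 - alpha) ** n


def exact_adjusted_alpha(alpha, n):
    """Per-test level that keeps the study-wide error rate at alpha:
    1 - (1 - alpha)**(1/n)."""
    return 1 - (1 - alpha) ** (1 / n)


def bonferroni_adjusted_alpha(alpha, n):
    """The common approximation: alpha / n."""
    return alpha / n


if __name__ == "__main__":
    alpha, n = 0.05, 20
    # Reproduces the figures quoted in the text.
    print(round(familywise_error_rate(alpha, n), 2))  # 0.64
    print(round(exact_adjusted_alpha(alpha, n), 5))   # 0.00256
    print(bonferroni_adjusted_alpha(alpha, n))        # 0.0025
```

Note that the error-rate formula assumes the 20 tests are statistically independent; for correlated outcomes the true study-wide error rate is lower, which is one reason the adjustment is conservative.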
Problems
Irrelevant null hypothesis
The first...