Oct 4

A paper co-authored by Andrew Gelman, a high-profile writer on statistics at Columbia:

Why we (usually) don’t have to worry about multiple comparisons*
Andrew Gelman, Jennifer Hill, Masanao Yajima
July 13, 2009

Abstract

Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise.

Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p-values corresponding to intervals of fixed width). Thus, multilevel models address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern.

[ … ]

The Bonferroni correction directly targets the Type 1 error problem, but it does so at the expense of Type 2 error. By changing the p-value needed to reject the null (or, equivalently, widening the uncertainty intervals), the number of claims of rejected null hypotheses will indeed decrease on average. While this reduces the number of false rejections, it also increases the number of instances in which the null is not rejected when in fact it should have been. Thus, the Bonferroni correction can severely reduce our power to detect an important effect.
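To see the size of that power loss, here is a small arithmetic sketch (the effect size and number of tests are hypothetical, not from the paper): with m tests, Bonferroni rejects at p < α/m, and the power of a two-sided z-test against a fixed alternative falls sharply.

```python
# Hypothetical numbers: power of a two-sided z-test against a true
# effect of 2.8 standard errors, with and without Bonferroni.
from scipy import stats

alpha, m = 0.05, 100           # family-wise level, number of tests
alpha_bonf = alpha / m         # Bonferroni per-test threshold: 0.0005

def power(alpha_level, effect_in_se=2.8):
    """Power of a two-sided z-test when the true effect is effect_in_se SEs."""
    z_crit = stats.norm.ppf(1 - alpha_level / 2)
    return stats.norm.sf(z_crit - effect_in_se) + stats.norm.cdf(-z_crit - effect_in_se)

print(power(alpha))        # ~0.80 unadjusted
print(power(alpha_bonf))   # ~0.25 after Bonferroni with m = 100
```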
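And to make the abstract's contrast concrete, here is a minimal simulation sketch of the two approaches. The normal-normal model, group count, and variance values are illustrative assumptions, and the multilevel fit is a simple empirical-Bayes shrinkage rather than the full hierarchical model the paper develops.

```python
# Illustrative sketch (not from the paper): J group estimates with known
# sampling sd, compared under (a) Bonferroni-widened classical intervals
# and (b) partial pooling in a normal-normal hierarchy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
J, sigma = 20, 1.0                       # groups, known sampling sd
tau = 0.5                                # low group-level sd, the case the paper highlights
theta = rng.normal(0.0, tau, J)          # true group effects
y = rng.normal(theta, sigma)             # observed group estimates

# (a) Classical: keep centers at y, widen intervals for J comparisons.
alpha = 0.05
z_bonf = stats.norm.ppf(1 - alpha / (2 * J))      # Bonferroni critical value
classical = np.column_stack([y - z_bonf * sigma, y + z_bonf * sigma])

# (b) Multilevel (empirical Bayes): shrink estimates toward the grand mean
# by a factor that depends on the estimated group-level variance.
mu_hat = y.mean()
tau2_hat = max(y.var(ddof=1) - sigma**2, 0.0)     # method-of-moments; 0 means full pooling
shrink = tau2_hat / (tau2_hat + sigma**2)         # 0 = complete pooling, 1 = no pooling
pooled = mu_hat + shrink * (y - mu_hat)           # centers move toward each other
se_pooled = np.sqrt(shrink) * sigma               # posterior sd under the normal-normal model
z = stats.norm.ppf(1 - alpha / 2)                 # no multiplicity widening
bayes = np.column_stack([pooled - z * se_pooled, pooled + z * se_pooled])

print("Bonferroni half-width:     ", z_bonf * sigma)
print("Partial-pooling half-width:", z * se_pooled)
```

The sketch matches the abstract's point: the Bonferroni half-width grows with J while the interval centers stay put, whereas the partial-pooling intervals stay narrow and their centers shift toward one another, with the shrinkage strongest exactly when group-level variation is low.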

Here is a widely read blog that Gelman co-authors.

