mike_hay's blog

Posted on June 28, 2018 - 5:09pm, by mike_hay

In the last post, I described some common misconceptions and problems with the use of null-hypothesis significance tests and P-values. In this post, I'll show more ways that P-values are commonly misapplied, including how a significant result under $\alpha = 0.05$ can have more than a 50% chance of being wrong.

Base-rate fallacy

Null-hypothesis significance testing often underestimates the chance that a significant result is invalid because the framework makes it extremely difficult to take into account the prior probability that the alternative hypothesis is true. Ignoring prior information in this way is called the base-rate fallacy.
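A short sketch of how the base rate drives this effect, using Bayes' theorem. The prior, power, and $\alpha$ values below are illustrative assumptions, not figures from the post:

```python
# Sketch: probability that a "significant" result is a false positive,
# computed with Bayes' theorem. All input numbers are assumptions.

def false_discovery_rate(prior, power, alpha):
    """P(null is true | p < alpha), given the base rate of true effects."""
    true_pos = power * prior          # real effect, and the test detects it
    false_pos = alpha * (1 - prior)   # no effect, but p < alpha anyway
    return false_pos / (true_pos + false_pos)

# If only 5% of tested hypotheses are true and the study has 50% power,
# a result significant at alpha = 0.05 is wrong roughly two-thirds of the time.
fdr = false_discovery_rate(prior=0.05, power=0.5, alpha=0.05)
print(round(fdr, 2))  # → 0.66
```

Note that the false-discovery rate here far exceeds the nominal 5% error rate suggested by $\alpha$, which is exactly the gap the base-rate fallacy exploits.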

Posted on June 19, 2018 - 2:58pm, by mike_hay

Calling everything with p < 0.05 "significant" is just plain wrong

The practice of statistics in the sciences often takes the form of drawing scientific conclusions from cargo-cult application of inappropriate or outdated statistical methods, often to the exclusion of prior evidence or plausibility. This has serious consequences for the reproducibility and reliability of scientific results. Perhaps the number one issue is over-reliance on, and lack of understanding of, null-hypothesis significance testing, along with blind faith in the reliability of the P-values these tests provide.
