Everyone Uses P-Values, But No One Knows What They Are


A blue-ribbon committee of the American Statistical Association spent a year arguing about what a p-value is, and finally coughed up the following explanation aimed at laymen:

Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (for example, the sample mean difference between two compared groups) would be equal to or more extreme than its observed value.

Raise your hand if you understood a word of that. Anyone?
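For the code-inclined, here's a minimal Python sketch that does exactly what the definition says, using an invented example: the "specified statistical model" is a fair coin, the "statistical summary" is a count of heads, and the p-value is just the fraction of model-generated datasets at least as extreme as the one we observed.

```python
import numpy as np

# Invented example: we flipped a coin 100 times and saw 60 heads.
# The "specified statistical model" is a fair coin (p = 0.5), and the
# "statistical summary" is the head count.
rng = np.random.default_rng(0)
n_flips, observed_heads = 100, 60

# Generate many datasets under the model, then ask: how often is the
# summary equal to or more extreme than the observed value?
simulated = rng.binomial(n_flips, 0.5, size=100_000)
p_value = np.mean(simulated >= observed_heads)  # one-sided p-value
print(f"simulated p-value: {p_value:.3f}")  # roughly 0.03
```

Run it and the answer comes out around 0.03: under the fair-coin model, about 3 percent of datasets look at least that lopsided. That's all the definition is saying.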

Among experts, the long-running argument about p-values is abstruse and knotty. But even when you ignore the deep philosophical issues, it turns out that coming up with a close-enough explanation for average Joes and Janes is tough too. Over at 538, Christie Aschwanden hints at the problem: statisticians aren’t so much interested in explaining what a p-value is as they are in busting myths and explaining what it isn’t:

A common misconception among nonstatisticians is that p-values can tell you the probability that a result occurred by chance. This interpretation is dead wrong, but you see it again and again and again and again. The p-value only tells you something about the probability of seeing your results given a particular hypothetical explanation — it cannot tell you the probability that the results are true or whether they’re due to random chance. The ASA statement’s Principle No. 2: “P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.”
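Principle No. 2 is easier to see with arithmetic than with words. Here's a back-of-the-envelope Bayes calculation, with numbers invented purely for illustration, showing that the p-value and the probability the hypothesis is true can be very different animals:

```python
# Invented numbers: suppose 10% of hypotheses studied in some field are
# true, studies have 80% power, and "significant" means p < 0.05.
prior_true = 0.10  # P(hypothesis true) before any data
power = 0.80       # P(p < 0.05 | hypothesis true)
alpha = 0.05       # P(p < 0.05 | hypothesis false)

# Bayes' rule: of all the significant results, what fraction are real?
p_sig = prior_true * power + (1 - prior_true) * alpha
p_true_given_sig = prior_true * power / p_sig
print(f"P(hypothesis true | significant) = {p_true_given_sig:.2f}")  # 0.64
```

In this toy world, more than a third of "statistically significant" findings are still false, even though every one of them cleared the p < 0.05 bar. That's why a p-value can't be read as the probability that a result is due to chance.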

Personally, I’ve never been too happy with this myth-busting approach, but I’ll leave that aside. I’m probably just wrong. But how about this nickel explanation?

If you’re testing a hypothesis with only a limited set of data (for example, proposing that someone is the leader of a presidential race by relying on a survey of only 1,000 people), a p-value is, informally, the probability that the small dataset validated your hypothesis merely by chance.
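Made concrete, with invented poll numbers, the nickel explanation looks like this: take the hypothesis that the race is actually dead even, and ask how often a 1,000-person survey would hand the "leader" a lead that big anyway.

```python
import numpy as np

# Invented poll: 1,000 respondents, 520 favor candidate A. Model under
# test: the race is really tied, and A's lead is just sampling noise.
rng = np.random.default_rng(1)
n_respondents, observed_for_a = 1_000, 520

# How often does a genuinely tied race produce a poll this lopsided?
tied_race_polls = rng.binomial(n_respondents, 0.5, size=100_000)
p_value = np.mean(tied_race_polls >= observed_for_a)
print(f"p-value: {p_value:.2f}")  # around 0.11
```

In this made-up poll, a dead-even race throws up a 52-48 "lead" about 11 percent of the time, which, in the spirit of the nickel explanation, is the chance the survey crowned a frontrunner merely by chance.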

I suppose that’s wrong too in some kind of barely comprehensible way. It always is. But close! And, perhaps, reasonably comprehensible?

