A better formalism for interpreting confidence intervals
When we take a sample mean, we should think of it as a random variable, and our measured sample mean as a realisation of that random variable. The sample mean is a random variable because it is the result of random sampling. Repeated sampling involves observing repeated realisations of the random variable.
We should think of confidence intervals around this mean as realisations of a random interval, an interval whose bounds are random variables rather than real numbers. This is an attractive formalism because it resolves many confusions around the interpretation of confidence intervals.
Suppose the true population mean is the number $\mu$. The mean of a random sample of size $n$ from this population is the random variable $\bar{X}$, and the sample standard deviation is the random variable $S$. Then the random interval
$$\left[\bar{X} - 1.96\frac{S}{\sqrt{n}},\ \bar{X} + 1.96\frac{S}{\sqrt{n}}\right]$$
has an approximately 95% probability of containing $\mu$.
Suppose in our sample $\bar{X}$ takes the realisation $\bar{x}$ and $S$ takes the realisation $s$. So an instance of the above random interval is the confidence interval:
$$\left[\bar{x} - 1.96\frac{s}{\sqrt{n}},\ \bar{x} + 1.96\frac{s}{\sqrt{n}}\right]$$
The confidence interval either contains or does not contain $\mu$.
In full, my proposed interpretation schema is:
$$\left[\bar{x} - 1.96\frac{s}{\sqrt{n}},\ \bar{x} + 1.96\frac{s}{\sqrt{n}}\right]$$
is a realisation of
$$\left[\bar{X} - 1.96\frac{S}{\sqrt{n}},\ \bar{X} + 1.96\frac{S}{\sqrt{n}}\right],$$
and the probability
$$P\left(\mu \in \left[\bar{X} - 1.96\frac{S}{\sqrt{n}},\ \bar{X} + 1.96\frac{S}{\sqrt{n}}\right]\right) \approx 0.95.$$
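This schema can be checked numerically. The sketch below assumes a hypothetical normal population with made-up parameters (mu = 10, sigma = 2, n = 50): it repeatedly draws a sample, computes the corresponding realisation of the random interval, and counts how often that realisation contains mu.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma, n = 10.0, 2.0, 50  # hypothetical population parameters and sample size
n_reps = 10_000               # number of repeated samples

covered = 0
for _ in range(n_reps):
    sample = rng.normal(mu, sigma, size=n)  # one realisation of the random sample
    x_bar = sample.mean()                   # realisation of the random variable X-bar
    s = sample.std(ddof=1)                  # realisation of the random variable S
    lo = x_bar - 1.96 * s / np.sqrt(n)      # realised bounds of the random interval
    hi = x_bar + 1.96 * s / np.sqrt(n)
    covered += lo <= mu <= hi               # each realised CI either contains mu or not

print(covered / n_reps)  # close to 0.95
```

Note that the probability statement attaches to the random interval (the procedure), while each individual realised interval simply does or does not contain mu.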
This formalism has several advantages:
- robustness: distinguishing random intervals from confidence intervals makes it much harder to slip into making an incorrect probabilistic statement about the non-probabilistic object $\mu$.
- parsimony: we express everything we want using only probabilities, random variables, and intervals, three well-understood notions.
- relevance: our interpretation only involves the objects we actually have (a random interval and a confidence interval). We need not make reference to (hypothetical) repeated sampling.
The ugly and the bad
Unfortunately, my preferred formalism does not appear to be popular. Let me show some of the alternatives I have seen and explain their downsides and how my proposal does better.
1
Oxford department of statistics:
The interval is random, not the parameter. Thus, we talk of the probability of the interval containing the parameter, not the probability of the parameter lying in the interval.
This is the worst example, and is admittedly rarely seen in print. But I've often heard it in speech, even from academics who were trying to explain the correct interpretation of confidence intervals! The problem, of course, is that once you write it down in mathematical language, the probability of the interval containing the parameter is exactly the same object as the probability of the parameter lying in the interval. In our example it is simply
$$P\left(\mu \in \left[\bar{x} - 1.96\frac{s}{\sqrt{n}},\ \bar{x} + 1.96\frac{s}{\sqrt{n}}\right]\right).$$
It is equal to 1 or 0.
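To make this concrete, here is a minimal sketch with made-up numbers: once the interval's bounds are realised, "does the interval contain the parameter" is a plain boolean, not a probability.

```python
mu = 10.0         # true population mean (knowable here only because we invented it)
ci = (9.4, 10.6)  # a realised confidence interval: fixed numbers, not random variables

contains = ci[0] <= mu <= ci[1]
print(contains)  # → True; P(mu in ci) is exactly 1 or 0, never 0.95
```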
2
Quantitative Economics lecture notes for Oxford undergraduates:
“Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 95% of the time.”
I don’t like this because:
- It invokes the clunky counterfactual “were this procedure to be repeated”. What if it’s impossible to take repeated samples? We still want to be able to make statements about our confidence interval.
- It doesn’t have a clear mathematical formalisation. How do I write “95% of the time” in terms of probabilities?
- The actual confidence interval we have is nowhere mentioned. For what is supposed to be an interpretation of that object, that’s a little confusing.
My formalism solves these three problems.
3
Wikipedia:
“There is a 90% probability that the calculated confidence interval from some future experiment encompasses the true value of the population parameter.”
Similar complaint here: why do we need to refer to future experiments? We want an interpretation of the confidence interval we actually have.
4
For this reason, for a 95% CI, we say that we have 95% confidence that the interval will cover the true population mean. We use the term ‘confidence’ instead of probability because although the sample mean is random, the single interval we calculate is fixed. We also cannot talk about the probability that the population mean will lie within a certain interval, since it is also fixed.
This needlessly introduces the new concept of ‘confidence’, which is bound to cause confusion. It’s much better to use probabilities, a concept we already understand and for which we have a formal notation.