Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following).

• The confidence interval can be expressed in terms of a long-run frequency in repeated samples (or in resampling): "Were this procedure to be repeated on numerous samples, the proportion of calculated 95% confidence intervals that encompassed the true value of the population parameter would tend toward 95%."
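This long-run frequency interpretation can be illustrated with a small simulation. The following is a sketch using Python's standard library; the normal population, its parameters, and the sample size are arbitrary assumptions, and the interval uses the known population standard deviation:

```python
import random
import statistics

random.seed(42)
mu, sigma, n = 50.0, 2.0, 25   # assumed population and sample size
z = 1.96                       # two-sided 95% critical value (known sigma)
trials = 10_000

covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    half_width = z * sigma / n ** 0.5
    # Does this realized interval contain the true mean?
    covered += (xbar - half_width <= mu <= xbar + half_width)

print(covered / trials)  # tends toward 0.95 in the long run
```

Each trial produces one realized interval; the proportion of intervals that cover the true mean approaches the nominal 95% as the number of trials grows.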
== Common misunderstandings ==

Confidence intervals and levels are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them. Contrary to common misconceptions, a 95% confidence level does not mean that:

• for a given realized interval there is a 95% probability that the population parameter lies within the interval. For example, suppose a factory produces metal rods, and a random sample of 25 rods gives a 95% confidence interval of 36.8 to 39.0 mm for the population mean length. It is incorrect to say that there is a 95% probability that the true population mean lies within this interval: the true mean is fixed, not random. The true mean could be 37 mm, which is within the confidence interval, or 40 mm, which is not; in either case, whether it falls between 36.8 and 39.0 mm is a matter of fact, not of probability.

• the lengths of 95% of the sampled rods lie within the interval. In this case, it cannot be true: 95% of 25 is not an integer.

• there is a 95% probability that the sample mean length (an estimate of the population mean length) in a second sample would fall within the interval. In fact, if the true mean length is far from this specific confidence interval, it could be very unlikely that the next sample mean falls within it.

Instead, the 95% confidence level means that if we took 100 such samples, we would expect the true population mean to lie within approximately 95 of the calculated intervals.

In certain special cases, confidence intervals coincide with credible intervals under non-informative priors. In such cases, common misconceptions about confidence intervals (e.g. interpreting them as probability statements about the parameter) may yield practically correct conclusions.
== Examples of how naïve interpretation of confidence intervals can be problematic ==

=== Confidence procedure for uniform location ===

Welch presented an example which clearly shows the difference between the theory of confidence intervals and other theories of interval estimation (including Fisher's fiducial intervals and objective Bayesian intervals). Robinson called this example "[p]ossibly the best known counterexample for Neyman's version of confidence interval theory." To Welch, it showed the superiority of confidence interval theory; to critics of the theory, it shows a deficiency. Here we present a simplified version.

Suppose that X_1, X_2 are independent observations from a uniform (\theta - 1/2, \theta + 1/2) distribution. Then the optimal 50% confidence procedure for \theta is

\bar{X} \pm \begin{cases} \dfrac{|X_1 - X_2|}{2} & \text{if } |X_1 - X_2| < 1/2 \\ \dfrac{1 - |X_1 - X_2|}{2} & \text{if } |X_1 - X_2| \geq 1/2 \end{cases}

A fiducial or objective Bayesian argument can be used to derive the interval estimate \bar{X} \pm \frac{1-|X_1-X_2|}{4}, which is also a 50% confidence procedure. Welch showed that the first confidence procedure dominates the second, according to desiderata from confidence interval theory; for every \theta_1 \neq \theta, the probability that the first procedure contains \theta_1 is
less than or equal to the probability that the second procedure contains \theta_1. The average width of the intervals from the first procedure is less than that of the second. Hence, the first procedure is preferred under classical confidence interval theory. However, when |X_1-X_2| \geq 1/2, intervals from the first procedure are
guaranteed to contain the true value \theta. Therefore, the nominal 50% confidence coefficient is unrelated to the uncertainty we should have that a specific interval contains the true value. The second procedure does not have this property.

Moreover, when the first procedure generates a very short interval, this indicates that X_1, X_2 are very close together and hence offer only as much information as a single data point. Yet the first interval will exclude almost all reasonable values of the parameter due to its short width. The second procedure does not have this property.

The two counter-intuitive properties of the first procedure – 100% coverage when X_1, X_2 are far apart and almost 0% coverage when X_1, X_2 are close together – balance out to yield 50% coverage on average. However, despite the first procedure being optimal, its intervals offer neither an assessment of the precision of the estimate nor an assessment of the uncertainty one should have that the interval contains the true value.

This example is used to argue against naïve interpretations of confidence intervals. If a confidence procedure is asserted to have properties beyond that of the nominal coverage (such as relation to precision, or a relationship with Bayesian inference), those properties must be proved; they do not follow from the fact that a procedure is a confidence procedure.
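The coverage claims above can be checked by simulation. In this sketch, θ = 0, the seed, and the trial count are arbitrary choices; the two half-widths implement the two 50% procedures just described:

```python
import random

random.seed(1)
theta = 0.0
trials = 100_000

hits1 = hits2 = far = far_hits1 = 0
for _ in range(trials):
    x1 = random.uniform(theta - 0.5, theta + 0.5)
    x2 = random.uniform(theta - 0.5, theta + 0.5)
    xbar, d = (x1 + x2) / 2, abs(x1 - x2)
    h1 = d / 2 if d < 0.5 else (1 - d) / 2  # first (optimal) procedure
    h2 = (1 - d) / 4                        # fiducial/Bayesian procedure
    hits1 += (xbar - h1 <= theta <= xbar + h1)
    hits2 += (xbar - h2 <= theta <= xbar + h2)
    if d >= 0.5:                            # observations far apart
        far += 1
        far_hits1 += (xbar - h1 <= theta <= xbar + h1)

print(hits1 / trials, hits2 / trials)  # both close to 0.50
print(far_hits1 / far)                 # 1.0: guaranteed coverage
```

Both procedures achieve the nominal 50% coverage overall, yet whenever |X_1 - X_2| ≥ 1/2 the first procedure's interval contains θ with certainty, confirming that the nominal confidence coefficient says nothing about any particular realized interval.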
=== Confidence procedure for ω² ===

Steiger suggested a number of confidence procedures for common effect size measures in ANOVA. Morey et al. point out that several of these confidence procedures, including the one for ω², have the property that as the F statistic becomes increasingly small (indicating misfit with all possible values of ω²), the confidence interval shrinks and can even contain only the single value ω² = 0; that is, the CI is infinitesimally narrow (this occurs when p \geq 1 - \alpha/2 for a 100(1-\alpha)\% CI).

This behavior is consistent with the relationship between the confidence procedure and significance testing: as F becomes so small that the group means are much closer together than we would expect by chance, a significance test might indicate rejection for most or all values of ω². Hence the interval will be very narrow or even empty (or, by a convention suggested by Steiger, containing only 0). However, this does not indicate that the estimate of ω² is very precise. In a sense, it indicates the opposite: that the trustworthiness of the results themselves may be in doubt. This is contrary to the common interpretation of confidence intervals that they reveal the precision of the estimate.

== History ==