Different schools of statistical inference have become established. These schools (or "paradigms") are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms. Bandyopadhyay and Forster describe four paradigms: the classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm.
===Frequentist inference===
This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified, although in practice this quantification may be challenging.
====Examples of frequentist inference====
• p-value
• Confidence interval
• Null hypothesis significance testing
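As a concrete illustration of the first two examples, the sketch below computes a p-value and a 95% confidence interval for a hypothetical sample; the data, the null value of zero, and the confidence level are illustrative assumptions rather than part of any particular study.

```python
# A minimal sketch of two frequentist procedures on hypothetical data:
# a one-sample t-test p-value and a 95% confidence interval for a mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)   # hypothetical sample

# p-value: probability, under repeated sampling from the null (mean = 0),
# of a test statistic at least as extreme as the one observed.
t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)

# 95% confidence interval: intervals built this way cover the true mean
# in about 95% of repeated samples.
se = stats.sem(x)
ci = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=se)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```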
====Frequentist inference, objectivity, and decision theory====
One interpretation of
frequentist inference (or classical inference) is that it is applicable only in terms of
frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach. The frequentist procedures of significance testing and confidence intervals can be constructed without regard to
utility functions. However, some elements of frequentist statistics, such as
statistical decision theory, do incorporate
utility functions. In particular, frequentist developments of optimal inference (such as
minimum-variance unbiased estimators, or
uniformly most powerful testing) make use of
loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under
absolute value loss functions, in that they minimize expected loss, and
least squares estimators are optimal under squared error loss functions, in that they minimize expected loss. While statisticians using frequentist inference must choose for themselves the parameters of interest, and the
estimators/test statistics to be used, the absence of explicitly stated utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'.
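The claim about loss functions above can be checked numerically. In the hypothetical sketch below, a grid search confirms that the value minimizing mean absolute loss is (approximately) the sample median, while the value minimizing mean squared loss is the sample mean; the data are simulated purely for illustration.

```python
# A small numerical illustration (hypothetical data) of the loss-function
# claim: the minimizer of mean absolute loss is the median, and the
# minimizer of mean squared loss is the mean.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1000)      # skewed, so mean != median

candidates = np.linspace(x.min(), x.max(), 2001)
abs_loss = np.array([np.mean(np.abs(x - c)) for c in candidates])
sq_loss = np.array([np.mean((x - c) ** 2) for c in candidates])

print("argmin absolute loss:", candidates[abs_loss.argmin()], "median:", np.median(x))
print("argmin squared loss: ", candidates[sq_loss.argmin()], "mean:  ", x.mean())
```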
===Bayesian inference===
The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. There are
several different justifications for using the Bayesian approach.
====Examples of Bayesian inference====
• Credible interval for interval estimation
• Bayes factors for model comparison
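The following sketch illustrates both examples for a binomial proportion, under the assumption of a flat Beta(1, 1) prior and a hypothetical count of successes; neither the prior nor the data come from the text above.

```python
# A minimal sketch (hypothetical data, flat Beta(1, 1) prior) of a 95%
# credible interval and a Bayes factor comparing theta = 0.5 against the
# flat-prior alternative.
import numpy as np
from scipy import stats
from scipy.special import betaln

n, k = 50, 32                                  # hypothetical trials and successes

# Posterior under a Beta(1, 1) prior is Beta(1 + k, 1 + n - k).
posterior = stats.beta(1 + k, 1 + n - k)
credible_interval = posterior.ppf([0.025, 0.975])

# Bayes factor BF_10: marginal likelihood under the flat prior divided by
# the likelihood at theta = 0.5 (the binomial coefficient cancels).
log_marginal_alt = betaln(1 + k, 1 + n - k)    # log of integral of theta^k (1-theta)^(n-k)
log_lik_null = n * np.log(0.5)
bf_10 = np.exp(log_marginal_alt - log_lik_null)

print("95% credible interval:", credible_interval)
print("Bayes factor (alternative vs theta = 0.5):", bf_10)
```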
====Bayesian inference, subjectivity and decision theory====
Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes factors can all be motivated in this way. While a user's
utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been
proposed but not yet fully developed.) Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides
optimal decisions in a
decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically)
incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be
coherent. Some advocates of
Bayesian inference assert that inference
must take place in this decision-theoretic framework, and that
Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs.
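A minimal sketch of this decision-theoretic calibration is given below: given posterior draws and a loss for each available action (all hypothetical here), the Bayes rule is simply the action with the smallest posterior expected loss.

```python
# A minimal sketch (hypothetical posterior draws, actions, and losses) of
# choosing the action that minimizes expected loss averaged over the posterior.
import numpy as np

rng = np.random.default_rng(2)
theta_draws = rng.beta(33, 19, size=10_000)    # e.g. posterior draws for a proportion

actions = {
    "ship": lambda t: np.where(t < 0.6, 10.0, 0.0),   # hypothetical loss if quality is too low
    "hold": lambda t: np.full_like(t, 3.0),           # hypothetical fixed cost of delay
}

expected_loss = {a: loss(theta_draws).mean() for a, loss in actions.items()}
bayes_action = min(expected_loss, key=expected_loss.get)
print(expected_loss, "->", bayes_action)
```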
===Likelihood-based inference===
Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function, denoted as L(x | \theta), which quantifies the probability of observing the given data x, assuming a specific set of parameter values \theta. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data. The process of likelihood-based inference usually involves the following steps (a minimal code sketch of these steps appears after the list):
• Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects.
• Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters.
• Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved using optimization techniques such as numerical optimization algorithms. The estimated parameter values, often denoted as \hat{\theta}, are the maximum likelihood estimates (MLEs).
• Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating standard errors, confidence intervals, or conducting hypothesis tests based on asymptotic theory or simulation techniques such as bootstrapping.
• Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics.
• Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model.
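The sketch below walks through the first four steps for a deliberately simple case, a normal model with unknown mean and standard deviation fitted by numerical optimization; the data are simulated, and using the inverse Hessian for standard errors is one common asymptotic approximation.

```python
# A minimal sketch (hypothetical data, simple normal model): specify a model,
# build the log-likelihood, maximize it numerically, and obtain approximate
# standard errors from the inverse Hessian (observed information).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
y = rng.normal(loc=5.0, scale=2.0, size=200)          # observed data (simulated)

def neg_log_lik(params):
    mu, log_sigma = params                             # log-sigma keeps sigma positive
    return -np.sum(stats.norm.logpdf(y, loc=mu, scale=np.exp(log_sigma)))

fit = optimize.minimize(neg_log_lik, x0=np.array([0.0, 0.0]), method="BFGS")
mle = fit.x                                            # maximum likelihood estimates
se = np.sqrt(np.diag(fit.hess_inv))                    # approximate standard errors

print("MLE (mu, log sigma):", mle)
print("approximate standard errors:", se)
```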
===AIC-based inference===
The
Akaike information criterion (AIC) is an
estimator of the relative quality of
statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for
model selection. AIC is founded on
information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the
goodness of fit of the model and the simplicity of the model.)
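A minimal sketch of AIC in use is shown below, comparing a constant-mean model with a simple linear-regression model on simulated data; AIC is computed as 2k minus twice the maximized log-likelihood, and the lower value indicates the model estimated to lose less information.

```python
# A minimal sketch (hypothetical data) of AIC-based model comparison:
# AIC = 2k - 2 * log-likelihood at the MLE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=100)
y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=100)   # data with a linear trend

def aic(log_lik, n_params):
    return 2 * n_params - 2 * log_lik

# Model 1: y ~ Normal(mu, sigma), ignoring x (2 parameters).
aic1 = aic(np.sum(stats.norm.logpdf(y, y.mean(), y.std())), 2)

# Model 2: y ~ Normal(a + b*x, sigma), a simple regression (3 parameters).
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)
aic2 = aic(np.sum(stats.norm.logpdf(y, a + b * x, resid.std())), 3)

print(f"AIC ignoring x: {aic1:.1f}, AIC with x: {aic2:.1f}")
```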
===Other paradigms for inference===
====Minimum description length====
The minimum description length (MDL) principle has been developed from ideas in
information theory and the theory of
Kolmogorov complexity. The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or
probability models for the data, as might be done in frequentist or Bayesian approaches. However, if a "data generating mechanism" does exist in reality, then according to
Shannon's
source coding theorem it provides the MDL description of the data, on average and asymptotically. In minimizing description length (or descriptive complexity), MDL estimation is similar to
maximum likelihood estimation and
maximum a posteriori estimation (using
maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling. The MDL principle has been applied in communication-
coding theory in
information theory and in linear regression.
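As a rough illustration only, the sketch below chooses a polynomial degree by minimizing a crude two-part description length (a BIC-like approximation, not a refined MDL code); the data and the code-length formula are illustrative assumptions.

```python
# A rough sketch (hypothetical data, crude two-part code) of the MDL idea:
# pick the polynomial degree whose "parameters + residuals" description is shortest.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1, 1, 60)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.2, size=x.size)

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1
    data_bits = 0.5 * n * np.log2(np.mean(resid ** 2))   # cost of encoding residuals
    model_bits = 0.5 * k * np.log2(n)                     # cost of encoding parameters
    return data_bits + model_bits

lengths = {d: description_length(d) for d in range(6)}
print("selected degree:", min(lengths, key=lengths.get), lengths)
```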
====Fiducial inference====
Fiducial inference was an approach to statistical inference based on
fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious. However, this argument is the same as that which shows that a so-called
confidence distribution is not a valid
probability distribution and, since this has not invalidated the application of
confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's
fiducial argument as a special case of an inference theory using
upper and lower probabilities.
====Structural inference====
Developing ideas of Fisher and of Pitman from 1938 to 1939,
George A. Barnard developed "structural inference" or "pivotal inference", an approach using
invariant probabilities on
group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful.
Donald A. S. Fraser developed a general theory for structural inference based on
group theory and applied this to linear models. The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist. ==Inference topics==