Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation.
==Exhaustive cross-validation==
Exhaustive cross-validation methods learn and test on all possible ways to divide the original sample into a training and a validation set.
===Leave-p-out cross-validation===
Leave-p-out cross-validation (LpO CV) involves using p observations as the validation set and the remaining observations as the training set. This is repeated on all ways to cut the original sample into a validation set of p observations and a training set. LpO cross-validation requires training and validating the model C^n_p times, where n is the number of observations in the original sample and C^n_p is the binomial coefficient. For p > 1 and even moderately large n, LpO CV can become computationally infeasible. For example, with n = 100 and p = 30, C^{100}_{30} \approx 3\times 10^{25}. A variant of LpO cross-validation with p = 2, known as leave-pair-out cross-validation, has been recommended as a nearly unbiased method for estimating the area under the ROC curve of binary classifiers.
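To make the combinatorial cost concrete, the sketch below enumerates every size-p validation set directly in Python; the fit_predict interface and the squared-error score are assumptions made for illustration, not part of the definition of LpO CV.

<syntaxhighlight lang="python">
from itertools import combinations
from math import comb

def leave_p_out_error(x, y, p, fit_predict):
    # fit_predict(x_train, y_train, x_val) -> predictions for x_val
    # is an assumed model interface for this sketch.
    n = len(x)
    total = 0.0
    for val_idx in combinations(range(n), p):  # all C(n, p) splits
        held_out = set(val_idx)
        x_train = [x[i] for i in range(n) if i not in held_out]
        y_train = [y[i] for i in range(n) if i not in held_out]
        preds = fit_predict(x_train, y_train, [x[i] for i in val_idx])
        total += sum((y[i] - yh) ** 2 for i, yh in zip(val_idx, preds))
    # mean squared error over all held-out predictions
    return total / (comb(n, p) * p)
</syntaxhighlight>

The loop body runs C^n_p times, so with n = 100 and p = 30 it would take on the order of 10^{25} iterations, which is why non-exhaustive methods are used instead.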
===Leave-one-out cross-validation===
Leave-one-out cross-validation (LOOCV) is a particular case of leave-p-out cross-validation with p = 1. The process looks similar to the jackknife; however, with cross-validation one computes a statistic on the left-out sample(s), while with jackknifing one computes a statistic from the kept samples only. LOO cross-validation requires less computation time than LpO cross-validation because there are only C^n_1 = n passes rather than C^n_p. However, n passes may still require quite a large computation time, in which case other approaches such as k-fold cross-validation may be more appropriate.
Pseudo-code algorithm:

 Input:
     x, {vector of length N with x-values of incoming points}
     y, {vector of length N with y-values of the expected result}
     interpolate( x_in, y_in, x_out ), {returns the estimation for point x_out after the model is trained with x_in-y_in pairs}

 Output:
     err, {estimate for the prediction error}

 Steps:
     err ← 0
     for i ← 1, ..., N do
         // define the cross-validation subsets
         x_in ← (x[1], ..., x[i − 1], x[i + 1], ..., x[N])
         y_in ← (y[1], ..., y[i − 1], y[i + 1], ..., y[N])
         x_out ← x[i]
         y_out ← interpolate(x_in, y_in, x_out)
         err ← err + (y[i] − y_out)^2
     end for
     err ← err/N
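As a runnable counterpart, here is a direct Python transcription of the pseudocode above; using numpy's piecewise-linear np.interp as the interpolate routine is an assumption made for the demonstration.

<syntaxhighlight lang="python">
import numpy as np

def loocv_error(x, y, interpolate):
    # err accumulates squared prediction errors, as in the pseudocode.
    n = len(x)
    err = 0.0
    for i in range(n):
        # Define the cross-validation subsets: all points except i.
        x_in = np.delete(x, i)
        y_in = np.delete(y, i)
        y_out = interpolate(x_in, y_in, x[i])
        err += (y[i] - y_out) ** 2
    return err / n

# Hypothetical usage with linear interpolation as the model.
# np.interp expects (query, xp, fp), so the argument order is adapted.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=20)
linear_model = lambda x_in, y_in, x_out: np.interp(x_out, x_in, y_in)
print(loocv_error(x, y, linear_model))
</syntaxhighlight>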
==Non-exhaustive cross-validation==
Non-exhaustive cross-validation methods do not compute all ways of splitting the original sample. These methods are approximations of leave-p-out cross-validation.
===k-fold cross-validation===
In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples, often referred to as "folds". Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling (see below) is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used, but in general k remains an unfixed parameter.

For example, setting k = 2 results in 2-fold cross-validation. In 2-fold cross-validation, the dataset is randomly shuffled into two equal-sized sets d0 and d1 (this is usually implemented by shuffling the data array and then splitting it in two). We then train on d0 and validate on d1, followed by training on d1 and validating on d0. When k = n (the number of observations), k-fold cross-validation is equivalent to leave-one-out cross-validation.
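A minimal k-fold loop might look as follows; as in the earlier sketches, fit_predict is an assumed model interface and mean squared error an assumed score.

<syntaxhighlight lang="python">
import numpy as np

def k_fold_error(x, y, k, fit_predict, seed=0):
    n = len(x)
    idx = np.random.default_rng(seed).permutation(n)  # random partition
    folds = np.array_split(idx, k)  # k (nearly) equal-sized folds
    errors = []
    for i in range(k):
        val = folds[i]                                     # fold i validates
        train = np.concatenate(folds[:i] + folds[i + 1:])  # the rest trains
        preds = fit_predict(x[train], y[train], x[val])
        errors.append(np.mean((y[val] - preds) ** 2))
    return float(np.mean(errors))  # average of the k results
</syntaxhighlight>

With k = n the loop reduces to LOOCV, matching the equivalence noted above.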
In stratified k-fold cross-validation, the partitions are selected so that the mean response value is approximately equal in all the partitions. In the case of binary classification, this means that each partition contains roughly the same proportions of the two types of class labels.
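For the binary case, one simple way to construct such folds is to shuffle each class's indices separately and deal them round-robin across the k folds; the helper below is an illustrative sketch rather than a standard API.

<syntaxhighlight lang="python">
import numpy as np

def stratified_folds(labels, k, seed=0):
    # Assign each index to one of k folds, approximately preserving
    # the class proportions within every fold.
    rng = np.random.default_rng(seed)
    fold_of = np.empty(len(labels), dtype=int)
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        fold_of[idx] = np.arange(len(idx)) % k  # round-robin deal
    return fold_of
</syntaxhighlight>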
In repeated cross-validation the data is randomly split into k partitions several times. The performance of the model can thereby be averaged over several runs, but this is rarely desirable in practice.

When many different statistical or machine learning models are being considered, greedy k-fold cross-validation can be used to quickly identify the most promising candidate models.
===Holdout method===
In the holdout method, we randomly assign data points to two sets d0 and d1, usually called the training set and the test set, respectively. The size of each of the sets is arbitrary, although typically the test set is smaller than the training set. We then train (build a model) on d0 and test (evaluate its performance) on d1. In typical cross-validation, results of multiple runs of model-testing are averaged together; in contrast, the holdout method, in isolation, involves a single run. It should be used with caution because, without such averaging of multiple runs, one may achieve highly misleading results. One's indicator of predictive accuracy (F*) will tend to be unstable, since it will not be smoothed out by multiple iterations (see below). Similarly, indicators of the specific role played by various predictor variables (e.g., values of regression coefficients) will tend to be unstable. While the holdout method can be framed as "the simplest kind of cross-validation", many sources instead classify holdout as a type of simple validation, rather than a simple or degenerate form of cross-validation.
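The holdout method amounts to a single random split, as in the sketch below; the 25% test fraction is an arbitrary choice for illustration.

<syntaxhighlight lang="python">
import numpy as np

def holdout_split(x, y, test_fraction=0.25, seed=0):
    # One random train/test partition; no averaging over repeated runs,
    # which is exactly the instability risk noted above.
    idx = np.random.default_rng(seed).permutation(len(x))
    n_test = int(len(x) * test_fraction)  # test set typically smaller
    test, train = idx[:n_test], idx[n_test:]
    return x[train], y[train], x[test], y[test]
</syntaxhighlight>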
===Repeated random sub-sampling validation===
This method, also known as Monte Carlo cross-validation, creates multiple random splits of the dataset into training and validation data. For each such split, the model is fit to the training data, and predictive accuracy is assessed using the validation data. The results are then averaged over the splits. The advantage of this method (over k-fold cross-validation) is that the proportion of the training/validation split does not depend on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap. This method also exhibits Monte Carlo variation, meaning that the results will vary if the analysis is repeated with different random splits. As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation. In a stratified variant of this approach, the random samples are generated in such a way that the mean response value (i.e., the dependent variable in the regression) is equal in the training and testing sets. This is particularly useful if the responses are dichotomous with an unbalanced representation of the two response values in the data. A method that applies repeated random sub-sampling is RANSAC.
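A sketch of Monte Carlo cross-validation under the same assumed fit_predict interface; the validation fraction and the number of splits are free parameters, chosen here only for illustration.

<syntaxhighlight lang="python">
import numpy as np

def monte_carlo_cv_error(x, y, fit_predict, n_splits=50,
                         val_fraction=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n_val = int(len(x) * val_fraction)  # split size is fixed, independent
    errors = []                         # of the number of iterations
    for _ in range(n_splits):
        idx = rng.permutation(len(x))   # fresh random split each run
        val, train = idx[:n_val], idx[n_val:]
        preds = fit_predict(x[train], y[train], x[val])
        errors.append(np.mean((y[val] - preds) ** 2))
    # Monte Carlo variation: rerunning with another seed gives a
    # (slightly) different estimate; it shrinks as n_splits grows.
    return float(np.mean(errors))
</syntaxhighlight>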
==Nested cross-validation==