From a black box perspective, any model may be viewed as a function Y = f(X), where X is a vector of d uncertain model inputs {X_1, X_2, ..., X_d}, and Y is a chosen univariate model output (note that this approach examines scalar model outputs, but multiple outputs can be analysed by multiple independent sensitivity analyses). Furthermore, it will be assumed that the inputs are independently and uniformly distributed within the unit hypercube, i.e. X_i \in [0,1] for i = 1, 2, ..., d. This incurs no loss of generality because any input space can be transformed onto this unit hypercube.
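As an aside, the transformation onto the unit hypercube is the probability integral transform: a uniform coordinate u is mapped to an input value through the inverse cumulative distribution function of that input. The sketch below is only an illustration of this idea, using two hypothetical input distributions (a normal and a uniform) chosen for the example; it is not part of the method itself.

```python
import random
from statistics import NormalDist

# Illustration: map a point u from the unit hypercube [0,1]^d onto the
# actual (hypothetical, example-only) input distributions via inverse CDFs.
random.seed(0)
d = 2
u = [random.random() for _ in range(d)]  # a point in the unit hypercube

# X1 ~ N(10, 2^2): inverse CDF of the normal distribution
x1 = NormalDist(mu=10.0, sigma=2.0).inv_cdf(u[0])
# X2 ~ Uniform(5, 15): inverse CDF is a simple rescaling
x2 = 5.0 + (15.0 - 5.0) * u[1]
```

Because each map is monotone, sampling uniformly in the hypercube and transforming is equivalent to sampling the inputs from their own distributions.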
f(X) may be decomposed in the following way:
: Y = f_0 + \sum_{i=1}^d f_i(X_i) + \sum_{i<j}^{d} f_{ij}(X_i,X_j) + \cdots + f_{12 \dots d}(X_1,X_2,\dots,X_d)
where f_0 is a constant, f_i is a function of X_i, f_ij a function of X_i and X_j, etc. A condition of this decomposition is that,
: \int_0^1 f_{i_1 i_2 \dots i_s}(X_{i_1},X_{i_2},\dots,X_{i_s}) \, dX_{k} = 0, \text{ for } k = i_1,\dots,i_s
i.e. all the terms in the functional decomposition are orthogonal. This leads to definitions of the terms of the functional decomposition in terms of conditional expected values:
: f_0 = E(Y)
: f_i(X_i) = E(Y \mid X_i) - f_0
: f_{ij}(X_i,X_j) = E(Y \mid X_i,X_j) - f_0 - f_i - f_j
From which it can be seen that
f_i is the effect of varying X_i alone (known as the main effect of X_i), and f_ij is the effect of varying X_i and X_j simultaneously, additional to the effect of their individual variations. This is known as a second-order interaction. Higher-order terms have analogous definitions. Now, further assuming that f(X) is square-integrable, the functional decomposition may be squared and integrated to give,
: \int f^2(\mathbf{X}) \, d\mathbf{X} - f_0^2 = \sum_{s=1}^d \sum_{i_1 < \dots < i_s}^{d} \int f_{i_1 \dots i_s}^2 \, dX_{i_1} \dots dX_{i_s}
Notice that the left hand side is equal to the variance of Y, and the terms of the right hand side are variance terms, now decomposed with respect to sets of the X_i. This finally leads to the decomposition of variance expression,
: \operatorname{Var}(Y) = \sum_{i=1}^d V_i + \sum_{i<j}^{d} V_{ij} + \cdots + V_{12 \dots d}
where
: V_{i} = \operatorname{Var}_{X_i} \left( E_{\textbf{X}_{\sim i}} (Y \mid X_{i}) \right),
: V_{ij} = \operatorname{Var}_{X_{ij}} \left( E_{\textbf{X}_{\sim ij}} \left( Y \mid X_i, X_j \right) \right) - V_{i} - V_{j}
and so on.
The \textbf{X}_{\sim i} notation indicates the set of all variables except X_i. The above variance decomposition shows how the variance of the model output can be decomposed into terms attributable to each input, as well as the interaction effects between them. Together, all terms sum to the total variance of the model output.

==First-order indices==