In Bayes linear statistics, the probability model is only partially specified, and it is not possible to calculate conditional probabilities by Bayes' rule. Instead, Bayes linear suggests the calculation of an adjusted expectation.

To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements, D, and some future values which you would like to know, B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors, i.e.

: B = (Y_1,Y_2),~ D = (X_1,X_2).

In order to specify a Bayes linear model it is necessary to supply expectations for the vectors B and D, and also to specify the correlation between each component of B and each component of D. For example, the expectations are specified as

: E(Y_1)=5,~E(Y_2)=3,~E(X_1)=5,~E(X_2)=3

and the covariance matrix is specified as

: \begin{array}{c|cccc} & X_1 & X_2 & Y_1 & Y_2 \\ \hline X_1 & 1 & u & \gamma & \gamma \\ X_2 & u & 1 & \gamma & \gamma \\ Y_1 & \gamma & \gamma & 1 & v \\ Y_2 & \gamma & \gamma & v & 1 \\ \end{array}.

The repetition in this matrix has some interesting implications, to be discussed shortly.

An adjusted expectation is a linear estimator of the form

: c_0 + c_1X_1 + c_2X_2

where c_0, c_1 and c_2 are chosen to minimise the prior expected loss in estimating the quantities of interest, i.e. Y_1 and Y_2 in this case. For Y_1, the coefficients c_0, c_1, c_2 are chosen to minimise

: E([Y_1 - c_0 - c_1X_1 - c_2X_2]^2).\,

In general the adjusted expectation is calculated as

: E_D(X) = \sum^k_{i=0} h_iD_i,

with h_0, \dots, h_k set to minimise

: E\left(\left[X-\sum^k_{i=0}h_iD_i\right]^2\right).

From a proof provided in (Goldstein and Wooff 2007) it can be shown that

: E_D(X) = E(X) + \mathrm{Cov}(X,D)\mathrm{Var}(D)^{-1}(D-E(D)). \,

For the case where \mathrm{Var}(D) is not invertible, the Moore–Penrose pseudoinverse should be used instead.
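The adjustment above can be evaluated numerically for the two-dimensional example. The sketch below uses NumPy, with illustrative values u = 0.5, γ = 0.2, v = 0.3 and observed data d = (4, 2) that are assumptions for demonstration, not values from the text; it also evaluates the companion adjusted variance \mathrm{Var}_D(B) = \mathrm{Var}(B) - \mathrm{Cov}(B,D)\mathrm{Var}(D)^{-1}\mathrm{Cov}(D,B).

```python
import numpy as np

# Illustrative prior specification (u, gamma, v and the observed data d
# are assumed values chosen for this sketch, not taken from the text).
u, gamma, v = 0.5, 0.2, 0.3

E_B = np.array([5.0, 3.0])                 # E(Y_1), E(Y_2)
E_D = np.array([5.0, 3.0])                 # E(X_1), E(X_2)

var_D = np.array([[1.0, u], [u, 1.0]])     # Var(D) block of the covariance matrix
var_B = np.array([[1.0, v], [v, 1.0]])     # Var(B) block
cov_BD = np.full((2, 2), gamma)            # Cov(B, D): all entries equal gamma

d = np.array([4.0, 2.0])                   # hypothetical observed values of X_1, X_2

# Moore-Penrose pseudoinverse: agrees with the ordinary inverse when
# Var(D) is invertible, and handles the singular case as well.
var_D_pinv = np.linalg.pinv(var_D)

# Adjusted expectation: E_D(B) = E(B) + Cov(B,D) Var(D)^{-1} (d - E(D)).
adj_exp = E_B + cov_BD @ var_D_pinv @ (d - E_D)

# Adjusted variance: Var_D(B) = Var(B) - Cov(B,D) Var(D)^{-1} Cov(D,B).
adj_var = var_B - cov_BD @ var_D_pinv @ cov_BD.T

# Both components of E(B) are pulled downward, since d lies below E(D),
# and the adjusted variances are smaller than the prior variances.
print(adj_exp)
print(adj_var)
```

Because every entry of Cov(B, D) equals γ, both components of B receive the same adjustment; this is one of the implications of the repetition in the covariance matrix.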
Furthermore, the adjusted variance of the variable X after observing the data D is given by

: \mathrm{Var}_D(X) = \mathrm{Var}(X) - \mathrm{Cov}(X,D)\mathrm{Var}(D)^{-1}\mathrm{Cov}(D,X).

==See also==