Consider a linear regression of any form, for example:

:Y_t = \beta_1 + \beta_2 X_{t,1} + \beta_3 X_{t,2} + u_t,

where the errors might follow an AR(p) autoregressive scheme, as follows:

:u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \cdots + \rho_p u_{t-p} + \varepsilon_t.
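To make the setup concrete, such a process can be simulated. The sketch below is a minimal illustration, assuming NumPy and hypothetical coefficient values; it draws a sample from the regression above with AR(1) errors:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
T = 200

# Hypothetical regressors and coefficients, for illustration only.
X1 = rng.normal(size=T)
X2 = rng.normal(size=T)
beta1, beta2, beta3 = 1.0, 0.5, -0.3

# AR(1) errors: u_t = rho_1 * u_{t-1} + eps_t.
rho1 = 0.6
eps = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho1 * u[t - 1] + eps[t]

Y = beta1 + beta2 * X1 + beta3 * X2 + u
</syntaxhighlight>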
The simple regression model is first fitted by ordinary least squares to obtain a set of sample residuals \hat{u}_t. Breusch and Godfrey proved that, if the following auxiliary regression model is fitted:

:\hat{u}_t = \alpha_0 + \alpha_1 X_{t,1} + \alpha_2 X_{t,2} + \rho_1 \hat{u}_{t-1} + \rho_2 \hat{u}_{t-2} + \cdots + \rho_p \hat{u}_{t-p} + \varepsilon_t,

and if the usual coefficient of determination (the R^2 statistic) is calculated for this model,

: R^2 = \frac{\sum_{j=1}^{T-p} \left( \hat{\hat{u}}_{T-j} - \bar{\hat{u}} \right)^2}{\sum_{j=1}^{T-p} \left( \hat{u}_{T-j} - \bar{\hat{u}} \right)^2},

where \hat{\hat{u}}_t denotes the fitted values of the auxiliary regression and \bar{\hat{u}} stands for the arithmetic mean of the residuals over the last n = T - p observations, T being the number of observations in the original model and p the number of error lags used in the auxiliary regression. There is a version of the test in which the missing residuals \hat{u}_{1-j} are replaced by zeros; in this version the number of observations in the auxiliary regression, n, is equal to the original number of observations T. The following asymptotic approximation can be used for the distribution of the test statistic:

: n R^2 \sim \chi^2_p,

when the null hypothesis H_0 : \lbrace \rho_i = 0 \text{ for all } i \rbrace holds (that is, there is no serial correlation of any order up to p).
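As an illustration of the procedure just described, the statistic can be computed directly from its definition. The following is a minimal sketch, not a reference implementation: it assumes NumPy and SciPy, uses the version of the test that keeps the last n = T - p observations (no zero padding), and the function name breusch_godfrey is hypothetical:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def breusch_godfrey(y, X, p):
    """LM test for AR(p) serial correlation in the errors of y = X b + u.

    y : (T,) response vector
    X : (T, k) regressor matrix, including a constant column
    p : number of lagged residuals in the auxiliary regression
    """
    T = len(y)

    # Step 1: fit the original model by OLS and keep the residuals u_t.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta

    # Step 2: auxiliary regression of u_t on X_t and u_{t-1}, ..., u_{t-p},
    # over the last n = T - p observations.
    Z = np.column_stack([X[p:]] + [u[p - j : T - j] for j in range(1, p + 1)])
    v = u[p:]
    gamma, *_ = np.linalg.lstsq(Z, v, rcond=None)
    fitted = Z @ gamma

    # Step 3: R^2 of the auxiliary regression and the statistic n R^2,
    # referred to a chi-squared distribution with p degrees of freedom.
    n = T - p
    r2 = np.sum((fitted - v.mean()) ** 2) / np.sum((v - v.mean()) ** 2)
    lm = n * r2
    return lm, stats.chi2.sf(lm, df=p)
</syntaxhighlight>

Applied to the simulated sample above, for example breusch_godfrey(Y, np.column_stack([np.ones(T), X1, X2]), p=1), the test should reject the null hypothesis, since the errors are AR(1) by construction.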
==Software==
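In Python, for example, the test is available as acorr_breusch_godfrey in the statsmodels package. A minimal usage sketch, assuming the simulated data above (the nlags argument plays the role of p):

<syntaxhighlight lang="python">
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Fit the original regression by OLS, then test its residuals.
res = sm.OLS(Y, sm.add_constant(np.column_stack([X1, X2]))).fit()
lm, lm_pvalue, fval, f_pvalue = acorr_breusch_godfrey(res, nlags=1)
</syntaxhighlight>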