In most treatments of OLS, the regressors (explanatory variables) in the design matrix \mathbf{X} are assumed to be fixed in repeated samples. This assumption is considered inappropriate for a predominantly nonexperimental science like econometrics. Instead, the assumptions of the Gauss–Markov theorem are stated conditional on \mathbf{X}.
===Linearity===
The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables: the independent variables can take nonlinear forms as long as the parameters are linear. The equation y = \beta_{0} + \beta_{1} x^2 qualifies as linear, while y = \beta_{0} + \beta_{1}^2 x can be transformed to be linear by replacing \beta_{1}^2 with another parameter, say \gamma. An equation with a parameter that depends on an independent variable does not qualify as linear; for example, y = \beta_{0} + \beta_{1}(x) \cdot x, where \beta_{1}(x) is a function of x.
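As an illustration, the following is a minimal sketch (hypothetical data, using numpy) that fits the model y = \beta_0 + \beta_1 x^2 by placing x^2 rather than x in the design matrix; OLS applies unchanged because the model is linear in the parameters:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y is linear in the parameters but quadratic in x.
x = rng.uniform(0, 5, size=200)
y = 1.0 + 2.0 * x**2 + rng.normal(scale=1.0, size=200)

# Put x^2 (not x) in the design matrix, so y = b0 + b1 * x^2
# is linear in (b0, b1) and ordinary least squares applies.
X = np.column_stack([np.ones_like(x), x**2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # approximately [1.0, 2.0]
</syntaxhighlight>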
Data transformations are often used to convert an equation into a linear form. For example, the Cobb–Douglas function, often used in economics, is nonlinear:
:Y = A L^\alpha K^{1 - \alpha} e^\varepsilon
But it can be expressed in linear form by taking the natural logarithm of both sides:
:\ln Y = \ln A + \alpha \ln L + (1 - \alpha) \ln K + \varepsilon = \beta_0 + \beta_1 \ln L + \beta_2 \ln K + \varepsilon
This assumption also covers specification issues: assuming that the proper functional form has been selected and there are no omitted variables. One should be aware, however, that the parameters that minimize the residuals of the transformed equation do not necessarily minimize the residuals of the original equation.
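The following minimal sketch (hypothetical values A = 2 and \alpha = 0.7) simulates a Cobb–Douglas process and recovers its parameters by running OLS on the log-transformed equation:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical Cobb-Douglas process: Y = A * L^alpha * K^(1 - alpha) * e^eps.
A, alpha = 2.0, 0.7
L = rng.uniform(1, 10, size=n)
K = rng.uniform(1, 10, size=n)
eps = rng.normal(scale=0.05, size=n)
Y = A * L**alpha * K**(1 - alpha) * np.exp(eps)

# Taking logs yields ln Y = ln A + alpha * ln L + (1 - alpha) * ln K + eps,
# which is linear in the parameters (ln A, alpha, 1 - alpha).
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
beta_hat, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print(beta_hat)  # approximately [ln 2, 0.7, 0.3]
</syntaxhighlight>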
===Strict exogeneity===
For all n observations, the expectation of the error term, conditional on the regressors, is zero:
:\operatorname{E}[\,\varepsilon_{i} \mid \mathbf{X}\,] = \operatorname{E}[\,\varepsilon_{i} \mid \mathbf{x}_{1}, \dots, \mathbf{x}_{n}\,] = 0,
where \mathbf{x}_i = \begin{bmatrix} x_{i1} & x_{i2} & \cdots & x_{ik} \end{bmatrix}^{\operatorname{T}} is the data vector of regressors for the ith observation, and consequently \mathbf{X} = \begin{bmatrix} \mathbf{x}_{1}^{\operatorname{T}} & \mathbf{x}_{2}^{\operatorname{T}} & \cdots & \mathbf{x}_{n}^{\operatorname{T}} \end{bmatrix}^{\operatorname{T}} is the data matrix or design matrix. Geometrically, this assumption implies that \mathbf{x}_{i} and \varepsilon_{i} are orthogonal to each other, so that their inner product (i.e., their cross moment) is zero:
:\operatorname{E}[\,\mathbf{x}_{j} \cdot \varepsilon_{i}\,] = \begin{bmatrix} \operatorname{E}[\,x_{j1} \cdot \varepsilon_{i}\,] \\ \operatorname{E}[\,x_{j2} \cdot \varepsilon_{i}\,] \\ \vdots \\ \operatorname{E}[\,x_{jk} \cdot \varepsilon_{i}\,] \end{bmatrix} = \mathbf{0} \quad \text{for all } i, j = 1, \dots, n.
This assumption is violated if the explanatory variables are measured with error, or are endogenous. Endogeneity can be the result of simultaneity, where causality flows back and forth between the dependent and independent variables. Instrumental variable techniques are commonly used to address this problem.
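As a sketch of the problem and one remedy (hypothetical data; the instrument z, the shock u, and all coefficients are invented for illustration), the following simulates a regressor that shares an unobserved shock with the dependent variable, so OLS is biased, and then applies two-stage least squares:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical setup: x is endogenous because the shock u enters both x and y.
z = rng.normal(size=n)               # instrument: affects x, independent of u
u = rng.normal(size=n)               # unobserved shock
x = 0.8 * z + u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u                # true slope is 2, but E[x * u] != 0

X = np.column_stack([np.ones(n), x])
ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Two-stage least squares: first regress x on the instrument,
# then regress y on the fitted values of x.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
iv = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)[0]

print(ols[1])  # biased upward, roughly 2.4
print(iv[1])   # close to the true slope 2
</syntaxhighlight>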
===Full rank===
The sample data matrix \mathbf{X} must have full column rank:
:\operatorname{rank}(\mathbf{X}) = k.
Otherwise \mathbf{X}^\operatorname{T} \mathbf{X} is not invertible and the OLS estimator cannot be computed. A violation of this assumption is perfect multicollinearity, i.e. some explanatory variables are linearly dependent. One scenario in which this occurs is the "dummy variable trap": when a base dummy variable is not omitted, the dummy variables and the constant term are perfectly correlated. Multicollinearity (as long as it is not "perfect") can be present, resulting in a less efficient but still unbiased estimate; the estimates will be less precise and highly sensitive to particular sets of data. Multicollinearity can be detected from the condition number or the variance inflation factor, among other tests.
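The following sketch (hypothetical, nearly collinear data) computes both diagnostics with numpy: the condition number of \mathbf{X} directly, and the variance inflation factor of one regressor as 1/(1 - R^2) from regressing it on the other columns:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 1_000

# Hypothetical regressors: x2 is almost a linear function of x1.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
X = np.column_stack([np.ones(n), x1, x2])

# Condition number of the design matrix; large values signal multicollinearity.
print(np.linalg.cond(X))

# Variance inflation factor of x1: regress x1 on the remaining columns
# and compute 1 / (1 - R^2). The residual mean is zero because an
# intercept is included, so the variance ratio gives R^2 directly.
others = np.column_stack([np.ones(n), x2])
resid = x1 - others @ np.linalg.lstsq(others, x1, rcond=None)[0]
r2 = 1 - resid.var() / x1.var()
print(1 / (1 - r2))  # far above the common rule-of-thumb cutoff of 10
</syntaxhighlight>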
===Spherical errors===
The outer product of the error vector must be spherical:
:\operatorname{E}[\,\boldsymbol{\varepsilon} \boldsymbol{\varepsilon}^{\operatorname{T}} \mid \mathbf{X}\,] = \operatorname{Var}[\,\boldsymbol{\varepsilon} \mid \mathbf{X}\,] = \begin{bmatrix} \sigma^{2} & 0 & \cdots & 0 \\ 0 & \sigma^{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^{2} \end{bmatrix} = \sigma^{2} \mathbf{I} \quad \text{with } \sigma^{2} > 0.
This implies the error term has uniform variance (homoscedasticity) and no serial correlation. If this assumption is violated, OLS is still unbiased, but inefficient. The term "spherical errors" comes from the multivariate normal distribution: if \operatorname{Var}[\,\boldsymbol{\varepsilon} \mid \mathbf{X}\,] = \sigma^{2} \mathbf{I} in the multivariate normal density, then the equation f(\varepsilon) = c describes a sphere centered at \mu in n-dimensional space, with a radius determined by c and \sigma.
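A small numerical check of this geometric fact (hypothetical values \sigma = 1.5 and n = 3): with covariance \sigma^{2} \mathbf{I}, the multivariate normal density depends on \varepsilon only through its distance from the mean, so any two points equally far from \mu have equal density:
<syntaxhighlight lang="python">
import numpy as np

sigma, n = 1.5, 3          # hypothetical values for illustration
mu = np.zeros(n)

def density(eps):
    # Multivariate normal density with spherical covariance sigma^2 * I:
    # it depends on eps only through the squared distance from mu.
    quad = (eps - mu) @ (eps - mu) / sigma**2
    return np.exp(-0.5 * quad) / (2 * np.pi * sigma**2) ** (n / 2)

rng = np.random.default_rng(4)
v = rng.normal(size=n)
e1 = 2.0 * v / np.linalg.norm(v)   # a point at distance 2 from mu
w = rng.normal(size=n)
e2 = 2.0 * w / np.linalg.norm(w)   # a different point at the same distance

print(np.isclose(density(e1), density(e2)))  # True: equal density on the sphere
</syntaxhighlight>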
Heteroskedasticity occurs when the variance of the error is correlated with an independent variable. For example, in a regression of food expenditure on income, the error is correlated with income: low-income people generally spend a similar amount on food, while high-income people may spend a very large amount or as little as low-income people do. Heteroskedasticity can also be caused by changes in measurement practices; for example, as statistical offices improve their data, measurement error decreases, so the error term declines over time.
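As a sketch of one standard remedy (hypothetical data mimicking the food-expenditure example, with the error standard deviation assumed known for simplicity), weighted least squares, a special case of the generalized least squares estimator mentioned at the end of this section, rescales each observation by the inverse of its error standard deviation; OLS remains unbiased here, but the weighted estimate is more efficient:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n = 5_000

# Hypothetical data: the error spread grows with income (heteroskedasticity).
income = rng.uniform(1, 10, size=n)
eps = rng.normal(scale=0.5 * income)
food = 1.0 + 0.3 * income + eps

X = np.column_stack([np.ones(n), income])
ols = np.linalg.lstsq(X, food, rcond=None)[0]  # unbiased but inefficient

# Weighted least squares: divide each row by its error standard deviation,
# here assumed known to be 0.5 * income (an illustration-only assumption).
w = 1.0 / (0.5 * income)
wls = np.linalg.lstsq(X * w[:, None], food * w, rcond=None)[0]

print(ols)  # roughly [1.0, 0.3], with larger sampling variance
print(wls)  # roughly [1.0, 0.3], the efficient estimate
</syntaxhighlight>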
The spherical errors assumption is also violated when there is autocorrelation. Autocorrelation can be visualized on a data plot: a given observation is more likely to lie above the fitted regression line when adjacent observations also lie above it. Autocorrelation is common in time series data, where a data series may experience "inertia": if a dependent variable takes a while to fully absorb a shock, successive errors will be correlated. Spatial autocorrelation can also occur, as nearby geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification, such as choosing the wrong functional form; in these cases, correcting the specification is one possible way to deal with it. When the spherical errors assumption is violated, the generalized least squares estimator can be shown to be BLUE.
==See also==