The projection matrix has a number of useful algebraic properties. In the language of linear algebra, the projection matrix is the orthogonal projection onto the column space of the design matrix \mathbf{X}.
• \mathbf{X} is invariant under \mathbf{P}: \mathbf{P X} = \mathbf{X}, hence \left( \mathbf{I} - \mathbf{P} \right) \mathbf{X} = \mathbf{0}.
• \left( \mathbf{I} - \mathbf{P} \right) \mathbf{P} = \mathbf{P} \left( \mathbf{I} - \mathbf{P} \right) = \mathbf{0}.
• \mathbf{P} is unique: a given subspace admits exactly one orthogonal projection matrix onto it.

The projection matrix corresponding to a linear model is symmetric and idempotent, that is, \mathbf{P}^2 = \mathbf{P}. However, this is not always the case; in locally weighted scatterplot smoothing (LOESS), for example, the hat matrix is in general neither symmetric nor idempotent.
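These identities are straightforward to check numerically. The following is a minimal NumPy sketch, assuming a random full-rank design matrix \mathbf{X} (all names here are illustrative, not from the source), that verifies invariance, the annihilation identity, symmetry, and idempotence:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = rng.standard_normal((n, p))          # assumed full-rank design matrix

# Hat/projection matrix: P = X (X^T X)^{-1} X^T
P = X @ np.linalg.inv(X.T @ X) @ X.T
I = np.eye(n)

print(np.allclose(P @ X, X))             # X is invariant under P
print(np.allclose((I - P) @ P, 0))       # (I - P) P = 0
print(np.allclose(P, P.T))               # P is symmetric
print(np.allclose(P @ P, P))             # P is idempotent: P^2 = P
```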
For linear models, the trace of the projection matrix is equal to the rank of \mathbf{X}, which is the number of independent parameters of the linear model. For other models, such as LOESS, that are still linear in the observations \mathbf{y}, the projection matrix can be used to define the effective degrees of freedom of the model.
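As a sketch of both statements, continuing the example above: \operatorname{tr}(\mathbf{P}) recovers the parameter count for the linear model, while a simple Gaussian kernel smoother (an illustrative stand-in for LOESS, not LOESS itself) yields a smoother matrix \mathbf{L} that is linear in \mathbf{y} but neither symmetric nor idempotent, with \operatorname{tr}(\mathbf{L}) serving as its effective degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 3
X = rng.standard_normal((n, p))
P = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.trace(P))                       # ~3.0: trace(P) = rank(X) = number of parameters
print(np.linalg.matrix_rank(X))          # 3

# A kernel smoother y_hat = L y is linear in y, but L is in general
# neither symmetric nor idempotent; trace(L) defines its effective
# degrees of freedom.  (Illustrative stand-in for LOESS.)
t = np.sort(rng.uniform(size=n))
W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2)
L = W / W.sum(axis=1, keepdims=True)     # row-normalize the kernel weights
print(np.trace(L))                       # effective degrees of freedom
print(np.allclose(L, L.T), np.allclose(L @ L, L))  # False, False
```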
Practical applications of the projection matrix in regression analysis include leverage and Cook's distance, which are concerned with identifying influential observations, i.e. observations which have a large effect on the results of a regression.
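As an illustration, a minimal sketch on simulated data (the data and names are assumptions for the example): the leverage of observation i is the diagonal entry h_i of the hat matrix, and Cook's distance combines leverage with the residual via the standard formula D_i = e_i^2 h_i / \left( p s^2 (1 - h_i)^2 \right), with s^2 the usual residual variance estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.standard_normal(n)    # simulated response

P = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(P)                           # leverage of each observation
resid = y - P @ y                        # residuals e = (I - P) y
s2 = resid @ resid / (n - p)             # residual variance estimate

# Cook's distance: D_i = e_i^2 h_i / (p s^2 (1 - h_i)^2)
D = resid**2 * h / (p * s2 * (1 - h) ** 2)
print(np.argsort(D)[::-1][:3])           # indices of the most influential points
```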
== Blockwise formula ==