
Marchenko–Pastur distribution

In the mathematical theory of random matrices, the Marchenko–Pastur distribution, or Marchenko–Pastur law, describes the asymptotic behavior of singular values of large rectangular random matrices. The theorem is named after Soviet Ukrainian mathematicians Volodymyr Marchenko and Leonid Pastur who proved this result in 1967.

Singular value bounds in the large system limit
As the dimensions of a random matrix \mathbf X grow large, its largest and smallest singular values converge to \|\mathbf X\|_F \left(\frac {1} {\sqrt{\min(m,n)}} \pm \frac 1 {\sqrt{\max(m,n)}} \right). These are useful approximations of singular value bounds for large matrices. For matrices of finite size as are typically encountered, they are more what you'd call "guidelines" than actual rules.
Moments
For each k \geq 1, its k-th moment is

: \sum_{r=0}^{k-1}\frac{\sigma^{2k}}{r+1}\binom{k}{r}\binom{k-1}{r} \lambda^{r} = \frac{\sigma^{2k}}{k}\sum_{r=0}^{k-1}\binom{k}{r}\binom{k}{r+1} \lambda^{r}.
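The equality of the two sums can be checked directly with the standard library; the values of \sigma^2 and \lambda below are illustrative.

```python
from math import comb

sigma2, lam = 1.5, 0.4  # illustrative values of sigma^2 and lambda

def moment_lhs(k):
    # sum_{r=0}^{k-1} sigma^{2k}/(r+1) * C(k,r) * C(k-1,r) * lambda^r
    return sum(sigma2**k / (r + 1) * comb(k, r) * comb(k - 1, r) * lam**r
               for r in range(k))

def moment_rhs(k):
    # (sigma^{2k}/k) * sum_{r=0}^{k-1} C(k,r) * C(k,r+1) * lambda^r
    return (sigma2**k / k) * sum(comb(k, r) * comb(k, r + 1) * lam**r
                                 for r in range(k))

for k in range(1, 8):
    assert abs(moment_lhs(k) - moment_rhs(k)) < 1e-9 * abs(moment_rhs(k))
    print(k, moment_lhs(k))
```

The first two moments come out as \sigma^2 and \sigma^4(1+\lambda), matching the mean and second moment of the distribution.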
Some transforms of this law
The Stieltjes transform is given by

: s(z)=\frac{\sigma^2 (1-\lambda)-z - \sqrt{(z- \sigma^2(\lambda + 1))^2-4\lambda \sigma^4}}{2\lambda z \sigma^2}

for complex numbers z of positive imaginary part, where the complex square root is also taken to have positive imaginary part. It satisfies the quadratic equation

: \lambda \sigma^2 z s(z)^2+\left(z-\sigma^2(1-\lambda)\right) s(z)+1=0.

The Stieltjes transform can be repackaged in the form of the R-transform, which is given by

: R(z)=\frac{\sigma^2}{1-\sigma^2 \lambda z}.

The S-transform is given by

: S(z)=\frac{1}{\sigma^2 (1 + \lambda z)}.

For the case of \sigma=1, the \eta-transform is given by \eta(\gamma)=\mathbb{E}\frac{1}{1+\gamma X}, where X satisfies the Marchenko–Pastur law. It evaluates to

: \eta(\gamma)= 1 - \frac{\mathcal{F}(\gamma,\lambda)}{4\gamma\lambda},

where \mathcal{F}(x,z)=\left(\sqrt{x(1+\sqrt{z})^2+1}-\sqrt{x(1-\sqrt{z})^2+1}\right)^2.

For exact analysis of high-dimensional regression in the proportional asymptotic regime, a convenient form is often T(u):=\eta\left(\tfrac1u\right), which simplifies to

: T(u)= \frac{-1+\lambda-u+\sqrt{(1+u-\lambda)^2+4u\lambda}}{2\lambda}.

The functions B(u):=\mathbb{E}\left(\frac{u}{X+u}\right)^2 and V(u):=\mathbb{E}\frac{X}{(X+u)^2}, where X satisfies the Marchenko–Pastur law, appear in the limiting bias and variance, respectively, of ridge regression and other regularized linear regression problems. One can show that B(u)=T(u)-u\cdot T'(u) and V(u)= T'(u).
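A quick numerical sanity check of two of these formulas (the values of \sigma^2, \lambda, z, and u are illustrative, and the matrix sizes are arbitrary): the closed-form s(z) indeed satisfies the quadratic equation, and T(u) agrees with the empirical average \mathbb{E}\,u/(X+u) computed from the eigenvalues of a simulated sample covariance matrix.

```python
import numpy as np

# --- Check 1: the closed-form Stieltjes transform solves its quadratic. ---
sigma2, lam = 1.0, 0.5          # sigma^2 and lambda (illustrative)
z = 2.0 + 1.0j                  # a point in the upper half-plane

root = np.sqrt((z - sigma2 * (lam + 1)) ** 2 - 4 * lam * sigma2**2 + 0j)
if root.imag < 0:               # take the square root with Im > 0
    root = -root
s = (sigma2 * (1 - lam) - z - root) / (2 * lam * z * sigma2)
residual = lam * sigma2 * z * s**2 + (z - sigma2 * (1 - lam)) * s + 1
print(abs(residual))            # ~ 0 up to floating-point error

# --- Check 2: T(u) against the spectrum of a simulated covariance matrix. ---
rng = np.random.default_rng(0)
m, n = 1000, 2000               # lam = m/n = 0.5, sigma = 1
Y = rng.standard_normal((m, n))
eigs = np.linalg.eigvalsh(Y @ Y.T / n)   # spectrum ~ Marchenko-Pastur

u = 1.0
T_closed = (-1 + lam - u + np.sqrt((1 + u - lam) ** 2 + 4 * u * lam)) / (2 * lam)
T_empirical = np.mean(u / (eigs + u))
print(T_closed, T_empirical)    # close for large m, n
```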
Application to correlation matrices
For the special case of correlation matrices, we know that \sigma^2=1 and \lambda=m/n, where m is the number of variables and n the number of observations. This bounds the probability mass over the interval defined by

: \lambda_{\pm} = \left(1 \pm \sqrt{\frac m n}\right)^2.

Since this distribution describes the spectrum of random matrices with mean 0, the eigenvalues of correlation matrices that fall inside the aforementioned interval could be considered spurious or noise. For instance, a correlation matrix of 10 stock returns calculated over a period of 252 trading days would give

: \lambda_+=\left(1+\sqrt{\frac{10}{252}}\right)^2\approx 1.43.

Thus, out of the 10 eigenvalues of said correlation matrix, only those higher than 1.43 would be considered significantly different from random.
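The filtering procedure can be sketched as follows; the simulated i.i.d. "returns" stand in for real data, so their correlation matrix is pure noise by construction.

```python
import numpy as np

# m = 10 "stocks" observed over n = 252 trading days, as in the example.
rng = np.random.default_rng(1)
m, n = 10, 252

lam_plus = (1 + np.sqrt(m / n)) ** 2    # upper edge of the noise band, ~1.43
lam_minus = (1 - np.sqrt(m / n)) ** 2   # lower edge

# Simulated i.i.d. returns: the resulting correlation matrix is pure noise.
returns = rng.standard_normal((n, m))
corr = np.corrcoef(returns, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)

n_signal = int(np.sum(eigenvalues > lam_plus))
print(f"lambda_+ = {lam_plus:.2f}")
print(f"eigenvalues above the noise band: {n_signal} of {m}")
```

Note that for a matrix this small the asymptotic edges are only approximate, so a borderline eigenvalue just above \lambda_+ is weak evidence of signal on its own.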