
Seven states of randomness

The seven states of randomness in probability theory, fractals and risk analysis are extensions of the concept of randomness as modeled by the normal distribution. These seven states were first introduced by Benoît Mandelbrot in his 1997 book Fractals and Scaling in Finance, which applied fractal analysis to the study of risk and randomness. This classification builds upon the three main states of randomness: mild, slow, and wild.

History
These seven states build on earlier work of Mandelbrot in 1963: "The variations of certain speculative prices" and "New methods in statistical economics", in which he argued that most statistical models approach only a first stage of dealing with indeterminism in science and ignore many aspects of real-world turbulence, in particular most cases of financial modeling. Mandelbrot then presented these ideas at the International Congress for Logic (1964) in an address titled "The Epistemology of Chance in Certain Newer Sciences".

Using elements of this distinction, in March 2006, before the 2008 financial crisis and four years before the 2010 Flash Crash, during which the Dow Jones Industrial Average had a 1,000-point intraday swing within minutes, Mandelbrot and Nassim Taleb published an article in the Financial Times arguing that the traditional "bell curves" that have been in use for over a century are inadequate for measuring risk in financial markets, given that such curves disregard the possibility of sharp jumps or discontinuities. Contrasting this approach with the traditional approaches based on random walks, they stated: "We live in a world primarily driven by random jumps, and tools designed for random walks address the wrong problem."

Mandelbrot and Taleb pointed out that although one can assume the odds of finding a person who is several miles tall are extremely low, similarly extreme observations cannot be excluded in other areas of application. They argued that while traditional bell curves may provide a satisfactory representation of height and weight in the population, they do not provide a suitable modeling mechanism for market risks or returns, where just ten trading days represent 63 per cent of the returns between 1956 and 2006.
Definitions
Doubling convolution

If the probability density of U = U' + U'' is denoted p_2(u), then it can be obtained by the doubling convolution:

:p_2(u) = \int p(u') p(u-u')\,du'

Short-run portioning ratio

When u is known, the conditional probability density of u' is given by the portioning ratio:

:\frac{p(u')p(u-u')}{p_{2}(u)}

Concentration in mode

In many important cases, the maximum of p(u')p(u-u') occurs near u'=u/2, or near u'=0 and u'=u. Taking the logarithm of p(u')p(u-u'), write:

:\Delta(u) = 2 \log p(u/2) - [\log p(0) + \log p(u)]

• If \log p(u) is cap-convex, the portioning ratio is maximal for u'=u/2.
• If \log p(u) is straight, the portioning ratio is constant.
• If \log p(u) is cup-convex, the portioning ratio is minimal for u'=u/2.

Concentration in probability

Splitting the doubling convolution into three parts gives:

:p_2(u) = \int_0^u p(u')p(u-u') \, du' = \left \{ \int_0^{\tilde u} + \int_{\tilde u}^{u-\tilde u} + \int_{u-\tilde u}^{u} \right \} p(u')p(u-u') \, du' = I_L + I_0 + I_R

p(u) is short-run concentrated in probability if it is possible to select \tilde u(u) so that the middle interval (\tilde u, u-\tilde u) has the following two properties as u → ∞:

• I_0/p_2(u) → 0
• (u-2\tilde u)/u does not → 0

Localized and delocalized moments

Consider the moment formula \operatorname{E}[U^{q}] = \int_0^\infty u^q p(u) \, du. If p(u) is the scaling distribution, the integrand is maximal at 0 and ∞; in other cases the integrand may have a sharp global maximum at some value \tilde u_q defined by the equation:

:0 = \frac{d}{du} (q \log u + \log p(u)) = \frac{q}{u} - \left|\frac{d \log p(u)}{du}\right|

One must also know u^q p(u) in the neighborhood of \tilde u_q.
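As an illustrative check (not from the source), the sketch below locates \tilde u_q numerically for a lognormal density with hypothetical parameters μ = 0 and σ = 1. For this density the condition q/u = |d log p(u)/du| has the closed-form solution \tilde u_q = exp(μ + (q − 1)σ²), which a grid search over log u should reproduce:

```python
import math

def lognormal_log_pdf(u, mu=0.0, sigma=1.0):
    """Log of the lognormal density (mu, sigma are hypothetical parameters)."""
    return (-math.log(u) - math.log(sigma) - 0.5 * math.log(2 * math.pi)
            - (math.log(u) - mu) ** 2 / (2 * sigma ** 2))

def u_tilde(q, mu=0.0, sigma=1.0):
    """Locate the maximum of u^q p(u) by scanning a grid in log u."""
    best_lu, best_val = None, -math.inf
    n = 20000
    for i in range(n + 1):
        lu = -10.0 + 20.0 * i / n          # log u in [-10, 10], step 0.001
        val = q * lu + lognormal_log_pdf(math.exp(lu), mu, sigma)
        if val > best_val:
            best_lu, best_val = lu, val
    return math.exp(best_lu)

# For the lognormal, q/u = |d log p(u)/du| gives log u_q = mu + (q - 1) sigma^2.
for q in (1, 2, 3):
    assert abs(math.log(u_tilde(q)) - (q - 1)) < 1e-2
```

With μ = 0 and σ = 1 the maxima log \tilde u_q = q − 1 are uniformly spaced in q, a fact used below when discussing the lognormal's q-intervals.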
The function u^{q}p(u) often admits a "Gaussian" approximation given by:

:\log[u^q p(u)] = \log p(u) + q \log u = \text{constant} - \frac{(u-\tilde u_q)^2}{2 \tilde \sigma_q^2}

When u^q p(u) is well approximated by a Gaussian density, the bulk of \operatorname{E}[U^{q}] originates in the "q-interval" defined as [\tilde u_q-\tilde \sigma_q,\tilde u_q+\tilde \sigma_q].

The Gaussian q-intervals greatly overlap for all values of q; the Gaussian moments are called delocalized. The lognormal's q-intervals are uniformly spaced and their width is independent of q; therefore, if the lognormal is sufficiently skew, the q-interval and (q + 1)-interval do not overlap, and the lognormal moments are called uniformly localized. In other cases, neighboring q-intervals cease to overlap for sufficiently high q; such moments are called asymptotically localized.
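The overlap behaviour can be made concrete with a small numerical sketch (an illustration, not from the source; the interval width is measured here on a log-u scale, which is an assumption of this sketch). For each q it brackets the region where the integrand u^q p(u) stays within a factor e^{-1/2} of its peak, then checks that adjacent intervals overlap for a half-normal density but become disjoint for a strongly skew lognormal (σ = 2.5):

```python
import math

def q_interval(log_p, q, lo=-30.0, hi=30.0, n=60001):
    """Estimate the q-interval on a log-u scale: the region where the
    integrand u^q p(u) stays within a factor exp(-1/2) of its maximum
    (a stand-in for [u_q - sigma_q, u_q + sigma_q], measured in log u)."""
    step = (hi - lo) / (n - 1)
    g = [q * (lo + i * step) + log_p(math.exp(lo + i * step)) for i in range(n)]
    peak = max(range(n), key=g.__getitem__)
    cutoff = g[peak] - 0.5
    left, right = peak, peak
    while left > 0 and g[left - 1] >= cutoff:
        left -= 1
    while right < n - 1 and g[right + 1] >= cutoff:
        right += 1
    return lo + left * step, lo + right * step

# Unnormalized log-densities (additive constants affect neither peak nor width).
halfnormal = lambda u: -u * u / 2.0                              # a "mild" case
lognormal = lambda u: -math.log(u) - math.log(u) ** 2 / 12.5     # sigma = 2.5, very skew

for q in (2, 3):
    _, r = q_interval(halfnormal, q)
    l, _ = q_interval(halfnormal, q + 1)
    assert l < r   # delocalized: the q- and (q+1)-intervals overlap
    _, r = q_interval(lognormal, q)
    l, _ = q_interval(lognormal, q + 1)
    assert l > r   # localized: the intervals are disjoint
```

For the σ = 2.5 lognormal the interval centers sit at log u = σ²(q − 1) = 6.25(q − 1) with half-width σ = 2.5, so consecutive intervals are separated by gaps, matching the "uniformly localized" description above.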