Figure 1. Probability density functions of normal distributions, an example of a unimodal distribution. In
statistics, a
unimodal probability distribution or
unimodal distribution is a
probability distribution with a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode that is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more modes, it is "bimodal" (2), "trimodal" (3), etc., or, in general, "multimodal". Figure 1 illustrates
normal distributions, which are unimodal. Other examples of unimodal distributions include the Cauchy distribution, Student's t-distribution, the chi-squared distribution and the exponential distribution. Among discrete distributions, the binomial distribution and the Poisson distribution can be seen as unimodal, though for some parameter values they can have two adjacent values with the same probability. Figures 2 and 3 illustrate bimodal distributions.
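The two-adjacent-values case can be seen directly in the binomial distribution: when (n + 1)p is an integer, the probabilities at (n + 1)p − 1 and (n + 1)p coincide. A minimal Python sketch (the helper name binom_pmf and the parameters n = 5, p = 0.5 are illustrative choices, not from the source):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# n = 5, p = 0.5: (n + 1)p = 3, so k = 2 and k = 3 tie for the maximum.
pmf = [binom_pmf(k, 5, 0.5) for k in range(6)]
print(pmf)
print(pmf[2] == pmf[3])  # True: two adjacent values share the peak
```

With these parameters the shared maximum probability is C(5, 2)/32 = 0.3125.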
Other definitions

Other definitions of unimodality in distribution functions also exist. In continuous distributions, unimodality can be defined through the behavior of the
cumulative distribution function (cdf). If the cdf is
convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the
uniform distribution is unimodal, as is any other distribution in which the maximum density is achieved over a range of values, e.g. the trapezoidal distribution. Note that this definition allows for a discontinuity at the mode: in a continuous distribution the probability of any single value is ordinarily zero, whereas this definition allows a non-zero probability, an "atom of probability", at the mode. Criteria for unimodality can also be defined through the
characteristic function of the distribution. Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of differences of the probabilities. A discrete distribution with a
probability mass function, \{p_n : n = \dots, -1, 0, 1, \dots\}, is called unimodal if the sequence \dots, p_{-2} - p_{-1}, p_{-1} - p_0, p_0 - p_1, p_1 - p_2, \dots has exactly one sign change (when zeroes don't count).
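The sign-change criterion can be checked mechanically. In the following Python sketch (the function name is_unimodal is an illustrative choice), a finite-support pmf is padded with zeros so that its difference sequence matches the doubly infinite one in the definition:

```python
def is_unimodal(pmf: list[float]) -> bool:
    """True iff the difference sequence of the zero-padded pmf has
    exactly one sign change, zeros not counting."""
    padded = [0.0] + list(pmf) + [0.0]   # embed as ..., 0, p_0, ..., p_n, 0, ...
    diffs = [a - b for a, b in zip(padded, padded[1:])]
    signs = [1 if d > 0 else -1 for d in diffs if d != 0]  # zeros don't count
    changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return changes == 1

print(is_unimodal([0.1, 0.2, 0.4, 0.2, 0.1]))  # True: one peak
print(is_unimodal([0.3, 0.1, 0.2, 0.1, 0.3]))  # False: two peaks
```

A monotone pmf such as [0.1, 0.2, 0.3, 0.4] also counts as unimodal under this criterion, since its single sign change occurs at the boundary of the support.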
Uses and results

One reason for the importance of distribution unimodality is that it allows for several important results. Several
inequalities are given below which are only valid for unimodal distributions. Thus, it is important to assess whether or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on
multimodal distribution.
Inequalities

Gauss's inequality

A first important result is
Gauss's inequality. Gauss's inequality gives an upper bound on the probability that a value lies more than any given distance from its mode. This inequality depends on unimodality.
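Concretely, if τ² is the second moment about the mode m, Gauss's inequality gives P(|X − m| > k) ≤ 4τ²/(9k²) for k ≥ 2τ/√3. A Monte Carlo sketch of this bound for a standard normal distribution (the sample size, seed, and choice of k = 2 are arbitrary illustrative choices):

```python
import random

random.seed(42)
tau, k, n = 1.0, 2.0, 100_000          # standard normal: mode 0, tau = sigma = 1
sample = [random.gauss(0.0, 1.0) for _ in range(n)]
empirical = sum(abs(x) > k for x in sample) / n
bound = 4 * tau**2 / (9 * k**2)        # Gauss's bound, valid since k >= 2/sqrt(3)
print(empirical, bound)                # empirical tail is well below the bound
```

For k = 2 the bound is 1/9 ≈ 0.111, while the two-sided tail of a standard normal beyond 2σ is about 0.046.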
Vysochanskiï–Petunin inequality

A second is the
Vysochanskiï–Petunin inequality, a refinement of the
Chebyshev inequality. The Chebyshev inequality guarantees that in any probability distribution, "nearly all" the values are "close to" the mean value. The Vysochanskiï–Petunin inequality refines this to even nearer values, provided that the distribution function is continuous and unimodal. Further results were shown by Sellke and Sellke.
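Concretely, for λ > √(8/3) the Vysochanskiï–Petunin inequality bounds P(|X − μ| ≥ λσ) by 4/(9λ²), versus Chebyshev's 1/λ². A sketch comparing both bounds with the exact tail of a unit-rate exponential distribution, a continuous unimodal distribution with μ = σ = 1 (the choice of distribution and of λ = 2 is illustrative):

```python
import math

lam = 2.0
chebyshev = 1 / lam**2            # holds for any distribution
vp = 4 / (9 * lam**2)             # unimodal case, lam > sqrt(8/3) ~ 1.633
exact = math.exp(-3)              # P(|X - 1| >= 2) = P(X >= 3) for Exp(1)
print(exact, vp, chebyshev)       # exact tail < VP bound < Chebyshev bound
```

Here the exact tail e⁻³ ≈ 0.050 sits below the Vysochanskiï–Petunin bound 1/9 ≈ 0.111, which in turn improves on Chebyshev's 0.25.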
Mode, median and mean

Gauss also showed in 1823 that for a unimodal distribution : \sigma \le \omega \le 2 \sigma and : |\nu - \mu| \le \sqrt{\frac{3}{4}} \omega , where the
median is
ν, the mean is
μ and
ω is the
root mean square deviation from the mode. It can be shown for a unimodal distribution that the median
ν and the mean
μ lie within (3/5)^{1/2} ≈ 0.7746
standard deviations of each other. In symbols, : \frac{|\nu - \mu|}{\sigma} \le \sqrt{\frac{3}{5}} where | . | is the
absolute value. In 2020, Bernard, Kazzi, and Vanduffel generalized the previous inequality by deriving the maximum distance between the symmetric quantile average \frac{ q_\alpha + q_{(1-\alpha)} }{ 2 } and the mean, : \frac{ \left| \frac{ q_\alpha + q_{(1-\alpha)} }{2} - \mu \right| }{ \sigma } \le \left\{ \begin{array}{cl} \frac{\sqrt[]{\frac{4}{9(1-\alpha)}-1} \text{ } + \text{ } \sqrt[]{\frac{1-\alpha}{1/3+\alpha}}}{2} & \text{for }\alpha \in \left[\frac{5}{6},1\right)\!, \\ \frac{\sqrt[]{\frac{3 \alpha}{4-3\alpha}} \text{ } + \text{ } \sqrt[]{\frac{1-\alpha}{1/3+\alpha}}}{2} & \text{for }\alpha \in \left(\frac{1}{6},\frac{5}{6}\right)\!,\\ \frac{\sqrt[]{\frac{3 \alpha}{4-3\alpha}} \text{ } + \text{ } \sqrt[]{\frac{4}{9 \alpha} -1}}{2} & \text{for }\alpha \in \left(0,\frac{1}{6}\right]\!. \end{array} \right. The maximum distance is minimized at \alpha=0.5 (i.e., when the symmetric quantile average is equal to q_{0.5} = \nu), which indeed motivates the common choice of the median as a robust estimator for the mean. Moreover, when \alpha = 0.5, the bound is equal to \sqrt{3/5}, which is the maximum distance between the median and the mean of a unimodal distribution. A similar relation holds between the median and the mode
θ: they lie within 3^{1/2} ≈ 1.732 standard deviations of each other: : \frac{|\nu - \theta|}{\sigma} \le \sqrt{3}. It can also be shown that the mean and the mode lie within 3^{1/2} standard deviations of each other: : \frac{|\mu - \theta|}{\sigma} \le \sqrt{3}.
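As a numerical sanity check (an illustration, not part of the original text), these bounds can be verified on a unit-rate exponential distribution, for which the mean, median, mode, standard deviation, and root mean square deviation from the mode are all known in closed form:

```python
import math

# Unit-rate exponential: mean mu = 1, median nu = ln 2, mode theta = 0,
# sd sigma = 1, and omega^2 = E[(X - theta)^2] = 2.
mu, nu, theta, sigma = 1.0, math.log(2), 0.0, 1.0
omega = math.sqrt(2.0)

assert sigma <= omega <= 2 * sigma                  # Gauss (1823)
assert abs(nu - mu) <= math.sqrt(3 / 4) * omega     # Gauss (1823)
assert abs(nu - mu) / sigma <= math.sqrt(3 / 5)     # median vs. mean
assert abs(theta - nu) / sigma <= math.sqrt(3)      # mode vs. median
assert abs(mu - theta) / sigma <= math.sqrt(3)      # mean vs. mode
print("all five bounds hold")
```

For this distribution |ν − μ|/σ = 1 − ln 2 ≈ 0.307, comfortably inside the (3/5)^{1/2} ≈ 0.775 bound.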
Skewness and kurtosis

Rohatgi and Szekely claimed that the
skewness and
kurtosis of a unimodal distribution are related by the inequality: : \gamma^2 - \kappa \le \frac{ 6 }{ 5 } = 1.2 where
κ is the kurtosis and
γ is the skewness. Klaassen, Mokveld, and van Es showed that this only applies in certain settings, such as the set of unimodal distributions where the mode and mean coincide. They derived a weaker inequality which applies to all unimodal distributions: : \gamma^2 - \kappa \le \frac{ 186 }{ 125 } = 1.488 This bound is sharp, as it is reached by the equal-weights mixture of the uniform distribution on [0,1] and the discrete distribution concentrated at {0}.

==Unimodal function==