== Reshaping the distribution ==

=== Uniform distributions ===

Most random number generators natively work with integers or individual bits, so an extra step is required to arrive at the canonical uniform distribution between 0 and 1. The implementation is not as trivial as dividing the integer by its maximum possible value. Specifically:
• The integer used in the transformation must provide enough bits for the intended precision.
• The nature of floating-point math itself means there is more precision the closer the number is to zero. This extra precision is usually not used, due to the sheer number of bits required.
• Rounding error in division may bias the result; at worst, a supposedly excluded bound may be drawn, contrary to expectations based on real-number math.
The mainstream algorithm, used by
OpenJDK, Rust, and NumPy, is described in a proposal for C++'s STL. It does not use the extra precision and suffers from bias only in the last bit due to rounding half to even. Other numeric concerns are warranted when shifting this
canonical uniform distribution to a different range. A proposed method for the Swift programming language claims to use the full precision everywhere. Most RNGs that produce values between 0 and 1 include 0 but exclude 1; others include or exclude both bounds.

Uniformly distributed integers are commonly used in algorithms such as the Fisher–Yates shuffle. Again, a naive implementation (such as taking the generator output modulo the range size) may induce a modulo bias into the result, so more involved algorithms must be used. A method that nearly never performs division was described in 2018 by Daniel Lemire, with the current state of the art being the arithmetic coding-inspired 2021 "optimal algorithm" by Stephen Canon of Apple Inc.
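The two constructions above, the bit-to-float conversion and Lemire's nearly divisionless bounded-integer method, can be sketched in Python. This uses `random.getrandbits` as a stand-in for a raw word generator; the function names are illustrative, not taken from any of the cited libraries:

```python
import random

def bits_to_unit_double(bits64: int) -> float:
    """Map a 64-bit word to a double in [0, 1): keep the top 53 bits
    (the significand width of a double) and scale by 2**-53.
    The result includes 0 but excludes 1."""
    return (bits64 >> 11) * (1.0 / (1 << 53))

def bounded_int(n: int) -> int:
    """Unbiased integer in [0, n) via Lemire's method (sketch):
    take the high half of a 32-bit word times n, rejecting the few
    draws that would bias the result. Division happens only rarely."""
    x = random.getrandbits(32)
    m = x * n                    # 64-bit product
    low = m & 0xFFFFFFFF         # low half decides acceptance
    if low < n:                  # only then can bias occur
        threshold = (1 << 32) % n
        while low < threshold:   # reject and redraw
            x = random.getrandbits(32)
            m = x * n
            low = m & 0xFFFFFFFF
    return m >> 32               # high half is the result
```

The rejection branch runs with probability below n / 2³², which is why the method almost never divides in practice.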
=== Other distributions ===

Given a source of uniform random numbers, there are several methods to create a new random source that corresponds to a given probability density function. One method, called the inversion method, involves integrating the density up to an area greater than or equal to the random number (which should be generated between 0 and 1 for proper distributions); in other words, it applies the inverse of the cumulative distribution function. A second method, called the acceptance-rejection method, involves choosing an x and a y value and testing whether the density at x is greater than the y value. If it is, the x value is accepted. Otherwise, the x value is rejected and the algorithm tries again. As an example of rejection sampling, to generate a pair of
statistically independent standard normally distributed random numbers (x, y), one may first generate the polar coordinates (r, θ), where r² ~ χ²(2) and θ ~ Uniform(0, 2π) (see Box–Muller transform).
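Both methods can be illustrated in Python. The first function samples an exponential distribution by inverting its CDF, which has a closed form; the second is Marsaglia's polar variant of the Box–Muller transform, in which rejection sampling supplies a uniform point inside the unit circle. The names are illustrative sketches, not library APIs:

```python
import math
import random

def exponential_by_inversion(rate: float) -> float:
    """Inversion method: the exponential CDF F(x) = 1 - exp(-rate*x)
    inverts in closed form to x = -ln(1 - u) / rate."""
    u = random.random()          # uniform in [0, 1)
    return -math.log(1.0 - u) / rate

def standard_normal_pair():
    """Rejection sampling in polar form (Marsaglia's variant of the
    Box–Muller transform): draw points uniformly in the square,
    accept only those strictly inside the unit circle, then rescale."""
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        s = x * x + y * y
        if 0.0 < s < 1.0:        # reject points outside the circle
            scale = math.sqrt(-2.0 * math.log(s) / s)
            return x * scale, y * scale
```

The polar form avoids evaluating sine and cosine at the cost of rejecting about 21% of candidate points (the area outside the inscribed circle).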
== Whitening ==

The outputs of multiple independent RNGs can be combined (for example, using a bit-wise XOR operation) to provide a combined RNG at least as good as the best RNG used. This is referred to as software whitening. Computational and hardware random number generators are sometimes combined to reflect the benefits of both kinds: computational random number generators can typically generate pseudorandom numbers much faster than physical generators, while physical generators can generate true randomness.

== Low-discrepancy sequences as an alternative ==