It is common for probability density functions (and
probability mass functions) to be parametrized—that is, to be characterized by unspecified
parameters. For example, the
normal distribution is parametrized in terms of the
mean and the
variance, denoted by \mu and \sigma^2 respectively, giving the family of densities f(x;\mu,\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2 }. Different values of the parameters describe different distributions of different
random variables on the same
sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the
normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of
something in the domain occurring—equals 1). This normalization factor is outside the
kernel of the distribution. Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family means simply substituting the new parameter values into the formula in place of the old ones.
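To make the role of the parameters concrete, here is a minimal Python sketch (an illustration, not part of the standard treatment; the helper name normal_pdf is our own). Reparametrizing the family amounts to passing different constants:

```python
import numpy as np

def normal_pdf(x, mu, sigma2):
    """Density of the normal family f(x; mu, sigma^2)."""
    sigma = np.sqrt(sigma2)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Two members of the same family: different parameter values,
# same functional form of the density.
x = 1.0
print(normal_pdf(x, mu=0.0, sigma2=1.0))  # standard normal, ~0.2420
print(normal_pdf(x, mu=1.0, sigma2=4.0))  # shifted, wider member, ~0.1995
```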
== Densities associated with multiple variables ==

For continuous random variables X_1, \ldots, X_n, it is also possible to define a probability density function associated to the set as a whole, often called the joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X_1, \ldots, X_n, the probability that a realisation of the set of variables falls inside the domain D is \Pr \left( (X_1,\ldots,X_n) \in D \right) = \int_D f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)\,dx_1 \cdots dx_n.
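As an illustrative sketch, the probability of a domain D can be approximated numerically; the joint density below (two independent exponential variables) and the rectangle D are hypothetical choices:

```python
import numpy as np
from scipy.integrate import dblquad

# Hypothetical joint density: two independent Exp(1) variables.
def f_xy(x, y):
    return np.exp(-x - y) if (x >= 0 and y >= 0) else 0.0

# Pr((X, Y) in D) for the rectangle D = [0, 1] x [0, 2];
# dblquad's integrand takes (y, x), integrating over y first.
prob, _ = dblquad(lambda y, x: f_xy(x, y), 0, 1, 0, 2)
print(prob)  # (1 - e**-1) * (1 - e**-2) ~ 0.547
```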
If F = F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) is the cumulative distribution function of the vector (X_1, \ldots, X_n), then the joint probability density function can be computed as a partial derivative: f(x) = \left.\frac{\partial^n F}{\partial x_1 \cdots \partial x_n} \right|_x
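A quick numerical check of this relation, continuing the hypothetical exponential example: a finite-difference mixed partial of the cumulative distribution function should approximate the joint density:

```python
import numpy as np

# CDF of two independent Exp(1) variables (hypothetical example).
def F(x, y):
    return (1 - np.exp(-x)) * (1 - np.exp(-y))

def density_from_cdf(x, y, h=1e-4):
    """Approximate d^2 F / (dx dy) with a central difference."""
    return (F(x + h, y + h) - F(x + h, y - h)
            - F(x - h, y + h) + F(x - h, y - h)) / (4 * h * h)

print(density_from_cdf(1.0, 2.0))   # ~ e**-3 ~ 0.0498
print(np.exp(-1.0) * np.exp(-2.0))  # exact joint density, for comparison
```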
=== Marginal densities ===

For i = 1, 2, \ldots, n, let f_{X_i}(x_i) be the probability density function associated with the variable X_i alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X_1, \ldots, X_n by integrating over all values of the other n - 1 variables: f_{X_i}(x_i) = \int f(x_1,\ldots,x_n)\, dx_1 \cdots dx_{i-1}\,dx_{i+1}\cdots dx_n .
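For illustration, a marginal density can be obtained by numerically integrating out the other variable; the joint density f(x, y) = x + y on the unit square is a hypothetical choice:

```python
from scipy.integrate import quad

# Hypothetical joint density on the unit square: f(x, y) = x + y.
def f_xy(x, y):
    return x + y if (0 <= x <= 1 and 0 <= y <= 1) else 0.0

def marginal_x(x):
    """f_X(x) = integral of f(x, y) over all values of y."""
    value, _ = quad(lambda y: f_xy(x, y), 0, 1)
    return value

print(marginal_x(0.3))  # exact marginal is x + 1/2 = 0.8
```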
=== Independence ===

Continuous random variables X_1, \ldots, X_n admitting a joint density are all independent from each other if and only if f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = f_{X_1}(x_1)\cdots f_{X_n}(x_n).
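A small sketch of this criterion, using a bivariate normal as a hypothetical example: with zero correlation the joint density factors into the product of its marginals, while with correlation 0.8 it does not:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

x, y = 0.7, -1.2

# Independent case: the joint density equals the product of the marginals.
indep = multivariate_normal(mean=[0, 0], cov=[[1, 0], [0, 1]])
print(np.isclose(indep.pdf([x, y]), norm.pdf(x) * norm.pdf(y)))  # True

# Correlated case: the factorization fails, so X and Y are not independent.
corr = multivariate_normal(mean=[0, 0], cov=[[1, 0.8], [0.8, 1]])
print(np.isclose(corr.pdf([x, y]), norm.pdf(x) * norm.pdf(y)))   # False
```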
=== Corollary ===

If the joint probability density function of a vector of n random variables can be factored into a product of functions of one variable f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = f_1(x_1)\cdots f_n(x_n) (where each f_i is not necessarily a density), then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by f_{X_i}(x_i) = \frac{f_i(x_i)}{\int f_i(x)\,dx}.
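A sketch of the corollary under an assumed factorization: the factors below are deliberately scaled so that neither is a density on its own, yet their product is a valid joint density, and normalizing each factor recovers the corresponding marginal:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical factorization f(x, y) = f1(x) * f2(y) of a valid joint
# density; the constants 3 and 1/3 cancel in the product.
f1 = lambda x: 3.0 * np.exp(-x) if x >= 0 else 0.0
f2 = lambda y: np.exp(-y) / 3.0 if y >= 0 else 0.0

# Normalizing the factor by its integral yields the marginal density.
c1, _ = quad(f1, 0, np.inf)
marginal_x = lambda x: f1(x) / c1
print(marginal_x(1.0), np.exp(-1.0))  # both ~ 0.3679
```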
=== Example ===

This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call \vec R a 2-dimensional random vector of coordinates (X, Y): the probability to obtain \vec R in the quarter plane of positive x and y is \Pr \left( X > 0, Y > 0 \right) = \int_0^\infty \int_0^\infty f_{X,Y}(x,y)\,dx\,dy.
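A numerical version of this example (assuming independent standard normal coordinates, a hypothetical choice) integrates the joint density over the quarter plane; by symmetry the result is about 1/4:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import norm

# Hypothetical joint density: two independent standard normals.
f_xy = lambda y, x: norm.pdf(x) * norm.pdf(y)

# Pr(X > 0, Y > 0): integrate the joint density over the quarter plane.
prob, _ = dblquad(f_xy, 0, np.inf, 0, np.inf)
print(prob)  # ~ 0.25
```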
==Function of random variables and change of variables in the probability density function==