Mathematical functions have one or more
arguments that are designated in the definition by
variables. A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters. For instance, one could define a general
quadratic function by declaring

f(x) = ax^2 + bx + c.

Here, the variable
x designates the function's argument, but
a,
b, and
c are parameters (in this instance, also called
coefficients) that determine which particular quadratic function is being considered. A parameter could be incorporated into the function name to indicate its dependence on the parameter. For instance, one may define the base-
b logarithm by the formula

\log_b(x) = \frac{\log(x)}{\log(b)}

where
b is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the
derivative \log_b'(x) = (x \ln b)^{-1}. In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. However, changing the status of symbols between parameter and variable changes the function as a mathematical object. For instance, the notation for the
falling factorial power

n^{\underline{k}} = n(n-1)(n-2)\cdots(n-k+1),

defines a
polynomial function of
n (when
k is considered a parameter), but is not a polynomial function of
k (when
n is considered a parameter). Indeed, in the latter case, it is only defined for non-negative integer arguments. More formal presentations of such situations typically start out with a function of several variables (including all those that might sometimes be called "parameters") such as (n, k) \mapsto n^{\underline{k}} as the most fundamental object being considered, then defining functions with fewer variables from the main one by means of
currying. Sometimes it is useful to consider all functions with certain parameters as
a parametric family, i.e. as an
indexed family of functions. Examples from probability theory
are given further below.
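To make the parameter/variable distinction concrete, here is a minimal Python sketch (an illustration only; the function names are hypothetical) of the falling factorial as a function of both n and k, with currying used to fix k as a parameter:

```python
from functools import partial

def falling_factorial(n, k):
    """n^(underline k) = n * (n - 1) * ... * (n - k + 1) for non-negative integer k."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

# Fixing the parameter k = 3 yields a polynomial function of the single variable n:
falling_cube = partial(falling_factorial, k=3)

print(falling_factorial(5, 3))  # 60
print(falling_cube(5))          # 60, the same family member with k held fixed
```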
==Examples==
• In a section on frequently misused words in his book ''The Writer's Art'', James J. Kilpatrick quoted a letter from a correspondent, giving examples to illustrate the correct use of the word ''parameter'': W.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a
parameter is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal. [Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ...
but in a ... different manner. You have changed a parameter."
• A
parametric equaliser is an
audio filter that allows the
frequency of maximum cut or boost to be set by one control, and the size of the cut or boost by another. These settings, the frequency and level of the peak or trough, are two of the parameters of a frequency response curve, and in a two-control equaliser they completely describe the curve. More elaborate parametric equalisers may allow other parameters to be varied, such as skew. These parameters each describe some aspect of the response curve seen as a whole, over all frequencies. A
graphic equaliser provides individual level controls for various frequency bands, each of which acts only on that particular frequency band.
• If asked to imagine the graph of the relationship
y =
ax^2, one typically visualizes a range of values of
x, but only one value of
a. Of course a different value of
a can be used, generating a different relation between
x and
y. Thus
a is a parameter: it is less variable than the variable
x or
y, but it is not an explicit constant like the exponent 2. More precisely, changing the parameter
a gives a different (though related) problem, whereas the variations of the variables
x and
y (and their interrelation) are part of the problem itself.
• In calculating income based on wage and hours worked (income equals wage multiplied by hours worked), it is typically assumed that the number of hours worked is easily changed, but the wage is more static. This makes
wage a parameter,
hours worked an
independent variable, and
income a
dependent variable, as the short sketch below illustrates.
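Concretely (a minimal Python sketch; the wage and hours figures are made up for illustration):

```python
def income_function(wage):
    """Fixing the parameter (wage) returns income as a function of hours worked."""
    def income(hours):
        return wage * hours  # income: dependent variable; hours: independent variable
    return income

pay = income_function(wage=25.0)  # wage held fixed as a parameter
print(pay(40))                    # 1000.0 for 40 hours worked
```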
==Mathematical models==
In the context of a
mathematical model, such as a
probability distribution, the distinction between variables and parameters was described by Bard as follows:

We refer to the relations which supposedly describe a certain physical situation, as a
model. Typically, a model consists of one or more equations. The quantities appearing in the equations we classify into
variables and
parameters. The distinction between these is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. To formulate these relationships, however, one frequently introduces "constants" which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters.
==Analytic geometry==
In
analytic geometry, a
curve can be described as the image of a function whose argument, typically called the
parameter, lies in a
real interval. For example, the
unit circle can be specified in the following two ways:
• In implicit form, the curve is the locus of points in the
Cartesian plane that satisfy the relation x^2 + y^2 = 1.
• In parametric form, the curve is the image of the function t \mapsto (\cos t, \sin t) with parameter t \in [0, 2\pi). As a
parametric equation this can be written (x,y)=(\cos t,\sin t). The parameter in this equation would elsewhere in mathematics be called the
independent variable.
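As a small illustrative sketch in Python (assuming nothing beyond the formulas above), sampling the parametric form confirms that each point satisfies the implicit relation:

```python
import math

def unit_circle(t):
    """Parametric form: t is the parameter; (x, y) = (cos t, sin t) is the point."""
    return (math.cos(t), math.sin(t))

# Each sampled point satisfies the implicit form x^2 + y^2 = 1.
for i in range(4):
    t = i * math.pi / 2          # t in [0, 2*pi)
    x, y = unit_circle(t)
    print(f"t = {t:.3f}: x^2 + y^2 = {x * x + y * y:.6f}")
```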
==Mathematical analysis==
In
mathematical analysis, integrals dependent on a parameter are often considered. These are of the form

F(t) = \int_{x_0(t)}^{x_1(t)} f(x;t)\,dx.

In this formula,
t is the argument of the function
F, and on the right-hand side it is the
parameter on which the integral depends. When evaluating the integral,
t is held constant, and so it is considered to be a parameter. If we are interested in the value of
F for different values of
t, we then consider
t to be a variable. The quantity
x is a
dummy variable or
variable of integration (confusingly, also sometimes called a
parameter of integration).
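A rough numerical sketch in Python (the particular integrand and the limits x_0(t) = 0 and x_1(t) = t are assumptions chosen for illustration) shows t held constant inside the integral while x serves as the dummy variable:

```python
import math
from scipy.integrate import quad

def F(t):
    """F(t) = integral of f(x; t) = exp(-t*x) from x0(t) = 0 to x1(t) = t."""
    integrand = lambda x: math.exp(-t * x)   # t is held constant here: a parameter
    value, _error = quad(integrand, 0.0, t)  # x is the dummy variable of integration
    return value

for t in (0.5, 1.0, 2.0):
    print(t, F(t))  # varying t across calls: t now plays the role of a variable
```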
==Statistics and econometrics==
In
statistics and
econometrics, the probability framework described below still holds, but attention shifts to
estimating the parameters of a distribution based on observed data, or
testing hypotheses about them. In
frequentist estimation, parameters are considered "fixed but unknown", whereas in
Bayesian estimation they are treated as random variables, and their uncertainty is described by a distribution. In
estimation theory of statistics, "statistic" or
estimator refers to samples, whereas "parameter" or
estimand refers to populations, from which the samples are taken. A
statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the
population from which the sample was drawn. For example, the
sample mean (estimator), denoted \overline X, can be used as an estimate of the
mean parameter (estimand), denoted
μ, of the population from which the sample was drawn. Similarly, the
sample variance (estimator), denoted
S², can be used to estimate the
variance parameter (estimand), denoted
σ², of the population from which the sample was drawn. (Note that the sample standard deviation (
S) is not an unbiased estimate of the population standard deviation (
σ): see
Unbiased estimation of standard deviation.) It is possible to make statistical inferences without assuming a particular parametric family of
probability distributions. In that case, one speaks of
non-parametric statistics as opposed to the
parametric statistics just described. For example, a test based on
Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the
Pearson product-moment correlation coefficient are parametric tests, since the statistic is computed directly from the data values and thus estimates the parameter known as the
population correlation.
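To illustrate the estimator/estimand distinction, here is a minimal Python sketch (the synthetic sample and its population parameters μ = 5, σ = 2 are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=1000)  # population: mu = 5, sigma = 2

x_bar = sample.mean()    # sample mean: statistic/estimator of the estimand mu
s2 = sample.var(ddof=1)  # sample variance with Bessel's correction: estimates sigma^2

print(x_bar, s2)  # close to, but not exactly, the population parameters
```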
==Probability theory==
In
probability theory, one may describe the
distribution of a
random variable as belonging to a
family of
probability distributions, distinguished from each other by the values of a finite number of
parameters. For example, one talks about "a
Poisson distribution with mean value λ". The function defining the distribution (the
probability mass function) is

f(k;\lambda) = \frac{e^{-\lambda} \lambda^k}{k!}.

This example nicely illustrates the distinction between constants, parameters, and variables.
e is
Euler's number, a fundamental
mathematical constant. The parameter λ is the
mean number of observations of some phenomenon in question, a property characteristic of the system.
k is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing
k_1 occurrences, we plug it into the function to get f(k_1; \lambda). Without altering the system, we can take multiple samples, which will have a range of values of
k, but the system is always characterized by the same λ. For instance, suppose we have a
radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of
k, and if the sample behaves according to Poisson statistics, then each value of
k will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase.
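A brief Python sketch of this example (assuming only the probability mass function above, with λ = 5 as in the text):

```python
import math

def poisson_pmf(k, lam):
    """f(k; lambda): k is the variable, lambda the parameter, e a fixed constant."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 5  # five emissions per ten-minute window, on average
for k in range(10):
    # Expected proportion of measurements observing exactly k particles:
    print(k, round(poisson_pmf(k, lam), 4))
```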
Another common distribution is the normal distribution, which has as parameters the mean μ and the variance σ². In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution. It is possible to use the sequence of
moments (mean, mean square, ...) or
cumulants (mean, variance, ...) as parameters for a probability distribution: see
Statistical parameter.

==Computer programming==