The Fisher information is a way of measuring the amount of information that an observable
random variable X carries about an unknown
parameter \theta upon which the probability of X depends. Let f(X;\theta) be the
probability density function (or
probability mass function) for X conditioned on the value of \theta. It describes the probability that we observe a given outcome of X,
given a known value of \theta. If f is sharply peaked with respect to changes in \theta, it is easy to identify the "correct" value of \theta from the data; equivalently, the data X provides a lot of information about the parameter \theta. If f is flat and spread out, then it would take many samples of X to estimate the actual "true" value of \theta that
would be obtained using the entire population being sampled. This suggests studying some kind of variance with respect to \theta. Formally, the
partial derivative with respect to \theta of the
natural logarithm of the
likelihood function is called the
score. Under certain regularity conditions, if \theta is the true parameter (i.e. X is actually distributed as f(X;\theta)), it can be shown that the
expected value (the first
moment) of the score, evaluated at the true parameter value \theta, is 0: :\begin{align} \operatorname{E} \left[\left. \frac{\partial}{\partial\theta} \log f(X;\theta)\,\,\right|\,\,\theta \right] ={} &\int_{\mathbb{R}} \frac{\frac{\partial}{\partial\theta} f(x;\theta)}{f(x; \theta)} f(x;\theta)\,dx \\[6pt] ={} &\frac{\partial}{\partial\theta} \int_{\mathbb{R}} f(x; \theta)\,dx \\[6pt] ={} &\frac{\partial}{\partial\theta} 1 \\[6pt] ={} & 0. \end{align} The
Fisher information is defined to be the
variance of the score: : \mathcal{I}(\theta) = \operatorname{E} \left[\left. \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 \,\, \right| \,\, \theta \right] = \int_{\mathbb{R}} \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^2 f(x; \theta)\,dx. Note that \mathcal{I}(\theta) \geq 0. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable
X has been averaged out. If \log f(X;\theta) is twice differentiable with respect to
θ and certain additional regularity conditions hold, then the Fisher information may also be written as : \mathcal{I}(\theta) = - \operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\, \right| \,\, \theta \right]. To derive this, begin by taking the second derivative of \log f(X;\theta): :\frac{\partial^2}{\partial\theta^2} \log f(X;\theta) = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} - \left( \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X; \theta)} \right)^2 = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} - \left( \frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 Now take the expectation of each term on both sides: :\begin{align}\operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\, \right| \,\, \theta \right] &=\operatorname{E}\left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \,\, \right| \,\, \theta \right] - \operatorname{E}\left[\left. \left( \frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 \,\, \right| \,\, \theta \right] \\ \operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\, \right| \,\, \theta \right] &=\operatorname{E}\left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \,\, \right| \,\, \theta \right] - \mathcal{I}(\theta) \\ \mathcal{I}(\theta)&=-\operatorname{E}\left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\, \right| \,\, \theta \right]+\operatorname{E}\left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \,\, \right| \,\, \theta \right] \end{align} Next, we show that the last term is equal to 0: :\operatorname{E} \left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \,\, \right| \,\, \theta \right]= \int_{\mathbb{R}} f(x;\theta) \frac{\frac{\partial^2}{\partial\theta^2} f(x;\theta)}{f(x; \theta)}\,dx = \frac{\partial^2}{\partial\theta^2} \int_{\mathbb{R}} f(x;\theta)\,dx = \frac{\partial^2}{\partial\theta^2} (1) = 0. Therefore, :\mathcal{I}(\theta)=-\operatorname{E}\left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\, \right| \,\, \theta \right]. Thus, the Fisher information may be seen as the curvature of the
support curve (the graph of the log-likelihood). Near the
maximum likelihood estimate, low Fisher information indicates that the maximum appears to be "blunt", that is, there are many points in the neighborhood that provide a similar log-likelihood. Conversely, a high Fisher information indicates that the maximum is "sharp".
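As a quick numerical illustration of the identities above (a sketch, assuming only NumPy; the model and values are chosen purely for illustration), consider X drawn from a normal distribution with unknown mean \theta and known standard deviation \sigma, for which \frac{\partial}{\partial\theta} \log f(x;\theta) = (x-\theta)/\sigma^2 and \mathcal{I}(\theta) = 1/\sigma^2. A Monte Carlo estimate recovers the zero mean of the score and both expressions for the Fisher information:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 2.0, 1.5                        # true parameter and known standard deviation
x = rng.normal(theta, sigma, size=1_000_000)   # draws from the model at the true parameter

# Score of the N(theta, sigma^2) model: d/dtheta log f(x; theta) = (x - theta) / sigma^2
score = (x - theta) / sigma**2
# Second derivative of log f with respect to theta is the constant -1/sigma^2
second_derivative = np.full_like(x, -1.0 / sigma**2)

print("E[score]                  ~", score.mean())               # ~ 0
print("Var[score]                ~", score.var())                # ~ 1/sigma^2
print("-E[d^2 log f / d theta^2] ~", -second_derivative.mean())  # ~ 1/sigma^2
print("exact I(theta) = 1/sigma^2 =", 1.0 / sigma**2)
</syntaxhighlight>

All three empirical quantities agree with \mathcal{I}(\theta) = 1/\sigma^2 up to Monte Carlo error.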
== Regularity conditions ==

The regularity conditions are as follows:
• The partial derivative of f(X; θ) with respect to θ exists almost everywhere. (It can fail to exist on a null set, as long as this set does not depend on θ.)
• The integral of f(X; θ) can be differentiated under the integral sign with respect to θ.
• The support of f(X; θ) does not depend on θ.
If θ is a vector, then the regularity conditions must hold for every component of θ. It is easy to find an example of a density that does not satisfy the regularity conditions: the density of a Uniform(0, θ) variable fails to satisfy conditions 1 and 3. In this case, even though the Fisher information can be computed from the definition, it will not have the properties it is typically assumed to have.
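To see concretely how the failure matters, take the Uniform(0, θ) density above, f(x; θ) = 1/θ for 0 < x < θ. Wherever the derivatives exist, : \frac{\partial}{\partial\theta} \log f(x;\theta) = -\frac{1}{\theta} \qquad \text{and} \qquad \frac{\partial^2}{\partial\theta^2} \log f(x;\theta) = \frac{1}{\theta^2}, so the score has expectation -1/\theta rather than 0, and \operatorname{E} \left[\left. \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 \,\, \right| \,\, \theta \right] = \frac{1}{\theta^2} while -\operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\, \right| \,\, \theta \right] = -\frac{1}{\theta^2}: the two expressions for the Fisher information given above no longer agree.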
== In terms of likelihood ==

Because the likelihood of θ given X is always proportional to the probability f(X; θ), their logarithms necessarily differ by a constant that is independent of θ, and the derivatives of these logarithms with respect to θ are necessarily equal. Thus one can substitute the log-likelihood l(θ; X) in place of \log f(X;\theta) in the definitions of Fisher information.
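Explicitly, writing the likelihood as L(θ; X) = c(X) f(X; θ), where c(X) is a proportionality factor that does not depend on θ (introduced here only to make the constant explicit), gives : \log L(\theta; X) = \log c(X) + \log f(X;\theta), \qquad \frac{\partial}{\partial\theta} \log L(\theta; X) = \frac{\partial}{\partial\theta} \log f(X;\theta), so the score, and with it the Fisher information, is unchanged.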
== Samples of any size ==

The value X can represent a single sample drawn from a single distribution or can represent a collection of samples drawn from a collection of distributions. If there are n samples and the corresponding n distributions are statistically independent, then the Fisher information will necessarily be the sum of the single-sample Fisher information values, one for each single sample from its distribution. In particular, if the n distributions are independent and identically distributed, then the Fisher information will necessarily be n times the Fisher information of a single sample from the common distribution. In other words, the Fisher information of an i.i.d. sample of size n from a population is n times the Fisher information of a single observation from that population.
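This additivity follows directly from the definition of the Fisher information as the variance of the score: writing f_i for the density of the i-th sample, the joint log-density is the sum \sum_{i=1}^n \log f_i(X_i;\theta), so the score of the whole sample is a sum of independent, zero-mean single-sample scores, and : \mathcal{I}_{X_1,\ldots,X_n}(\theta) = \operatorname{Var}\left[\sum_{i=1}^n \frac{\partial}{\partial\theta} \log f_i(X_i;\theta)\right] = \sum_{i=1}^n \operatorname{Var}\left[\frac{\partial}{\partial\theta} \log f_i(X_i;\theta)\right] = \sum_{i=1}^n \mathcal{I}_{X_i}(\theta), which reduces to n\,\mathcal{I}_{X_1}(\theta) in the i.i.d. case.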
== Informal derivation of the Cramér–Rao bound ==

The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of θ. The following informal argument shows how the Fisher information enters that bound. We begin by considering an
unbiased estimator \hat\theta(X). Mathematically, "unbiased" means that : \operatorname{E}\left[ \left. \hat\theta(X) - \theta \,\, \right| \,\, \theta \right] = \int \left(\hat\theta(x) - \theta\right) \, f(x ;\theta) \, dx = 0 \text{ regardless of the value of } \theta. This expression is zero independent of
θ, so its partial derivative with respect to
θ must also be zero. By the
product rule, this partial derivative is also equal to : 0 = \frac{\partial}{\partial\theta} \int \left(\hat\theta(x) - \theta \right) \, f(x ;\theta) \,dx = \int \left(\hat\theta(x)-\theta\right) \frac{\partial f}{\partial\theta} \, dx - \int f \,dx. For each
θ, the likelihood function is a probability density function, and therefore \int f\,dx = 1. By using the
chain rule on the partial derivative of \log f and then dividing and multiplying by f(x;\theta), one can verify that :\frac{\partial f}{\partial\theta} = f \, \frac{\partial \log f}{\partial\theta}. Using these two facts in the above, we get : \int \left(\hat\theta-\theta\right) f \, \frac{\partial \log f}{\partial\theta} \, dx = 1. Factoring the integrand gives : \int \left(\left(\hat\theta-\theta\right) \sqrt{f} \right) \left( \sqrt{f} \, \frac{\partial \log f}{\partial\theta} \right) \, dx = 1. Squaring the expression in the integral, the
Cauchy–Schwarz inequality yields : 1 = \biggl( \int \left[\left(\hat\theta-\theta\right) \sqrt{f} \right] \cdot \left[ \sqrt{f} \, \frac{\partial \log f}{\partial\theta} \right] \, dx \biggr)^2 \le \left[ \int \left(\hat\theta - \theta\right)^2 f \, dx \right] \cdot \left[ \int \left( \frac{\partial \log f}{\partial\theta} \right)^2 f \, dx \right]. The second bracketed factor is defined to be the Fisher Information, while the first bracketed factor is the
mean-squared error (MSE) of the estimator \hat\theta. Since the estimator is unbiased, its MSE equals its variance. By rearranging, the inequality tells us that : \operatorname{Var}(\hat\theta) \geq \frac{1}{\mathcal{I}\left(\theta\right)}. In other words, the precision to which we can estimate
θ is fundamentally limited by the Fisher information of the likelihood function.
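As a quick numerical check of this bound (a sketch assuming NumPy; the normal model and the sample mean as estimator are chosen purely for illustration), take n i.i.d. draws from a normal distribution with unknown mean θ and known standard deviation σ. The sample mean is unbiased, the Fisher information of the sample is n/σ², and its simulated variance sits at the Cramér–Rao bound σ²/n, which is attained in this model:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n = 2.0, 1.5, 50        # true parameter, known std. dev., sample size
n_rep = 200_000                       # number of simulated datasets

# Each row is one dataset of size n; the sample mean is an unbiased estimator of theta.
samples = rng.normal(theta, sigma, size=(n_rep, n))
theta_hat = samples.mean(axis=1)

fisher_info = n / sigma**2            # Fisher information of the whole sample (n times 1/sigma^2)
print("Var(theta_hat)       ~", theta_hat.var())
print("Cramer-Rao bound 1/I =", 1.0 / fisher_info)   # = sigma^2 / n
</syntaxhighlight>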
Alternatively, the same conclusion can be obtained directly from the Cauchy–Schwarz inequality for random variables, |\operatorname{Cov}(A,B)|^2 \le \operatorname{Var}(A)\operatorname{Var}(B), applied to the random variables \hat\theta(X) and \partial_\theta\log f(X;\theta), and observing that for unbiased estimators we have : \operatorname{Cov}[\hat\theta(X),\partial_\theta \log f(X;\theta)] = \int \hat\theta(x)\, \partial_\theta f(x;\theta)\, dx = \partial_\theta \operatorname E[\hat\theta] = 1.

== Examples ==