== Optimization ==
If f is a differentiable function on \R (or an open interval) and x is a local maximum or a local minimum of f, then the derivative of f at x is zero. Points where f'(x) = 0 are called critical points or stationary points (and the value of f at x is called a critical value). If f is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points.

If f is twice differentiable, then conversely, a critical point x of f can be analysed by considering the second derivative of f at x:
* if it is positive, x is a local minimum;
* if it is negative, x is a local maximum;
* if it is zero, then x could be a local minimum, a local maximum, or neither. (For example, f(x) = x^3 has a critical point at x = 0, but it has neither a maximum nor a minimum there, whereas f(x) = \pm x^4 has a critical point at x = 0 and a minimum and a maximum, respectively, there.)

This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of f' on each side of the critical point.

Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in
optimization. By the
extreme value theorem, a continuous function on a
closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints. This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.
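These tests can be carried out mechanically with a computer algebra system. The following sketch uses SymPy on the illustrative function f(x) = x^3 - 3x (an assumed example, not taken from the text) to find the critical points and classify each one with the second derivative test:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x          # illustrative function, assumed for this example

f1 = sp.diff(f, x)                 # f'(x) = 3x^2 - 3
critical_points = sp.solve(f1, x)  # solve f'(x) = 0: x = -1 and x = 1

for c in critical_points:
    f2 = sp.diff(f, x, 2).subs(x, c)   # second derivative at the critical point
    if f2 > 0:
        print(c, "local minimum")
    elif f2 < 0:
        print(c, "local maximum")
    else:
        print(c, "test inconclusive")  # f''(c) = 0: min, max, or neither
</syntaxhighlight>

Here the test reports a local maximum at x = -1 and a local minimum at x = 1.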
In higher dimensions, a critical point of a
scalar valued function is a point at which the
gradient is zero. The
second derivative test can still be used to analyse critical points by considering the
eigenvalues of the
Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "
saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive.
== Calculus of variations ==
One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the
shortest path is not immediately clear. These paths are called
geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a
minimal surface and it, too, can be found using the calculus of variations.
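As a small computational sketch of the calculus of variations, the claim above that the shortest curve in a plane is a line can be verified with SymPy's euler_equations; the arc-length integrand is the standard one for plane curves, and the setup is an illustration rather than anything from the text:

<syntaxhighlight lang="python">
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

L = sp.sqrt(1 + y(x).diff(x)**2)     # arc-length integrand for a plane curve
eq = euler_equations(L, y(x), x)[0]  # the Euler-Lagrange equation
print(eq)  # equivalent to y''(x) = 0, since its denominator is never zero

# y'' = 0 integrates to y(x) = C1 + C2*x: a straight line, as claimed.
print(sp.dsolve(y(x).diff(x, 2), y(x)))
</syntaxhighlight>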
== Physics ==
Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called
differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the
time derivative — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in
Newtonian physics:
* velocity is the derivative (with respect to time) of an object's displacement (distance from the original position);
* acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position.

For example, if an object's position on a line is given by
:x(t) = -16t^2 + 16t + 32, \,\!
then the object's velocity is
:\dot x(t) = x'(t) = -32t + 16, \,\!
and the object's acceleration is
:\ddot x(t) = x''(t) = -32, \,\!
which is constant.
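The same computation can be checked symbolically; this short sketch differentiates the position function from the example above with SymPy:

<syntaxhighlight lang="python">
import sympy as sp

t = sp.symbols('t')
x = -16*t**2 + 16*t + 32   # position, from the example above

v = sp.diff(x, t)          # velocity: -32*t + 16
a = sp.diff(v, t)          # acceleration: -32, constant
print(v, a)
</syntaxhighlight>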
== Differential equations ==
A differential equation is a relation between a collection of functions and their derivatives. An
ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A
partial differential equation is a differential equation that relates functions of more than one variable to their
partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example,
Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation
:F(t) = m\frac{d^2x}{dt^2}.
The
heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation
:\frac{\partial u}{\partial t} = \alpha\frac{\partial^2 u}{\partial x^2}.
Here u(x, t) is the temperature of the rod at position x and time t, and \alpha is a constant that depends on how fast heat diffuses through the rod.
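As a rough numerical sketch (not from the text), the heat equation can be simulated with an explicit finite-difference scheme; the rod length, the value of \alpha, the initial temperature profile, and the fixed zero-temperature ends below are all illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

# Minimal explicit finite-difference sketch of u_t = alpha * u_xx on a rod
# of length 1 with both ends held at temperature 0 (assumed setup).
alpha = 1.0
n = 50                       # number of interior grid points
dx = 1.0 / (n + 1)
dt = 0.4 * dx**2 / alpha     # small enough for stability: dt <= dx^2 / (2*alpha)

x = np.linspace(0.0, 1.0, n + 2)
u = np.sin(np.pi * x)        # assumed initial temperature profile

for _ in range(1000):
    # central second difference approximates u_xx at the interior points
    u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * alpha * u_xx    # ends stay fixed at temperature 0

print(u.max())   # the peak temperature decays over time
</syntaxhighlight>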
== Mean value theorem ==
[[File:Mvt2.svg|thumb|The mean value theorem: For each differentiable function f:[a,b]\to\R with a<b there is a c\in(a,b) with f'(c) = \tfrac{f(b) - f(a)}{b - a}.]]
The mean value theorem gives a relationship between values of the derivative and values of the original function. If f is a real-valued function and a and b are numbers with a<b, then the mean value theorem says that under mild hypotheses, the slope between the two points (a, f(a)) and (b, f(b)) is equal to the slope of the tangent line to f at some point c between a and b. In other words,
:f'(c) = \frac{f(b) - f(a)}{b - a}.
In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that f has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of f must equal the slope of one of the tangent lines of f. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
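As a worked sketch, the point c promised by the theorem can be computed symbolically; the function f(x) = x^3 and the interval [0, 2] below are assumed for illustration:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')
f = x**3                 # illustrative function
a, b = 0, 2              # illustrative interval

slope = (f.subs(x, b) - f.subs(x, a)) / (b - a)  # slope of the secant: 4
c = sp.solve(sp.Eq(sp.diff(f, x), slope), x)     # solve f'(c) = 4, i.e. 3c^2 = 4
print([ci for ci in c if a < ci < b])            # c = 2*sqrt(3)/3, inside (0, 2)
</syntaxhighlight>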
== Taylor polynomials and Taylor series ==
The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function f(x) at the point x_0 is a linear polynomial a + b(x - x_0), and it may be possible to get a better approximation by considering a quadratic polynomial a + b(x - x_0) + c(x - x_0)^2. Still better might be a cubic polynomial a + b(x - x_0) + c(x - x_0)^2 + d(x - x_0)^3, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients a, b, c, and d that makes the approximation as good as possible.

In the neighbourhood of x_0, for a the best possible choice is always f(x_0), and for b the best possible choice is always f'(x_0). For c, d, and higher-degree coefficients, these coefficients are determined by higher derivatives of f: c should always be f''(x_0)/2, and d should always be f'''(x_0)/3!. Using these coefficients gives the Taylor polynomial of f. The Taylor polynomial of degree d is the polynomial of degree d which best approximates f, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If f is a polynomial of degree less than or equal to d, then the Taylor polynomial of degree d equals f.

The limit of the Taylor polynomials is an infinite series called the
Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called
analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist
smooth functions which are also not analytic.
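As an illustration, SymPy can produce Taylor polynomials and show the approximation improving with degree; expanding exp(x) about x_0 = 0 is an assumed example, not one from the text:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)   # illustrative function

# Taylor polynomial of degree 3 about x_0 = 0: 1 + x + x**2/2 + x**3/6,
# with coefficients f(0), f'(0), f''(0)/2, f'''(0)/3! as described above.
p3 = sp.series(f, x, 0, 4).removeO()
print(p3)

# The error at x = 1 shrinks as the degree grows.
for n in (1, 2, 3, 5):
    pn = sp.series(f, x, 0, n + 1).removeO()
    print(n, abs(float((f - pn).subs(x, 1))))
</syntaxhighlight>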
== Implicit function theorem ==
Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if f(x, y) = x^2 + y^2 - 1, then the circle is the set of all pairs (x, y) such that f(x, y) = 0. This set is called the zero set of f, and is not the same as the graph of f, which is a
paraboloid. The implicit function theorem converts relations such as f(x, y) = 0 into functions. It states that if f is continuously differentiable, then around most points, the zero set of f looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of f. The circle, for instance, can be pasted together from the graphs of the two functions \pm\sqrt{1 - x^2}. In a neighborhood of every point on the circle except (-1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet (-1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.)
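The circle example can be checked symbolically: asking SymPy to solve f(x, y) = 0 for y recovers exactly the two graph pieces described above.

<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1   # the circle relation from the text

branches = sp.solve(sp.Eq(f, 0), y)
print(branches)       # [-sqrt(1 - x**2), sqrt(1 - x**2)]
</syntaxhighlight>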
The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.

== See also ==