== Optics ==

Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. If the x-coordinate is chosen as the parameter along the path, and y = f(x) along the path, then the optical length is given by A[f] = \int_{x_0}^{x_1} n(x,f(x)) \sqrt{1 + f'(x)^2} \, dx, where the refractive index n(x,y) depends upon the material. If we try f(x) = f_0(x) + \varepsilon f_1(x), then the first variation of A (the derivative of A with respect to \varepsilon) is \delta A[f_0,f_1] = \int_{x_0}^{x_1} \left[ \frac{n(x,f_0) f_0'(x) f_1'(x)}{\sqrt{1 + f_0'(x)^2}} + n_y(x,f_0) f_1 \sqrt{1 + f_0'(x)^2} \right] dx. After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation -\frac{d}{dx} \left[\frac{n(x,f_0) f_0'}{\sqrt{1 + f_0'^2}} \right] + n_y(x,f_0) \sqrt{1 + f_0'(x)^2} = 0. The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.
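When the refractive index depends only on y, so that n = n(y), the Euler–Lagrange equation above reduces to the second-order ODE f'' = (n_y/n)(1 + f'^2), and the quantity n(f)/\sqrt{1 + f'^2} (the index times the cosine of the ray's angle with the x axis) is conserved along the ray. The following sketch, with an illustrative linear index profile n(y) = 1 + 0.1y (not from the article), integrates the ray equation with a fourth-order Runge–Kutta step and checks that invariant:

```python
import math

def trace_ray(n, n_y, y0, yp0, x0=0.0, x1=1.0, steps=1000):
    """Integrate f'' = (n_y/n)(1 + f'^2), the Euler-Lagrange ray
    equation for a refractive index that depends only on y, via RK4."""
    h = (x1 - x0) / steps
    y, yp = y0, yp0

    def deriv(y, yp):
        return yp, (n_y(y) / n(y)) * (1.0 + yp * yp)

    for _ in range(steps):
        k1y, k1p = deriv(y, yp)
        k2y, k2p = deriv(y + 0.5 * h * k1y, yp + 0.5 * h * k1p)
        k3y, k3p = deriv(y + 0.5 * h * k2y, yp + 0.5 * h * k2p)
        k4y, k4p = deriv(y + h * k3y, yp + h * k3p)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        yp += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    return y, yp

# Hypothetical gradient-index medium: n grows linearly with height
n = lambda y: 1.0 + 0.1 * y
n_y = lambda y: 0.1

y1, yp1 = trace_ray(n, n_y, y0=0.0, yp0=0.5)
# Invariant n(y) / sqrt(1 + f'^2) should be the same at both ends
c0 = n(0.0) / math.sqrt(1 + 0.5 ** 2)
c1 = n(y1) / math.sqrt(1 + yp1 ** 2)
```

The conserved quantity is the continuous-medium analogue of Snell's law, which appears in the next section for a discontinuous index.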
=== Snell's law ===

There is a discontinuity of the refractive index when light enters or leaves a lens. Let n(x,y) = \begin{cases} n_{(-)} & \text{if} \quad x < 0, \\ n_{(+)} & \text{if} \quad x > 0, \end{cases} where n_{(-)} and n_{(+)} are constants. Then the Euler–Lagrange equation holds as before in the regions where x < 0 or x > 0, and in fact the path is a straight line there, since the refractive index is constant. At x = 0, f must be continuous, but f' may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form \delta A[f_0,f_1] = f_1(0)\left[ n_{(-)}\frac{f_0'(0^-)}{\sqrt{1 + f_0'(0^-)^2}} - n_{(+)}\frac{f_0'(0^+)}{\sqrt{1 + f_0'(0^+)^2}} \right]. The factor multiplying n_{(-)} is the sine of the angle of the incident ray with the x axis, and the factor multiplying n_{(+)} is the sine of the angle of the refracted ray with the x axis.
Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to vanishing of the first variation of the optical path length.
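As a small numerical sketch (the index values 1.0 and 1.5 are illustrative, roughly air and glass), the slope of the refracted ray can be computed from Snell's law, after which the bracket in the first variation above vanishes:

```python
import math

def refracted_slope(n_minus, n_plus, slope_in):
    """Slope of the refracted ray at x = 0 from Snell's law
    n- sin(theta-) = n+ sin(theta+), where sin(theta) = f'/sqrt(1 + f'^2)."""
    s_in = slope_in / math.sqrt(1 + slope_in ** 2)  # sine of incident angle
    s_out = n_minus * s_in / n_plus                 # Snell's law
    return s_out / math.sqrt(1 - s_out ** 2)        # back to a slope f'(0+)

# 45-degree incidence (slope 1) passing from n=1.0 into n=1.5
slope_out = refracted_slope(1.0, 1.5, 1.0)

# The bracket multiplying f_1(0) in the first variation:
bracket = (1.0 * 1.0 / math.sqrt(2)
           - 1.5 * slope_out / math.sqrt(1 + slope_out ** 2))
```

A vanishing `bracket` confirms that Snell's law makes the first variation zero for every admissible f_1.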
=== Fermat's principle in three dimensions ===

It is expedient to use vector notation: let X = (x_1,x_2,x_3), let t be a parameter, let X(t) be the parametric representation of a curve C, and let \dot X(t) be its tangent vector. The optical length of the curve is given by A[C] = \int_{t_0}^{t_1} n(X) \sqrt{\dot X \cdot \dot X} \, dt. Note that this integral is invariant with respect to changes in the parametric representation of C. The Euler–Lagrange equations for a minimizing curve have the symmetric form \frac{d}{dt} P = \sqrt{\dot X \cdot \dot X} \, \nabla n, where P = \frac{n(X) \dot X}{\sqrt{\dot X \cdot \dot X}}. It follows from the definition that P satisfies P \cdot P = n(X)^2. Therefore, the integral may also be written as A[C] = \int_{t_0}^{t_1} P \cdot \dot X \, dt. This form suggests that if we can find a function \psi whose gradient is given by P, then the integral A is given by the difference of \psi at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of \psi. In order to find such a function, we turn to the wave equation, which governs the propagation of light. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.
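The claimed invariance of A[C] under reparametrization can be checked numerically. The sketch below (with a hypothetical index profile n(X) = 1 + 0.2 x_3) evaluates the optical length of the same straight segment under two different parametrizations:

```python
import math

def optical_length(n, X, Xdot, t0, t1, steps=2000):
    """A[C] = integral of n(X) * sqrt(Xdot . Xdot) dt, midpoint rule."""
    h = (t1 - t0) / steps
    total = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * h
        v = Xdot(t)
        total += n(X(t)) * math.sqrt(sum(c * c for c in v)) * h
    return total

# Hypothetical index profile increasing with the third coordinate
n = lambda x: 1.0 + 0.2 * x[2]

# The same segment from (0,0,0) to (1,1,1), parametrized two ways
A1 = optical_length(n, lambda t: (t, t, t), lambda t: (1, 1, 1), 0.0, 1.0)
A2 = optical_length(n, lambda t: (t * t, t * t, t * t),
                    lambda t: (2 * t, 2 * t, 2 * t), 0.0, 1.0)
```

Both evaluations agree with the closed-form value 1.1\sqrt{3} for this profile, illustrating that A[C] depends only on the curve, not on how it is traversed.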
=== Connection with the wave equation ===

The wave equation for an inhomogeneous medium is u_{tt} = c^2 \nabla \cdot \nabla u, where c is the velocity, which generally depends upon X. Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy \varphi_t^2 = c(X)^2 \, \nabla \varphi \cdot \nabla \varphi. We may look for solutions in the form \varphi(t,X) = t - \psi(X). In that case, \psi satisfies \nabla \psi \cdot \nabla \psi = n^2, where n = 1/c. According to the theory of first-order partial differential equations, if P = \nabla \psi, then P satisfies \frac{dP}{ds} = n \, \nabla n along a system of curves (the light rays) that are given by \frac{dX}{ds} = P. These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification \frac{ds}{dt} = \frac{\sqrt{\dot X \cdot \dot X}}{n}. We conclude that the function \psi is the value of the minimizing integral A as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated first-order partial differential equation is equivalent to finding families of solutions of the variational problem. This is the essential content of Hamilton–Jacobi theory, which applies to more general variational problems.
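The characteristic system dX/ds = P, dP/ds = n \nabla n preserves the eikonal relation P \cdot P = n(X)^2 exactly, since \frac{d}{ds}(P \cdot P) = 2 P \cdot n \nabla n = \frac{d}{ds} n(X)^2. A short numerical check (with an illustrative index n(X) = 1 + 0.1 x_3, not from the article):

```python
import math

def trace_characteristics(n, grad_n, X0, P0, s1=1.0, steps=1000):
    """RK4 integration of the ray system dX/ds = P, dP/ds = n grad n."""
    h = s1 / steps
    X, P = list(X0), list(P0)

    def deriv(X, P):
        nv = n(X)
        g = grad_n(X)
        return P[:], [nv * gi for gi in g]

    for _ in range(steps):
        k1x, k1p = deriv(X, P)
        k2x, k2p = deriv([X[i] + 0.5 * h * k1x[i] for i in range(3)],
                         [P[i] + 0.5 * h * k1p[i] for i in range(3)])
        k3x, k3p = deriv([X[i] + 0.5 * h * k2x[i] for i in range(3)],
                         [P[i] + 0.5 * h * k2p[i] for i in range(3)])
        k4x, k4p = deriv([X[i] + h * k3x[i] for i in range(3)],
                         [P[i] + h * k3p[i] for i in range(3)])
        for i in range(3):
            X[i] += h * (k1x[i] + 2 * k2x[i] + 2 * k3x[i] + k4x[i]) / 6
            P[i] += h * (k1p[i] + 2 * k2p[i] + 2 * k3p[i] + k4p[i]) / 6
    return X, P

# Hypothetical index increasing with height x_3
n = lambda X: 1.0 + 0.1 * X[2]
grad_n = lambda X: (0.0, 0.0, 0.1)

X0 = [0.0, 0.0, 0.0]
P0 = [1.0, 0.0, 0.0]  # satisfies |P0| = n(X0) = 1, as the eikonal requires
X1, P1 = trace_characteristics(n, grad_n, X0, P0)
err = abs(sum(p * p for p in P1) - n(X1) ** 2)
```

Since the initial data satisfy P \cdot P = n^2, the relation holds along the whole ray to numerical precision.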
== Mechanics ==

In classical mechanics, the action, S, is defined as the time integral of the Lagrangian, L. The Lagrangian is the difference of energies, L = T - U, where T is the kinetic energy of a mechanical system and U its potential energy.
Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral S = \int_{t_0}^{t_1} L(x, \dot x, t) \, dt is stationary with respect to variations in the path x(t). The Euler–Lagrange equations for this system are known as Lagrange's equations: \frac{d}{dt} \frac{\partial L}{\partial \dot x} = \frac{\partial L}{\partial x}, and they are equivalent to Newton's equations of motion (for such systems). The conjugate momentum p is defined by p = \frac{\partial L}{\partial \dot x}. For example, if T = \frac{1}{2} m \dot x^2, then p = m \dot x.
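Hamilton's principle can be illustrated numerically for the harmonic oscillator with m = k = 1 (an illustrative choice; over a short enough time interval the stationary point is a minimum). The true path between fixed endpoints has a smaller action than any perturbed path that keeps the same endpoints:

```python
import math

def action(x, xdot, t0=0.0, t1=1.0, steps=2000, m=1.0, k=1.0):
    """S = integral of L dt, L = (1/2) m xdot^2 - (1/2) k x^2, midpoint rule."""
    h = (t1 - t0) / steps
    S = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * h
        S += (0.5 * m * xdot(t) ** 2 - 0.5 * k * x(t) ** 2) * h
    return S

# True solution of x'' = -x with x(0) = 0, x(1) = 1
x_true = lambda t: math.sin(t) / math.sin(1.0)
xd_true = lambda t: math.cos(t) / math.sin(1.0)
S0 = action(x_true, xd_true)

# Perturbed path: add eps * sin(pi t), which vanishes at both endpoints
eps = 0.1
x_pert = lambda t: x_true(t) + eps * math.sin(math.pi * t)
xd_pert = lambda t: xd_true(t) + eps * math.pi * math.cos(math.pi * t)
S1 = action(x_pert, xd_pert)
```

The first variation vanishes on the true path, so the action difference S1 - S0 is of second order in eps (here about eps^2 (\pi^2 - 1)/4) and positive.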
Hamiltonian mechanics results if the conjugate momenta are introduced in place of \dot x by a Legendre transformation of the Lagrangian L into the Hamiltonian H defined by H(x, p, t) = p \, \dot x - L(x, \dot x, t). The Hamiltonian is the total energy of the system: H = T + U. Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function of x. This function is a solution of the Hamilton–Jacobi equation: \frac{\partial \psi}{\partial t} + H\left(x, \frac{\partial \psi}{\partial x}, t\right) = 0.
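For the harmonic oscillator (m = k = 1, an illustrative choice), the Legendre transformation gives H = p^2/(2m) + k x^2/2, and Hamilton's equations are \dot x = \partial H/\partial p = p/m and \dot p = -\partial H/\partial x = -k x. The sketch below integrates them and checks that the total energy H = T + U is conserved:

```python
import math

def hamilton_flow(x0, p0, m=1.0, k=1.0, t1=10.0, steps=5000):
    """RK4 for Hamilton's equations xdot = p/m, pdot = -k x
    (harmonic oscillator, H = p^2/(2m) + k x^2/2)."""
    h = t1 / steps
    x, p = x0, p0

    def f(x, p):
        return p / m, -k * x

    for _ in range(steps):
        k1x, k1p = f(x, p)
        k2x, k2p = f(x + 0.5 * h * k1x, p + 0.5 * h * k1p)
        k3x, k3p = f(x + 0.5 * h * k2x, p + 0.5 * h * k2p)
        k4x, k4p = f(x + h * k3x, p + h * k3p)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        p += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    return x, p

x1, p1 = hamilton_flow(1.0, 0.0)   # released from rest at x = 1
H0 = 0.5 * 0.0 ** 2 + 0.5 * 1.0 ** 2  # energy at t = 0
H1 = 0.5 * p1 ** 2 + 0.5 * x1 ** 2    # energy at t = 10
```

The trajectory also matches the exact solution x(t) = cos t, so the first-order Hamiltonian system reproduces Newton's second-order equation of motion.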
== Further applications ==

Further applications of the calculus of variations include the following:

• The derivation of the catenary shape
• Solution to Newton's minimal resistance problem
• Solution to the brachistochrone problem
• Solution to the tautochrone problem
• Solution to isoperimetric problems
• Calculating geodesics
• Finding minimal surfaces and solving Plateau's problem
• Optimal control
• Analytical mechanics, or reformulations of Newton's laws of motion, most notably Lagrangian and Hamiltonian mechanics
• Geometric optics, especially Lagrangian and Hamiltonian optics
• Variational method (quantum mechanics), one way of finding approximations to the lowest energy eigenstate or ground state, and to some excited states
• Variational Bayesian methods, a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning
• Variational methods in general relativity, a family of techniques using the calculus of variations to solve problems in Einstein's general theory of relativity
• The finite element method, a variational method for finding numerical solutions to boundary-value problems in differential equations
• Total variation denoising, an image processing method for filtering high-variance or noisy signals

== Variations and sufficient condition for a minimum ==