The field of numerical analysis includes many sub-disciplines. Some of the major ones are:
===Computing values of functions===
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is to use the
Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control
round-off errors arising from the use of
floating-point arithmetic.
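A minimal sketch of Horner's scheme (the polynomial 2x^2 + 5 and the evaluation point are arbitrary choices for illustration):

<syntaxhighlight lang="python">
def horner(coeffs, x):
    """Evaluate a polynomial at x using Horner's scheme.

    coeffs lists the coefficients from the highest degree down to the
    constant term, e.g. [2, 0, 5] represents 2*x**2 + 0*x + 5.
    """
    result = 0.0
    for a in coeffs:
        # One multiplication and one addition per coefficient.
        result = result * x + a
    return result

# Evaluate 2x^2 + 5 at x = 3; a naive evaluation would compute x**2 separately.
print(horner([2, 0, 5], 3.0))   # 23.0
</syntaxhighlight>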
===Interpolation, extrapolation, and regression===
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.
Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be found. The
least-squares method is one way to achieve this.
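An illustrative sketch of least-squares regression (the straight-line model and the noisy sample data are assumed for the example): the fitted coefficients minimize the sum of squared residuals.

<syntaxhighlight lang="python">
import numpy as np

# Noisy measurements of an unknown function at a few points (illustrative data,
# roughly y = 1 + 2x plus measurement error).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Fit a straight line y ~ c0 + c1*x by least squares: build the design matrix
# and minimize ||A c - y||_2.
A = np.column_stack([np.ones_like(x), x])
(c0, c1), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"fitted line: y = {c0:.3f} + {c1:.3f} x")
</syntaxhighlight>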
===Solving equations and systems of equations===
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x+5=3 is linear while 2x^2+5=3 is not. Much effort has been put into the development of methods for solving
systems of linear equations. Standard direct methods, i.e., methods that use some
matrix decomposition, are
Gaussian elimination,
LU decomposition,
Cholesky decomposition for
symmetric (or
hermitian) and
positive-definite matrices, and
QR decomposition for non-square matrices. Iterative methods such as the
Jacobi method,
Gauss–Seidel method,
successive over-relaxation and
conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a
matrix splitting.
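A minimal sketch of one such iterative method, the Jacobi iteration obtained from splitting A into its diagonal and off-diagonal parts (the small, diagonally dominant test system is an arbitrary example):

<syntaxhighlight lang="python">
import numpy as np

def jacobi(A, b, iterations=50):
    """Solve A x = b approximately with the Jacobi iteration.

    The splitting A = D + R (diagonal plus remainder) gives the update
    x_{k+1} = D^{-1} (b - R x_k), which converges, e.g., when A is
    strictly diagonally dominant.
    """
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b))            # iterative approximation
print(np.linalg.solve(A, b))   # direct solve, for comparison
</syntaxhighlight>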
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is
differentiable and the derivative is known, then Newton's method is a popular choice.
Linearization is another technique for solving nonlinear equations.
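A minimal sketch of Newton's method for a single equation (the test equation x^2 - 2 = 0 and the starting guess are arbitrary choices for illustration):

<syntaxhighlight lang="python">
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0 using Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # Newton step: f(x) / f'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of x^2 - 2 = 0, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # ~1.4142135623730951
</syntaxhighlight>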
===Solving eigenvalue or singular value problems===
Several important problems can be phrased in terms of
eigenvalue decompositions or
singular value decompositions. For instance, the
spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called
principal component analysis.
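An illustrative sketch of the idea behind SVD-based compression (the random matrix standing in for an image and the rank cutoff are assumptions made for the example): only the largest singular values and their singular vectors are kept.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))       # stand-in for an image or data matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 10                                        # number of singular values kept
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation in the 2-norm

# Storage drops from 100*80 numbers to k*(100 + 80 + 1), at the cost of the
# approximation error; that error (in the 2-norm) equals the next singular value.
print(np.linalg.norm(A - A_k, 2), s[k])
</syntaxhighlight>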
===Optimization===
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some
constraints. The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance,
linear programming deals with the case in which both the objective function and the constraints are linear. A famous method in linear programming is the
simplex method. The method of
Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
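A sketch of a small linear program solved numerically (the problem data are invented for illustration, and SciPy's linprog routine is assumed to be available; it minimizes, so the objective is negated):

<syntaxhighlight lang="python">
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
c = [-3.0, -2.0]                # negated, since linprog minimizes
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)    # optimal point and maximal objective value
</syntaxhighlight>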
===Evaluating integrals===
Numerical integration, in some instances also known as numerical
quadrature, asks for the value of a definite
integral. Popular methods use one of the
Newton–Cotes formulas (like the midpoint rule or
Simpson's rule) or
Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use
Monte Carlo or
quasi-Monte Carlo methods (see
Monte Carlo integration), or, in modestly large dimensions, the method of
sparse grids.
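An illustrative sketch comparing a Newton–Cotes rule (composite Simpson's rule) with a plain Monte Carlo estimate (the integrand and interval are arbitrary choices):

<syntaxhighlight lang="python">
import math
import random

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

def monte_carlo(f, a, b, samples=100_000):
    """Plain Monte Carlo estimate; its cost does not grow with the dimension."""
    acc = sum(f(random.uniform(a, b)) for _ in range(samples))
    return (b - a) * acc / samples

# Illustrative integral: the integral of sin(x) over [0, pi] equals 2.
print(simpson(math.sin, 0.0, math.pi))      # ~2.000000000
print(monte_carlo(math.sin, 0.0, math.pi))  # ~2.0, with statistical error
</syntaxhighlight>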
===Differential equations===
Numerical analysis is also concerned with computing (in an approximate way) the solution of
differential equations, both
ordinary differential equations and
partial differential equations. Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a
finite element method, a
finite difference method, or (particularly in engineering) a
finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. The discretization reduces the problem to the solution of an algebraic equation.
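A minimal sketch of this reduction for a one-dimensional model problem (the boundary value problem, grid size, and right-hand side are assumptions chosen so that the exact solution is known): a central finite-difference discretization of -u'' = f turns the differential equation into a tridiagonal algebraic system.

<syntaxhighlight lang="python">
import numpy as np

# Two-point boundary value problem -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0,
# discretized with central differences on a uniform grid.
n = 50                                    # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)          # chosen so the exact solution is sin(pi x)

# The discretization yields the tridiagonal system A u = h^2 f.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u = np.linalg.solve(A, h**2 * f)

print(np.max(np.abs(u - np.sin(np.pi * x))))   # discretization error, O(h^2)
</syntaxhighlight>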
==Software==