• Convex programming studies the case in which the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. It can be viewed as a particular case of nonlinear programming or as a generalization of linear or convex quadratic programming.
• Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron, or a polytope if it is bounded.
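As a sketch of the form, the following solves a small LP with SciPy's `linprog` routine (assuming SciPy is available; the coefficients are made up for illustration):

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimizes by convention, so the objective is negated.
c = [-1.0, -2.0]                 # coefficients of -(x + 2y)
A_ub = [[1.0, 1.0], [1.0, 0.0]]  # left-hand sides of the <= constraints
b_ub = [4.0, 2.0]                # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds give x, y >= 0
# The feasible set here is a bounded polytope; the optimum sits at a vertex.
```

The feasible polytope has vertices (0, 0), (2, 0), (2, 2) and (0, 4); the solver returns the maximizing vertex (0, 4).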
• Second-order cone programming (SOCP) is a convex program and includes certain types of quadratic programs.
• Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.
• Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.
• Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials, and equality constraints expressed as monomials, can be transformed into a convex program.
• Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming.
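To illustrate the integrality restriction, the sketch below brute-forces a tiny integer program over a small bounding box of candidates (the numbers are made up; real solvers use branch-and-bound or cutting planes rather than enumeration):

```python
from itertools import product

# Maximize 3x + 4y subject to 2x + y <= 7, x + 3y <= 9, x and y nonnegative integers.
# Note: the LP relaxation attains 16 at the fractional point (2.4, 2.2),
# so simply rounding the relaxed solution is not enough.
best_val, best_pt = None, None
for x, y in product(range(8), range(10)):  # small bounding box of candidates
    if 2 * x + y <= 7 and x + 3 * y <= 9:
        val = 3 * x + 4 * y
        if best_val is None or val > best_val:
            best_val, best_pt = val, (x, y)
# The integer optimum is 14, attained at (2, 2).
```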
• Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming.
• Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem.
• Nonlinear programming studies the general case in which the objective function, the constraints, or both contain nonlinear parts. This may or may not be a convex program, and in general whether the program is convex affects the difficulty of solving it.
• Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.
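A minimal sketch of the idea, with hypothetical newsvendor-style numbers: an order quantity must be chosen before the random demand is known, so the objective is the cost averaged over demand scenarios:

```python
# Order q units at cost 1 each; each unit of unmet demand costs 2 (illustrative numbers).
# Demand is a random variable with three equally likely scenarios.
scenarios = [2, 5, 8]

def expected_cost(q):
    # average over scenarios of purchase cost plus shortage penalty
    return sum(q * 1 + max(d - q, 0) * 2 for d in scenarios) / len(scenarios)

best_q = min(range(0, 11), key=expected_cost)
# Ordering the expected demand exactly (q = 5) balances purchase and shortage costs.
```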
• Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. It aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set.
• Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
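For example, a tiny travelling-salesman instance (with a made-up distance matrix) can be solved by enumerating the discrete set of feasible tours:

```python
from itertools import permutations

# Symmetric distance matrix between 4 cities (illustrative numbers).
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]

def tour_length(order):
    # total length of the closed tour 0 -> ... -> 0
    stops = (0,) + order + (0,)
    return sum(D[a][b] for a, b in zip(stops, stops[1:]))

# Enumerate every tour starting and ending at city 0.
best = min(permutations([1, 2, 3]), key=tour_length)
```

Enumeration is only viable for toy sizes; the point is that the feasible set is a finite, discrete collection of tours rather than a continuum.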
• Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process.
• Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
• Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Heuristics usually do not guarantee that an optimal solution will be found, but they are used to find approximate solutions to many complicated optimization problems.
• Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).
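A minimal backtracking sketch of constraint satisfaction, on a hypothetical map-colouring instance: there is no objective to optimize, only constraints to satisfy.

```python
# Colour a small graph so that adjacent regions differ (a constant-objective problem).
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
colors = ["red", "green", "blue"]

def solve(assignment):
    # depth-first search with backtracking over partial assignments
    if len(assignment) == len(neighbors):
        return assignment
    region = next(r for r in neighbors if r not in assignment)
    for c in colors:
        # constraint check: no neighbour may already carry this colour
        if all(assignment.get(n) != c for n in neighbors[region]):
            result = solve({**assignment, region: c})
            if result:
                return result
    return None  # dead end: backtrack

solution = solve({})
```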
• Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints.
• Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.
• Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy, exploiting a suitable physically meaningful coarse or surrogate model.

In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):
• Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area.
• Optimal control theory is a generalization of the calculus of variations which introduces control policies.
• Dynamic programming is an approach to solving stochastic optimization problems with randomness and unknown model parameters. It studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems; the equation that describes the relationship between these subproblems is called the Bellman equation.
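The subproblem structure can be sketched with a shortest-path recursion on a small graph (edge weights are made up): the function below is a direct transcription of the Bellman equation, with memoization so that each subproblem is solved only once.

```python
from functools import lru_cache

# Edge weights of a small directed acyclic graph (illustrative numbers).
edges = {"s": {"a": 1, "b": 4}, "a": {"b": 2, "t": 6}, "b": {"t": 1}, "t": {}}

@lru_cache(maxsize=None)  # memoize: each subproblem is solved once
def cost_to_go(node):
    # Bellman equation: V(n) = min over successors m of w(n, m) + V(m)
    if node == "t":
        return 0
    return min(w + cost_to_go(m) for m, w in edges[node].items())
```

The optimal cost from `s` combines the optimal costs of the smaller subproblems at `a` and `b`; here the path s → a → b → t of length 4 wins.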
• Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities.
=== Multi-objective optimization ===

Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be made. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created by plotting weight against stiffness of the best designs is known as the Pareto frontier. A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient", or in the Pareto set) if it is not dominated by any other design: if it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given, but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been further generalized into vector optimization problems, where the (partial) ordering is no longer given by the Pareto ordering.
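The dominance test can be sketched directly. Here each hypothetical design is scored on two criteria, both to be minimized (the numbers are illustrative):

```python
# Each design is scored as (weight, flexibility), both to be minimized.
designs = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 2.0), (5.0, 3.0)]

def dominates(p, q):
    # p dominates q if it is no worse in every criterion and better in at least one
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

# The Pareto set keeps exactly the designs dominated by no other design.
pareto_set = [d for d in designs if not any(dominates(o, d) for o in designs)]
```

Design (3.0, 8.0) drops out because (2.0, 7.0) is better on both criteria, and (5.0, 3.0) drops out because of (4.0, 2.0); the three survivors form the Pareto set, each trading one criterion against the other.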
=== Multi-modal or global optimization ===

Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value), or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques, because of their iterative approach, do not perform satisfactorily when used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing.

== Classification of critical points and extrema ==