Optimal control deals with the problem of finding a control law for a given system such that a certain
optimality criterion is achieved. A control problem includes a
cost functional that is a
function of state and control variables. An
optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using
Pontryagin's maximum principle (a
necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the
Hamilton–Jacobi–Bellman equation (a
sufficient condition). We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to
minimize the total traveling time? In this example, the term
control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The
system consists of both the car and the road, and the
optimality criterion is the minimization of the total traveling time. Control problems usually include ancillary
constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, and the speed cannot exceed posted limits. A proper cost functional is a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system.
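The traveling-time cost in this example can be made concrete with a short numerical sketch. The following Python snippet evaluates the time <math>T = \int_0^L \mathrm{d}x / v(x)</math> along a road for a given speed profile using the trapezoidal rule; the road length, grid size, and speed profiles are illustrative assumptions, not part of the example above.

```python
import numpy as np

def travel_time(speed_profile, road_length=1000.0, n=1000):
    """Traveling time T = integral_0^L dx / v(x) for a positive speed profile.

    `speed_profile` maps position x (metres, array) to speed (m/s).
    The road length and discretization are illustrative assumptions.
    """
    x = np.linspace(0.0, road_length, n + 1)
    f = 1.0 / speed_profile(x)                       # integrand dx / v(x)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoidal rule

# A constant 20 m/s over a 1000 m road takes 50 s.
t_const = travel_time(lambda x: np.full_like(x, 20.0))

# Slowing down mid-road (e.g., on a hill) increases the traveling time.
t_hill = travel_time(lambda x: 20.0 - 8.0 * np.exp(-((x - 500.0) / 100.0) ** 2))
```

A cost functional of this kind is what the driver's control law (accelerator and gear inputs) implicitly shapes, via the resulting speed profile.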
Constraints are often interchangeable with the cost functional. Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.

A more abstract framework goes as follows. Minimize the continuous-time cost functional

<math display="block">J[\textbf{x}(\cdot), \textbf{u}(\cdot), t_0, t_f] := E\,[\textbf{x}(t_0),t_0,\textbf{x}(t_f),t_f] + \int_{t_0}^{t_f} F\,[\textbf{x}(t),\textbf{u}(t),t] \,\mathrm{d}t</math>

subject to the first-order dynamic constraints (the state equation)

<math display="block">\dot{\textbf{x}}(t) = \textbf{f}\,[\textbf{x}(t), \textbf{u}(t), t],</math>

the algebraic path constraints

<math display="block">\textbf{h}\,[\textbf{x}(t),\textbf{u}(t),t] \leq \textbf{0},</math>

and the endpoint conditions

<math display="block">\textbf{e}\,[\textbf{x}(t_0),t_0,\textbf{x}(t_f),t_f] = \textbf{0},</math>

where <math>\textbf{x}(t)</math> is the state, <math>\textbf{u}(t)</math> is the control, <math>t</math> is the independent variable (generally speaking, time), <math>t_0</math> is the initial time, and <math>t_f</math> is the terminal time. The terms <math>E</math> and <math>F</math> are called the endpoint cost and the running cost, respectively. In the calculus of variations, <math>E</math> and <math>F</math> are referred to as the Mayer term and the Lagrangian, respectively. Note that the path constraints are in general inequality constraints and thus may not be active (i.e., equal to zero) at the optimal solution. Note also that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution <math>[\textbf{x}^*(t),\textbf{u}^*(t),t_0^*, t_f^*]</math> to the optimal control problem is only locally minimizing.

==Linear quadratic control==
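The linear quadratic (LQ) case specializes the general framework above to linear dynamics and a quadratic running cost, for which the optimal control is a linear state feedback. A minimal Python sketch of the finite-horizon, discrete-time version, solved by backward Riccati recursion (the double-integrator system matrices and weights are illustrative assumptions):

```python
import numpy as np

def lqr_riccati(A, B, Q, R, QN, N):
    """Finite-horizon discrete-time LQR via backward Riccati recursion.

    Minimizes sum_k (x'Qx + u'Ru) + x_N'QN x_N subject to x_{k+1} = A x_k + B u_k.
    Returns the feedback gains K_0, ..., K_{N-1}, with u_k = -K_k x_k.
    """
    P = QN
    gains = []
    for _ in range(N):
        # K = (R + B'PB)^{-1} B'PA, then the Riccati update for P
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reverse into forward-time order

# Illustrative system: a discretized double integrator (dt = 0.1 s).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q, R, QN = np.eye(2), np.array([[0.1]]), 10.0 * np.eye(2)

Ks = lqr_riccati(A, B, Q, R, QN, N=50)
x = np.array([[1.0], [0.0]])
for K in Ks:
    x = A @ x - B @ (K @ x)  # closed-loop rollout drives the state toward zero
```

Note that the optimal policy here is a time-varying gain schedule rather than an open-loop trajectory, which is characteristic of the LQ setting.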