From the late 1930s, an array of new mathematical tools, including differential calculus and differential equations,
convex sets, and
graph theory, was deployed to advance economic theory, much as new mathematical methods had earlier been applied to physics. The process was later described as moving from
mechanics to
axiomatics.
===Differential calculus===
Vilfredo Pareto analyzed
microeconomics by treating decisions by economic actors as attempts to change a given allotment of goods to another, more preferred allotment. Sets of allocations could then be treated as
Pareto efficient (Pareto optimal is an equivalent term) when no further exchange among actors could make at least one individual better off without making any other individual worse off. Pareto's proof is commonly conflated with Walrasian equilibrium or informally ascribed to
Adam Smith's
Invisible hand hypothesis. Rather, Pareto's statement was the first formal assertion of what would be known as the
first fundamental theorem of welfare economics. In the landmark treatise
Foundations of Economic Analysis (1947),
Paul Samuelson identified a common paradigm and mathematical structure across multiple fields in the subject, building on previous work by
Alfred Marshall.
Foundations took mathematical concepts from physics and applied them to economic problems. This broad view (for example, comparing
Le Chatelier's principle to
tâtonnement) drives the fundamental premise of mathematical economics: systems of economic actors may be modeled and their behavior described much like any other system. This approach followed on the work of the marginalists in the previous century and extended it significantly. Samuelson approached the problems of applying individual utility maximization to aggregate groups using
comparative statics, which compares two
equilibrium states after an
exogenous change in a variable. This and other methods in the book provided the foundation for mathematical economics in the 20th century.
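A minimal sketch of comparative statics (illustrative only, not drawn from ''Foundations'' itself) is given below: a linear supply-and-demand market is solved before and after an exogenous per-unit tax, and the derivative of the equilibrium price with respect to the tax is the comparative-statics result. The functional forms and parameter values are assumptions chosen for the example.

<syntaxhighlight lang="python">
# Comparative statics: how does the equilibrium price respond to a small
# exogenous change (here, a per-unit tax t levied on sellers)?
# Linear forms and all numbers are illustrative assumptions.
import sympy as sp

p, t = sp.symbols('p t', positive=True)
a, b, c, d = 100, 2, 10, 3            # demand: q_d = a - b*p; supply: q_s = c + d*(p - t)

q_d = a - b * p
q_s = c + d * (p - t)

p_star = sp.solve(sp.Eq(q_d, q_s), p)[0]   # equilibrium price as a function of t
dp_dt = sp.diff(p_star, t)                 # comparative-statics derivative

print(p_star)   # 18 + 3*t/5  -> the equilibrium price rises with the tax
print(dp_dt)    # 3/5         -> sellers pass 60% of the tax on to buyers
</syntaxhighlight>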
===Linear models===
Restricted models of general equilibrium were formulated by
John von Neumann in 1937. Unlike earlier versions, the models of von Neumann had inequality constraints. For his model of an expanding economy, von Neumann proved the existence and uniqueness of an equilibrium using his generalization of
Brouwer's fixed point theorem. Von Neumann's model of an expanding economy considered the
matrix pencil <math>\mathbf{A} - \lambda \mathbf{B}</math> with nonnegative matrices <math>\mathbf{A}</math> and <math>\mathbf{B}</math>; von Neumann sought probability vectors <math>\vec{p}</math> and <math>\vec{q}</math> and a positive number <math>\lambda</math> that would solve the complementarity equation
:<math>\vec{p}^{\,\mathrm{T}} (\mathbf{A} - \lambda \mathbf{B}) \vec{q} = 0,</math>
along with two systems of inequalities expressing economic efficiency. In this model, the (transposed) probability vector <math>\vec{p}</math> represents the prices of the goods, while the probability vector <math>\vec{q}</math> represents the "intensity" at which the production process would run. The unique solution <math>\lambda</math> represents the
rate of growth of the economy, which equals the
interest rate. Proving the existence of a positive growth rate and proving that the growth rate equals the interest rate were remarkable achievements, even for von Neumann. Von Neumann's results have been viewed as a special case of
linear programming, where von Neumann's model uses only nonnegative matrices. The study of von Neumann's model of an expanding economy continues to interest mathematical economists with interests in computational economics.
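The model lends itself to a small computational sketch. Assuming, for illustration only, that <math>\mathbf{A}</math> records the outputs and <math>\mathbf{B}</math> the inputs of each process (so that an intensity vector <math>\vec{q} \ge 0</math> sustains growth factor <math>\lambda</math> whenever <math>(\mathbf{A} - \lambda \mathbf{B})\vec{q} \ge 0</math>), the maximal growth factor can be found by bisecting on <math>\lambda</math> and checking feasibility with a linear program; the 2×2 data below are invented.

<syntaxhighlight lang="python">
# Sketch: maximal balanced growth factor in a tiny von Neumann-type model.
# Assumed convention for this sketch: A[i, j] = output of good i from process j,
#                                     B[i, j] = input  of good i to   process j.
# Growth factor lam is sustainable if some intensity vector q >= 0, sum(q) = 1,
# satisfies (A - lam*B) q >= 0, i.e. every good is produced at least lam times
# as fast as it is used up.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, 1.0],          # outputs (invented numbers)
              [1.0, 2.0]])
B = np.array([[1.0, 1.0],          # inputs (invented numbers)
              [1.0, 1.0]])

def sustainable(lam):
    # Feasibility LP: is there a probability vector q with (lam*B - A) q <= 0?
    res = linprog(c=np.zeros(2),
                  A_ub=lam * B - A, b_ub=np.zeros(2),
                  A_eq=np.ones((1, 2)), b_eq=[1.0],
                  bounds=[(0, None)] * 2, method="highs")
    return res.status == 0         # 0 = a feasible intensity vector exists

lo, hi = 0.0, 10.0                 # bracket for bisection on the growth factor
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sustainable(mid) else (lo, mid)

print(f"maximal growth factor ~ {lo:.4f}")   # ~1.6667 for this invented data
</syntaxhighlight>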
===Input-output economics===
In 1936, the Russian-born economist
Wassily Leontief built his model of
input-output analysis from the 'material balance' tables constructed by Soviet economists, which themselves followed earlier work by the
physiocrats. With his model, which described a system of production and demand processes, Leontief described how changes in demand in one
economic sector would influence production in another. In practice, Leontief estimated the coefficients of his simple models to address economically interesting questions. In
production economics, "Leontief technologies" produce outputs using constant proportions of inputs, regardless of the price of inputs, reducing the value of Leontief models for understanding economies but allowing their parameters to be estimated relatively easily. In contrast, the von Neumann model of an expanding economy allows for
choice of techniques, but the coefficients must be estimated for each technology.
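The core accounting of an input-output model can be sketched numerically. With a technical-coefficients matrix <math>\mathbf{A}</math> and final-demand vector <math>d</math>, gross outputs <math>x</math> satisfy <math>x = \mathbf{A}x + d</math>, so <math>x = (\mathbf{I} - \mathbf{A})^{-1} d</math>; the two-sector numbers below are invented for illustration only.

<syntaxhighlight lang="python">
# Sketch of Leontief input-output accounting with an invented 2-sector economy.
# A[i, j] = units of sector i's output needed to produce one unit of sector j's output.
import numpy as np

A = np.array([[0.2, 0.3],    # technical coefficients (illustrative)
              [0.4, 0.1]])
d = np.array([100.0, 50.0])  # final demand by sector (illustrative)

# Gross output x must cover intermediate use A @ x plus final demand d:
#   x = A @ x + d  =>  (I - A) x = d
x = np.linalg.solve(np.eye(2) - A, d)
print(x)                     # gross outputs required to meet final demand

# How do outputs respond if final demand for sector 1 rises by 10 units?
x_new = np.linalg.solve(np.eye(2) - A, d + np.array([10.0, 0.0]))
print(x_new - x)             # change in gross outputs across both sectors
</syntaxhighlight>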
===Mathematical optimization===
[Figure: a paraboloid function of (x, y) inputs]
In mathematics,
mathematical optimization (or optimization or mathematical programming) refers to the selection of a best element from some set of available alternatives. In the simplest case, an
optimization problem involves
maximizing or minimizing a
real function by selecting
input values of the function and computing the corresponding
values of the function. The solution process includes satisfying general
necessary and sufficient conditions for optimality. For optimization problems,
specialized notation may be used for the function and its input(s). More generally, optimization includes finding the best available
value of some objective function given a defined
domain and may use a variety of different
computational optimization techniques. Economics is closely enough linked to optimization by
agents in an
economy that an influential definition relatedly describes economics
qua science as the "study of human behavior as a relationship between ends and
scarce means" with alternative uses. Optimization problems run through modern economics, many with explicit economic or technical constraints. In microeconomics, the
utility maximization problem and its
dual problem, the
expenditure minimization problem for a given level of utility, are economic optimization problems. Theory posits that
consumers maximize their
utility, subject to their
budget constraints, and that
firms maximize their
profits, subject to their
production functions,
input costs, and market
demand.
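A minimal numerical sketch of the consumer's problem follows; the Cobb-Douglas utility function, prices, and income are assumptions chosen so the numerical answer can be checked against the textbook closed form <math>x^* = \alpha m / p_x</math>.

<syntaxhighlight lang="python">
# Utility maximization sketch: Cobb-Douglas consumer with a budget constraint.
# All functional forms and numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

alpha = 0.3                  # preference weight on good x
px, py, m = 2.0, 1.0, 60.0   # prices and income

def neg_utility(z):
    x, y = z
    return -(x**alpha * y**(1 - alpha))   # minimize the negative of utility

budget = {"type": "ineq", "fun": lambda z: m - px * z[0] - py * z[1]}  # px*x + py*y <= m
res = minimize(neg_utility, x0=[1.0, 1.0], bounds=[(1e-9, None)] * 2,
               constraints=[budget], method="SLSQP")

print(res.x)                                   # numerical demands (x*, y*)
print([alpha * m / px, (1 - alpha) * m / py])  # closed form: [9.0, 42.0]
</syntaxhighlight>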
Economic equilibrium is studied in optimization theory as a key ingredient of economic theorems that, in principle, could be tested against empirical data. Newer developments have occurred in
dynamic programming and modeling optimization with
risk and
uncertainty, including applications to
portfolio theory, the
economics of information, and
search theory, and in the
Arrow–Debreu model of
general equilibrium (also discussed
below). More concretely, many problems are amenable to
analytical (formulaic) solution. Many others may be sufficiently complex to require
numerical methods of solution, aided by software. Linear and nonlinear programming have profoundly affected microeconomics, which had earlier considered only equality constraints. Many of the mathematical economists who received Nobel Prizes in Economics had conducted notable research using linear programming:
Leonid Kantorovich,
Leonid Hurwicz,
Tjalling Koopmans,
Kenneth J. Arrow,
Robert Dorfman,
Paul Samuelson and
Robert Solow.
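As one illustration of optimization under risk mentioned above, the sketch below solves a minimum-variance portfolio problem for a target expected return; the three-asset means, covariance matrix, and target are invented, and the closed-form Lagrangian solution shown is one standard way (among several) to compute it.

<syntaxhighlight lang="python">
# Mean-variance portfolio sketch: minimize w' Sigma w  subject to
#   mu' w = r_target  and  1' w = 1  (short sales allowed).
# The asset data below are invented for illustration.
import numpy as np

mu = np.array([0.05, 0.08, 0.12])          # expected returns
Sigma = np.array([[0.04, 0.01, 0.00],      # covariance of returns
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
r_target = 0.09

# The first-order (Lagrangian) conditions are linear:
#   2*Sigma w - lam1*mu - lam2*1 = 0,  mu'w = r_target,  1'w = 1.
n = len(mu)
ones = np.ones(n)
KKT = np.block([[2 * Sigma, -mu[:, None], -ones[:, None]],
                [mu[None, :], np.zeros((1, 2))],
                [ones[None, :], np.zeros((1, 2))]])
rhs = np.concatenate([np.zeros(n), [r_target, 1.0]])

w = np.linalg.solve(KKT, rhs)[:n]          # optimal portfolio weights
print(w, w @ mu, w @ Sigma @ w)            # weights, achieved return, variance
</syntaxhighlight>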
===Linear optimization===
Linear programming was developed to aid the allocation of resources in firms and in industries during the 1930s in Russia and during the 1940s in the United States. During the
Berlin airlift (1948), linear programming was used to plan the shipment of supplies to prevent Berlin from starving after the Soviet blockade.
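A minimal linear-programming sketch (a two-variable production-mix problem with invented coefficients, not a historical airlift plan) shows the standard form: a linear objective maximized subject to linear inequality constraints.

<syntaxhighlight lang="python">
# Linear programming sketch: choose production quantities x1, x2 to maximize profit
#   max 3*x1 + 5*x2  subject to resource limits (all numbers invented for illustration).
from scipy.optimize import linprog

c = [-3.0, -5.0]                 # linprog minimizes, so negate the profit coefficients
A_ub = [[1.0, 0.0],              # machine-hours:   x1        <= 4
        [0.0, 2.0],              # labor-hours:          2*x2 <= 12
        [3.0, 2.0]]              # raw material: 3*x1 + 2*x2  <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)           # optimal mix (2, 6) and maximum profit 36
</syntaxhighlight>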
===Nonlinear programming===
Extensions to
nonlinear optimization with inequality constraints were achieved in 1951 by
Albert W. Tucker and
Harold Kuhn, who considered the nonlinear
optimization problem:
:Minimize <math>f(x)</math> subject to <math>g_i(x) \leq 0</math> and <math>h_j(x) = 0</math>, where
:<math>f(\cdot)</math> is the function to be minimized,
:<math>g_i(\cdot)</math> are the functions of the <math>m</math> inequality constraints, <math>i = 1, \dots, m</math>, and
:<math>h_j(\cdot)</math> are the functions of the <math>l</math> equality constraints, <math>j = 1, \dots, l</math>.
In allowing inequality constraints, the
Kuhn–Tucker approach generalized the classic method of
Lagrange multipliers, which (until then) had allowed only equality constraints. The Kuhn–Tucker approach inspired further research on Lagrangian duality, including the treatment of inequality constraints. The duality theory of nonlinear programming is particularly satisfactory when applied to
convex minimization problems, which enjoy the
convex-analytic duality theory of
Fenchel and
Rockafellar; this convex duality is particularly strong for
polyhedral convex functions, such as those arising in
linear programming. Lagrangian duality and convex analysis are used daily in
operations research, in the scheduling of power plants, the planning of production schedules for factories, and the routing of airlines (routes, flights, planes, crews). Optimal control theory has been used more extensively in economics to address dynamic problems, especially those concerning
economic growth equilibrium and stability of economic systems, of which a textbook example is
optimal consumption and saving. A crucial distinction is between deterministic and stochastic control models. Other applications of optimal control theory include finance, inventories, and production.
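Returning to the constrained-minimization form stated above, the short sketch below solves a small inequality-constrained problem numerically and then recovers the Kuhn–Tucker (Lagrange) multiplier by hand; the quadratic objective and single linear constraint are assumptions chosen so the answer can be verified analytically.

<syntaxhighlight lang="python">
# Nonlinear programming sketch: minimize f(x, y) = (x-2)^2 + (y-1)^2
# subject to the inequality constraint g(x, y) = x + 2*y - 2 <= 0.
# Problem data are invented so the Kuhn-Tucker conditions can be checked by hand.
import numpy as np
from scipy.optimize import minimize

f = lambda z: (z[0] - 2) ** 2 + (z[1] - 1) ** 2
g = {"type": "ineq", "fun": lambda z: 2 - z[0] - 2 * z[1]}   # SciPy expects g(z) >= 0

res = minimize(f, x0=[0.0, 0.0], constraints=[g], method="SLSQP")
x, y = res.x
print(x, y)                     # optimum ~ (1.6, 0.2), on the constraint boundary

# Kuhn-Tucker stationarity: grad f + mu * grad g = 0 with mu >= 0, where g = x + 2y - 2.
mu = -2 * (x - 2)               # from the x-component: 2(x-2) + mu = 0
print(mu)                       # ~ 0.8  (nonnegative, so the constraint binds)
print(2 * (y - 1) + 2 * mu)     # y-component of stationarity, ~ 0
</syntaxhighlight>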
===Functional analysis===
It was in the course of proving the existence of an optimal equilibrium in his 1937 model of
economic growth that
John von Neumann introduced
functional analytic methods to include
topology in economic theory, in particular,
fixed-point theory through his generalization of
Brouwer's fixed-point theorem. Following von Neumann's program,
Kenneth Arrow and
Gérard Debreu formulated abstract models of economic equilibria using
convex sets and fixed-point theory. In introducing the
Arrow–Debreu model in 1954, they proved the existence (but not the uniqueness) of an equilibrium and also proved that every Walras equilibrium is
Pareto efficient; in general, equilibria need not be unique. In their models, the ("primal") vector space represented
quantities while the
"dual" vector space represented
prices. In Russia, the mathematician
Leonid Kantorovich developed economic models in
partially ordered vector spaces, emphasizing the duality between quantities and prices. Kantorovich renamed
prices as "objectively determined valuations" which were abbreviated in Russian as "o. o. o.", alluding to the difficulty of discussing prices in the Soviet Union. Even in finite dimensions, the concepts of functional analysis have illuminated economic theory, particularly by clarifying the role of prices as
normal vectors to a
supporting hyperplane of a convex set representing production or consumption possibilities. However, problems of describing optimization over time or under uncertainty require the use of infinite-dimensional function spaces, because agents are choosing among functions or
stochastic processes.
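The existence results above are non-constructive, but in small examples an equilibrium can simply be computed. The sketch below finds the market-clearing relative price in a two-good, two-consumer Cobb-Douglas exchange economy; all endowments and preference parameters are invented, and the root-finding approach is a numerical shortcut, not the Arrow–Debreu fixed-point argument itself.

<syntaxhighlight lang="python">
# Sketch: competitive (Walrasian) equilibrium of a 2-good, 2-consumer exchange economy
# with Cobb-Douglas preferences. All parameters are invented for illustration.
import numpy as np
from scipy.optimize import brentq

alphas = np.array([0.3, 0.7])          # consumer i's expenditure share on good x
endow_x = np.array([10.0, 2.0])        # endowments of good x
endow_y = np.array([4.0, 12.0])        # endowments of good y (the numeraire, price 1)

def excess_demand_x(p):
    # Cobb-Douglas demand for x: x_i = alpha_i * income_i / p, income_i = p*e_xi + e_yi.
    income = p * endow_x + endow_y
    return np.sum(alphas * income / p) - np.sum(endow_x)

p_star = brentq(excess_demand_x, 1e-6, 1e6)   # market-clearing relative price p_x / p_y
print(p_star, excess_demand_x(p_star))        # excess demand is ~0 at the equilibrium
</syntaxhighlight>

By Walras's law, clearing the market for good x at this price also clears the market for good y.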
===Game theory===
John von Neumann, working with
Oskar Morgenstern on the
theory of games, broke new mathematical ground in 1944 by extending
functional analytic methods related to
convex sets and
topological fixed-point theory to economic analysis. Earlier
neoclassical theory had only bounded the range of bargaining outcomes, and then only in special cases, for example in a
bilateral monopoly or along the
contract curve of the
Edgeworth box. Von Neumann and Morgenstern's results were similarly weak. Following von Neumann's program, however,
John Nash used fixed-point theory to prove conditions under which the
bargaining problem and
noncooperative games can generate a unique
equilibrium solution. Noncooperative game theory has been adopted as a fundamental aspect of
experimental economics,
behavioral economics,
information economics,
industrial organization, and
political economy. It has also given rise to the subject of
mechanism design (sometimes called reverse game theory), which has private and
public-policy applications as to ways of improving
economic efficiency through incentives for information sharing. In 1994, Nash,
John Harsanyi, and
Reinhard Selten received the
Nobel Memorial Prize in Economic Sciences for their work on non-cooperative games; Harsanyi was recognized principally for his analysis of games of incomplete information and Selten for his refinements of equilibrium in dynamic games. Later work extended their results to
computational methods of modeling.
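The central solution concept can be illustrated computationally; the sketch below enumerates the pure-strategy Nash equilibria of a two-player bimatrix game (the payoff matrices are an invented prisoner's-dilemma-style example, and mixed-strategy equilibria are deliberately out of scope).

<syntaxhighlight lang="python">
# Sketch: enumerate pure-strategy Nash equilibria of a 2-player bimatrix game.
# A[i, j] = row player's payoff, B[i, j] = column player's payoff. Invented payoffs.
import numpy as np

A = np.array([[-1, -3],    # row player's payoffs:    strategies (cooperate, defect)
              [ 0, -2]])
B = np.array([[-1,  0],    # column player's payoffs: strategies (cooperate, defect)
              [-3, -2]])

equilibria = []
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        row_best = A[i, j] >= A[:, j].max()   # row cannot gain by deviating
        col_best = B[i, j] >= B[i, :].max()   # column cannot gain by deviating
        if row_best and col_best:
            equilibria.append((i, j))

print(equilibria)   # [(1, 1)]: mutual defection is the unique pure-strategy equilibrium
</syntaxhighlight>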
===Agent-based computational economics===
Agent-based computational economics (ACE), as a named field, is relatively recent, dating to about the 1990s in published work. It studies economic processes, including whole
economies, as
dynamic systems of interacting
agents over time. As such, it falls in the
paradigm of
complex adaptive systems. In corresponding
agent-based models, agents are not real people but "computational objects modeled as interacting according to rules" ... "whose micro-level interactions create emergent patterns" in space and time. The rules are formulated to predict behavior and social interactions based on incentives and information. The theoretical assumption of
mathematical optimization by agents in equilibrium is replaced by the less restrictive postulate of agents with
bounded rationality adapting to market forces. ACE models apply
numerical methods of analysis to
computer-based simulations of complex dynamic problems for which more conventional methods, such as theorem formulation, may not be readily applicable. Starting from specified initial conditions, the computational
economic system is modeled as evolving as its constituent agents repeatedly interact with each other. In these respects, ACE has been characterized as a bottom-up culture-dish approach to studying the economy. In contrast to other standard modeling methods, ACE events are driven solely by initial conditions, whether or not equilibria exist or are computationally tractable. ACE modeling, however, includes agent adaptation, autonomy, and learning. It has a similarity to, and overlap with,
game theory as an agent-based method for modeling social interactions.
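A deliberately tiny sketch of this bottom-up simulation style is given below: buyer and seller agents with invented private values and costs are matched at random each round, and the average transaction price emerges from their micro-level interactions rather than from an imposed equilibrium condition. The sketch omits the agent adaptation and learning emphasized in ACE, and none of its parameters come from a published model.

<syntaxhighlight lang="python">
# Minimal agent-based market sketch (invented parameters, not a published ACE model).
# Buyers hold private values, sellers hold private costs; each round they are matched
# at random and trade at the midpoint price whenever a trade benefits both sides.
import random

random.seed(0)
buyers  = [random.uniform(0.5, 1.5) for _ in range(50)]   # willingness to pay
sellers = [random.uniform(0.0, 1.0) for _ in range(50)]   # cost of provision

for round_no in range(1, 6):
    random.shuffle(buyers)
    random.shuffle(sellers)
    prices = [(v + c) / 2 for v, c in zip(buyers, sellers) if v >= c]
    avg = sum(prices) / len(prices) if prices else float("nan")
    print(f"round {round_no}: {len(prices)} trades, average price {avg:.3f}")
</syntaxhighlight>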
Topics to which the method has been applied include market structure and
industrial organization,
transaction costs,
welfare economics and
mechanism design, and
macroeconomics. The method is said to benefit from ongoing improvements in modeling techniques in
computer science and from increased computer capabilities. Issues include those common to
experimental economics in general and, by comparison, the development of a common framework for empirical validation and the resolution of open questions in agent-based modeling. The ultimate scientific objective of the method has been described as "test[ing] theoretical findings against real-world data in ways that permit empirically supported theories to cumulate over time, with each researcher's work building appropriately on the work that has gone before".

==Mathematicization of economics==