An objective function (or goal function) specifies the goals of an intelligent agent. An agent is deemed more intelligent if it consistently selects actions that yield outcomes better aligned with its objective function. In effect, the objective function serves as a measure of success. The objective function may be:
• Simple: For example, in a game of Go, the objective function might assign a value of 1 for a win and 0 for a loss.
• Complex: It might require the agent to evaluate and learn from past actions, adapting its behavior based on patterns that have proven effective.

The objective function encapsulates all of the goals the agent is designed to achieve. For rational agents, it also incorporates the trade-offs between potentially conflicting goals. For instance, a self-driving car's objective function might balance factors such as safety, speed, and passenger comfort.

Different terms are used to describe this concept, depending on the context. These include:
• Utility function: Often used in economics and decision theory, representing the desirability of a state.
• Objective function: A general term used in optimization.
• Loss function: Typically used in machine learning, where the goal is to minimize the loss (error).
• Reward function: Used in reinforcement learning.
• Fitness function: Used in evolutionary systems.

Goals, and therefore the objective function, can be:
• Explicitly defined: Programmed directly into the agent.
• Induced: Learned or evolved over time.
• In reinforcement learning, a "reward function" provides feedback, encouraging desired behaviors and discouraging undesirable ones. The agent learns to maximize its cumulative reward.
• In evolutionary systems, a "fitness function" determines which agents are more likely to reproduce. This is analogous to natural selection, where organisms evolve to maximize their chances of survival and reproduction.

Some AI systems, such as
nearest-neighbor, reason by analogy rather than being explicitly goal-driven. However, even these systems can have goals implicitly defined within their training data. Such systems can still be benchmarked by framing the non-goal system as one whose "goal" is to accomplish its narrow classification task.

Systems not traditionally considered agents, like knowledge-representation systems, are sometimes included in the paradigm by framing them as agents with a goal of, for example, answering questions accurately. Here, the concept of an "action" is extended to encompass the "act" of providing an answer. As a further extension, mimicry-driven systems can be framed as agents optimizing a "goal function" based on how closely the IA mimics the desired behavior. While
symbolic AI systems often use an explicit goal function, the paradigm also applies to neural networks and evolutionary computing.
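The benchmarking framing described above can be sketched in code. The following minimal example (hypothetical; all names and data are illustrative, not from the source) treats a 1-nearest-neighbor classifier, which reasons by analogy rather than pursuing a goal, as an agent whose implicit "objective function" is its accuracy on a benchmark set:

```python
# Illustrative sketch: framing a non-goal-driven nearest-neighbor
# classifier as an agent whose "goal" is benchmark accuracy.

def nearest_neighbor_predict(train, query):
    """1-NN: return the label of the closest training example."""
    return min(train, key=lambda ex: abs(ex[0] - query))[1]

def objective(train, benchmark):
    """The implicit 'objective function': accuracy on a benchmark set."""
    correct = sum(nearest_neighbor_predict(train, x) == y for x, y in benchmark)
    return correct / len(benchmark)

# Hypothetical training and benchmark data (feature, label) pairs.
train = [(0.0, "low"), (1.0, "low"), (9.0, "high"), (10.0, "high")]
benchmark = [(0.5, "low"), (9.5, "high"), (2.0, "low")]
print(objective(train, benchmark))  # fraction of correct classifications
```

The classifier itself never represents a goal; the "goal" exists only in the evaluation function used to score it, which is exactly the reframing the paradigm relies on.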
Reinforcement learning can generate intelligent agents that appear to act in ways intended to maximize a "reward function". Sometimes, instead of setting the reward function directly equal to the desired benchmark evaluation function, machine learning programmers use reward shaping to initially give the machine rewards for incremental progress.
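Reward shaping can be illustrated with a toy example (hypothetical, not from the source): an agent must reach position 10 on a number line. The sparse reward pays off only at the goal, while the shaped reward adds a small bonus for each step of incremental progress:

```python
# Illustrative sketch of reward shaping. The sparse reward is the
# benchmark-style signal; the shaping term rewards incremental progress.
GOAL = 10

def sparse_reward(position):
    """Reward only at the goal state."""
    return 1.0 if position == GOAL else 0.0

def shaped_reward(position, previous_position):
    """Base reward plus a bonus proportional to progress made this step."""
    progress_bonus = 0.1 * (position - previous_position)
    return sparse_reward(position) + progress_bonus

# A step from 3 to 4 earns nothing under the sparse scheme,
# but the shaping bonus rewards the incremental progress.
print(sparse_reward(4))     # 0.0
print(shaped_reward(4, 3))  # 0.1
```

The shaping coefficient (0.1 here) is an arbitrary choice; in practice it is tuned so that the shaping signal guides early learning without overwhelming the true objective.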
Yann LeCun stated in 2018, "Most of the learning algorithms that people have come up with essentially consist of minimizing some objective function."
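LeCun's observation can be made concrete with a toy example (hypothetical, not from the source): minimizing the objective function f(x) = (x - 3)^2, whose minimum lies at x = 3, by plain gradient descent:

```python
# Illustrative sketch: learning as minimization of an objective function.
def objective(x):
    return (x - 3) ** 2

def gradient(x):
    return 2 * (x - 3)  # derivative of the objective

x = 0.0                      # arbitrary starting point
for _ in range(100):
    x -= 0.1 * gradient(x)   # step against the gradient

print(round(x, 4))  # converges toward 3.0, the minimizer
```

Most learning algorithms follow this same pattern at much larger scale: a parameterized model, an objective measuring its error, and an update rule that nudges the parameters to reduce that objective.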
AlphaZero chess had a simple objective function: +1 point for each win, and -1 point for each loss. A self-driving car's objective function would be more complex. Evolutionary computing can evolve intelligent agents that appear to act in ways intended to maximize a "fitness function" that influences how many descendants each agent is allowed to leave.

However, AIXI, the theoretical agent that would be maximally intelligent under this paradigm, is uncomputable. In the real world, an IA is constrained by finite time and hardware resources, and scientists compete to produce algorithms that achieve progressively higher scores on benchmark tests with existing hardware.

== Agent function ==