== Simpler problem ==
One way of achieving the computational performance gain expected of a heuristic is to solve a simpler problem whose solution is also a solution to the initial problem.
== Travelling salesman problem ==
An example of approximation is described by Jon Bentley for solving the travelling salesman problem (TSP): "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" The heuristic was used to select the order in which to draw lines with a pen plotter. TSP is known to be NP-hard, so an optimal solution for even a moderately sized problem is difficult to find. Instead, the greedy algorithm can be used to give a good but not optimal solution (it is an approximation of the optimal answer) in a reasonably short amount of time. The greedy algorithm heuristic says to pick whatever is currently the best next step regardless of whether that prevents (or even makes impossible) good steps later. It is a heuristic in the sense that practice indicates it is a good enough solution, while theory indicates that there are better solutions (and, in some cases, even indicates how much better).
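The greedy approach to TSP is usually implemented as the nearest-neighbour heuristic: always travel to the closest city not yet visited. A minimal sketch, assuming cities given as 2-D coordinates (the function name and representation are illustrative, not from Bentley's text):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: always visit the closest unvisited city next.

    `points` is a list of (x, y) coordinates; returns the visiting order.
    The tour is usually good but rarely optimal.
    """
    def dist(a, b):
        return math.hypot(points[a][0] - points[b][0],
                          points[a][1] - points[b][1])

    unvisited = [c for c in range(len(points)) if c != start]
    tour = [start]
    while unvisited:
        current = tour[-1]
        # Greedy choice: nearest remaining city, ignoring future consequences
        nxt = min(unvisited, key=lambda c: dist(current, c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Example: four cities on a unit square
print(nearest_neighbor_tour([(0, 0), (0, 1), (1, 0), (1, 1)]))
```

Each step is locally optimal, but an early choice can force a long closing edge later, which is exactly the trade-off the greedy heuristic accepts.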
== Search ==
Another example of a heuristic making an algorithm faster occurs in certain search problems. Initially, the heuristic tries every possibility at each step, like the full-space search algorithm, but it can abandon a partial solution as soon as it is already worse than the best complete solution found so far. In such search problems, a heuristic can also be used to try good choices first, so that bad paths can be eliminated early (see alpha–beta pruning). In the case of best-first search algorithms, such as A* search, the heuristic improves the algorithm's convergence while maintaining its correctness as long as the heuristic is admissible.
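The abandon-when-worse idea can be sketched as a depth-first search over tours that prunes any partial route already costlier than the best complete tour found so far (a toy branch-and-bound example over a hypothetical distance matrix, not a production solver):

```python
def pruned_search(dist):
    """Exhaustive tour search that abandons any partial route already
    costlier than the best complete tour found so far."""
    n = len(dist)
    best = {"cost": float("inf"), "tour": None}

    def extend(tour, cost):
        if cost >= best["cost"]:      # prune: already worse than the best
            return
        if len(tour) == n:            # complete tour: close the cycle
            total = cost + dist[tour[-1]][tour[0]]
            if total < best["cost"]:
                best["cost"], best["tour"] = total, tour[:]
            return
        for city in range(n):
            if city not in tour:
                extend(tour + [city], cost + dist[tour[-1]][city])

    extend([0], 0)
    return best["tour"], best["cost"]

# Symmetric 4-city distance matrix (illustrative values)
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(pruned_search(d))
```

The result is still exact; the heuristic only saves work by cutting off branches that provably cannot beat the incumbent solution.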
== Newell and Simon: heuristic search hypothesis ==
In their Turing Award acceptance speech, Allen Newell and Herbert A. Simon discuss the heuristic search hypothesis: a physical symbol system will repeatedly generate and modify known symbol structures until the created structure matches the solution structure. Each step depends upon the step before it, so the heuristic search learns which avenues to pursue and which to disregard by measuring how close the current step is to the solution. Some possibilities are therefore never generated, because they are measured as less likely to complete the solution. A heuristic method can accomplish its task by using search trees; however, instead of generating all possible branches, it is selective at each decision point, expanding only the branches that are more likely to produce solutions.
== Common heuristic algorithms in AI ==
Heuristics are central to many informed search algorithms and optimization techniques in AI:
• A* search: One of the most popular heuristic search techniques due to its ability to find optimal solutions efficiently. A* combines the actual path cost from the start, g(n), with a heuristic estimate of the remaining cost to reach the goal, h(n).
• Greedy best-first search: Expands the node that appears closest to the goal based solely on the heuristic function h(n), without considering the path cost so far. It is fast but does not guarantee an optimal solution.
• Hill climbing: A local search algorithm that iteratively moves from the current state to a better neighboring state. It is simple to implement but can get stuck in local optima (suboptimal solutions that are better than their immediate neighbors but not the overall best). It relies on an objective function (such as a gradient in continuous spaces) to determine the direction of improvement.
• Simulated annealing: A heuristic search technique that explores the search space by occasionally accepting worse solutions to avoid getting stuck in local maxima. Inspired by the annealing process in metallurgy, the algorithm gradually reduces the probability of accepting worse solutions as the search progresses. By allowing exploration of suboptimal solutions, simulated annealing can escape local maxima and find a better overall solution. It is commonly used in optimization problems where the search space is large and complex.
• Genetic algorithms: Inspired by natural selection, these use processes such as selection, crossover, and mutation to evolve a population of candidate solutions over generations.
• Ant colony optimization: A swarm intelligence method inspired by the way ants find paths to food sources, using artificial "pheromones" to guide the search.
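The A* combination of path cost and heuristic estimate can be sketched on a small grid, using the Manhattan distance as h(n); Manhattan distance is admissible here because every move costs 1, so it never overestimates the remaining cost (the grid and function are illustrative):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid of 0 = free, 1 = wall.

    Nodes are expanded in order of f(n) = g(n) + h(n), where g is the
    path cost so far and h is the Manhattan distance to the goal."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Because h never overestimates, the first time the goal is popped the returned path is optimal, which is the admissibility guarantee mentioned above.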
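Simulated annealing's accept-worse-with-decaying-probability rule can be sketched for a one-dimensional minimisation problem (the test function, temperature schedule, and parameter values are illustrative choices, not canonical ones):

```python
import math
import random

def simulated_annealing(f, x0, temp=10.0, cooling=0.95, steps=500, seed=0):
    """Minimise f by random local moves, occasionally accepting a worse
    point with probability exp(-delta / temp); temp decays each step."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for _ in range(steps):
        candidate = x + rng.uniform(-1, 1)   # random neighbouring point
        fc = f(candidate)
        delta = fc - fx
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as temp cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x, fx = candidate, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        temp *= cooling
    return best_x, best_fx

# A bumpy function with many local minima; its global minimum is at x = 0
bumpy = lambda x: x * x + 3 * abs(math.sin(5 * x))
x, fx = simulated_annealing(bumpy, x0=8.0)
```

Early on, the high temperature lets the search jump out of the sine-term's local minima; as the temperature cools, the acceptance rule becomes effectively greedy and the search settles.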
== Antivirus software ==
Antivirus software often uses heuristic rules for detecting viruses and other forms of malware. Heuristic scanning looks for code and/or behavioral patterns common to a class or family of viruses, with different sets of rules for different viruses. If a file or executing process is found to contain matching code patterns and/or to be performing that set of activities, then the scanner infers that the file is infected. The most advanced part of behavior-based heuristic scanning is that it can work against highly randomized self-modifying/mutating (polymorphic) viruses that cannot be easily detected by simpler string-scanning methods. Heuristic scanning thus has the potential to detect future viruses without requiring that the virus first be detected somewhere else, submitted to the scanner's developer, analyzed, and a detection update provided to the scanner's users.

== Pitfalls ==