In their initial research, Tversky and Kahneman proposed three heuristics: availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called "judgment heuristics". Another type, called "evaluation heuristics", is used to judge the desirability of possible choices.
Informal models of heuristics include the following:
• Affect heuristic: A mental shortcut in which current emotion influences the decision. The emotional response, rather than deliberate reasoning, plays the lead role in making the decision or solving the problem quickly and efficiently. It is used when judging the risks and benefits of something, depending on the positive or negative feelings that people associate with a stimulus. It can also be considered a gut decision: if the gut feeling is positive, the benefits are judged to be high and the risks low.
• Anchoring and adjustment: Describes the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions. For example, in one study, children were asked to estimate the number of jellybeans in a jar. Groups of children were given either a high or a low "base" number (the anchor), and their estimates fell closer to the anchor they had been given.
• Availability heuristic: A mental shortcut that occurs when people make judgements about the probability of events based on the ease with which examples come to mind. For example, in a 1973 Tversky and Kahneman experiment, the majority of participants reported that there were more words in the English language starting with the letter K than words with K as the third letter. In fact, there are roughly twice as many words with K as the third letter as words that start with K, but words that start with K are much easier to recall and bring to mind.
• Balance heuristic: Applies when an individual weighs the negative and positive effects of a decision so that the choice becomes obvious. It is a mental shortcut that helps individuals achieve peace and harmony in their lives while attempting to avoid the potential risks or consequences of a decision.
• Base rate heuristic: A mental shortcut that uses relevant statistical data to judge the probability of an outcome when a decision involves uncertainty. A common failure when using this heuristic is misjudging the likelihood of a situation by neglecting the base rate: for example, given a test for a disease with 90% accuracy, people may conclude there is a 90% chance they have the disease, even though the disease affects only 1 in 500 people.
• Common sense heuristic: Used frequently when the potential outcomes of a decision appear obvious. For example, when your television remote stops working, you would probably change the batteries.
• Default heuristic: In real-world settings, consumers commonly apply this heuristic by selecting the default option, regardless of whether that option matches their preference.
• Educated guess heuristic: Responding to a decision using relevant information the individual has stored relating to the problem.
• Effort heuristic: The worth of an object is judged by the amount of effort that went into producing it: objects that took longer to produce are deemed more valuable, while objects that took less time are deemed less valuable. The same applies to the effort spent acquiring an object: an object earned through work is valued more highly than an identical object found by chance, such as on the side of the street.
• Escalation of commitment: Describes the phenomenon where people justify increased investment in a decision on the basis of cumulative prior investment, despite new evidence suggesting that the cost, from today onward, of continuing the decision outweighs the expected benefit. This is related to the sunk cost fallacy.
• Fairness heuristic: Describes how an individual reacts to a decision made by an authority figure. If the decision is enacted in a fair manner, the individual is more likely to comply voluntarily than if it is enacted unfairly.
• Familiarity heuristic: A mental shortcut applied to various situations in which individuals assume that the circumstances underlying past behavior still hold true for the present situation, and that past behavior can therefore be correctly applied to the new situation. It is especially prevalent when the individual is under high cognitive load.
• Naïve diversification: When asked to make several choices at once, people tend to diversify more than when making the same type of decision sequentially.
• Peak–end rule: A person's subjective perceptions during the most intense and final moments of an event are averaged together into a single judgment (see the code sketch after this list). For example, a person might judge the difficulty of a workout by taking into consideration only the most demanding part of the workout (e.g., Tabata sprints) and what happens at the very end (e.g., a cool-down). In this way, a difficult workout such as the one described here could be perceived as "easier" than a more relaxed workout that did not vary in intensity (e.g., 45 minutes of cycling in aerobic zone 3, without a cool-down).
• Representativeness heuristic: A mental shortcut used when making judgements about the probability of an event, or the characteristics of an individual, under uncertainty, based on a perceived resemblance between the object or event and a typical case or category. In other words, this heuristic is the tendency to evaluate something based on how similar it is to a prototype or stereotype that already exists in the mind of the perceiver. It often involves overlooking statistical probabilities or other relevant information and instead making assumptions based on matching attributes between the specific object and a general category. For example, in a 1982 Tversky and Kahneman experiment, participants were given a description of a woman named Linda, written so that Linda sounded likely to be a feminist. Eighty to ninety percent of participants, choosing from two options, said it was more likely that Linda was a feminist and a bank teller than only a bank teller. The likelihood of two events occurring together cannot be greater than that of either event individually, so the representativeness heuristic is exemplary of the conjunction fallacy.
• Scarcity heuristic: The perception of scarcity influences judgments about quality, utility, and desirability; equating rarity with value can cause systematic errors or a cognitive bias in how we assess objects, events, or opportunities.
• Simulation heuristic: A simplified mental strategy in which people judge the likelihood of an event by how easy it is to mentally picture the event happening. People feel stronger regret over outcomes that are easier to imagine having turned out differently than over outcomes that are harder to imagine. The heuristic is also thought to be used to predict the likelihood of another person's behavior: people mentally undo events they have experienced and then run mental simulations of those events with altered input values.
• Social proof: Also known as informational social influence. The name "social proof" was coined by Robert Cialdini in his 1984 book Influence. People copy the actions of others, and the effect is more prominent when people are uncertain how to behave, especially in ambiguous social situations.
• Working backward heuristic: When an individual assumes they have already solved a problem, they work backwards to find out how to reach that solution.
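As a minimal sketch of the peak–end rule from the list above, the following code contrasts a peak–end judgment with a simple average. All intensity ratings are hypothetical, chosen only to mirror the workout example; this is an illustration, not a model from the literature:

```python
# Minimal sketch of the peak-end rule: the remembered judgment is modelled
# as the average of the most intense moment and the final moment,
# ignoring duration and everything in between.

def peak_end_score(intensities):
    """Peak-end judgment: mean of the most intense and the final moment."""
    return (max(intensities) + intensities[-1]) / 2

def mean_score(intensities):
    """Simple average over all moments, for comparison."""
    return sum(intensities) / len(intensities)

# Hypothetical per-segment difficulty ratings (0-10) for two workouts.
varied = [7, 8, 9, 7, 1]   # hard Tabata sprints, then an easy cool-down
steady = [6, 6, 6, 6, 6]   # steady zone-3 cycling, no cool-down

print(mean_score(varied), mean_score(steady))          # 6.4 vs 6.0: varied is harder overall
print(peak_end_score(varied), peak_end_score(steady))  # 5.0 vs 6.0: varied is remembered as easier
```

Under this scoring rule, the workout that was harder on average is remembered as the easier one, because its peak is offset by a gentle ending.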
Availability
When an infrequent event can be brought easily and vividly to mind, the availability heuristic leads people to overestimate its likelihood. For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or a terrorist attack. Dramatic, violent deaths are usually more highly publicised and therefore have a higher availability. On the other hand, common but mundane events, such as deaths from suicide, stroke, and diabetes, are hard to bring to mind, so their likelihoods tend to be underestimated. This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence. The effect of imagination on subjective likelihood has been replicated by several other researchers.
A concept's availability can be affected by how recently and how frequently it has been brought to mind. In one study, subjects were given partial sentences to complete. The words were selected to activate the concept either of hostility or of kindness: a process known as
priming. They then had to interpret the behavior of a man described in a short, ambiguous story. Their interpretation was biased towards the emotion they had been primed with: the more priming, the greater the effect. A greater interval between the initial task and the judgment decreased the effect. Tversky and Kahneman offered the availability heuristic as an explanation for
illusory correlations, in which people wrongly judge two events to be associated with each other. They explained that people judge correlation on the basis of the ease of imagining or recalling the two events together.
Representativeness
The representativeness heuristic is effective for some problems, but it involves attending to the particular characteristics of the individual while ignoring how common those categories are in the population (called the
base rates). Thus, people can overestimate the likelihood that something has a very rare property, or underestimate the likelihood of a very common property. This is called the
base rate fallacy. Representativeness explains this and several other ways in which human judgments break the laws of probability. The representativeness heuristic is also an explanation of how people judge cause and effect: when they make these judgements on the basis of similarity, they are also said to be using the representativeness heuristic. This can lead to a bias, incorrectly finding causal relationships between things that resemble one another and missing them when the cause and effect are very different. Examples of this include both the belief that "emotionally relevant events ought to have emotionally relevant causes", and magical
associative thinking.
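To make the base-rate fallacy concrete, consider the disease-test example from the list above (90% accuracy, prevalence of 1 in 500). Assuming, for illustration, that "90% accuracy" means the test is correct 90% of the time for both sick and healthy people, Bayes' theorem gives the probability of disease after a positive result:

```latex
P(\text{disease} \mid \text{positive})
  = \frac{0.9 \cdot \frac{1}{500}}
         {0.9 \cdot \frac{1}{500} + 0.1 \cdot \frac{499}{500}}
  \approx 0.018
```

So even after a positive test, the chance of actually having the disease is only about 1.8%, not 90%; neglecting the 1-in-500 base rate is what makes the intuitive answer so far off.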
Representativeness of base rates
A 1973 experiment used a psychological profile of Tom W., a fictional graduate student. One group of subjects had to rate Tom's similarity to a typical student in each of nine academic areas (including Law, Engineering and Library Science). Another group had to rate how likely it is that Tom specialised in each area. If these ratings of likelihood are governed by probability, then they should resemble the base rates, i.e. the proportion of students in each of the nine areas (which had been separately estimated by a third group). If people based their judgments on probability, they would say that Tom is more likely to study Humanities than Library Science, because there are many more Humanities students, and the additional information in the profile is vague and unreliable. Instead, the ratings of likelihood matched the ratings of similarity almost perfectly, both in this study and a similar one where subjects judged the likelihood of a fictional woman taking different careers. This suggests that rather than estimating probability using base rates, subjects had substituted the more accessible attribute of similarity.
Conjunction fallacy
Tversky and Kahneman tried, without success, what they described as "a series of increasingly desperate manipulations" to get their subjects to recognise the logical error in the Linda problem described above. In one variation, subjects had to choose between a logical explanation of why "Linda is a bank teller" is more likely, and a deliberately illogical
argument which said that "Linda is a feminist bank teller" is more likely "because she resembles an active feminist more than she resembles a bank teller". Sixty-five percent of subjects found the illogical argument more convincing. Other researchers carried out variations of this study, exploring the possibility that people had misunderstood the question; the error persisted. Individuals with high Cognitive Reflection Test (CRT) scores have been shown to be significantly less likely to commit the conjunction fallacy. The error disappears when the question is posed in terms of frequencies: everyone in these versions of the study recognised that, out of 100 people fitting an outline description, the conjunction statement ("She is X and Y") cannot apply to more people than the general statement ("She is X").
Ignorance of sample size
Tversky and Kahneman asked subjects to consider a problem about random variation. Even if, on average, exactly half of the babies born in a hospital are male, the ratio will not be exactly half in every time period: on some days more girls will be born, and on others more boys. The question was: does the likelihood of deviating from exactly half depend on whether there are many or few births per day? It is a well-established consequence of sampling theory that proportions will vary much more from day to day when the typical number of births per day is small. However, people's answers to the problem do not reflect this fact. They typically reply that the number of births in the hospital makes no difference to the likelihood of more than 60% male babies in one day. The explanation in terms of the heuristic is that people consider only how representative the figure of 60% is of the previously given average of 50%.
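A small simulation shows why sample size matters in the hospital problem. The daily birth counts used here (15 and 45) are illustrative assumptions, not figures from the original study:

```python
# Estimates the probability that more than 60% of one day's births are boys,
# for a small and a large hospital, by Monte Carlo simulation.
import random

def prob_over_60_percent_boys(births_per_day, days=100_000):
    """Fraction of simulated days on which boys exceed 60% of births."""
    over = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys > 0.6 * births_per_day:
            over += 1
    return over / days

# Illustrative hospital sizes only.
print(prob_over_60_percent_boys(15))  # about 0.15: the small hospital deviates often
print(prob_over_60_percent_boys(45))  # about 0.07: the large hospital deviates rarely
```

The smaller sample fluctuates far more, which is exactly the fact that subjects' answers fail to reflect.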
Dilution effect
Richard E. Nisbett and colleagues suggest that representativeness explains the dilution effect, in which irrelevant information weakens the effect of a stereotype. Subjects in one study were asked whether "Paul" or "Susan" was more likely to be assertive, given no other information than their first names. They rated Paul as more assertive, apparently basing their judgment on a gender stereotype. Another group, told that Paul's and Susan's mothers each commute to work in a bank, did not show this stereotype effect; they rated Paul and Susan as equally assertive. The explanation is that the additional information made Paul and Susan less representative of men or women in general, so the subjects' expectations about men and women had a weaker effect. In other words, irrelevant, non-diagnostic information about a target can dilute the influence of relevant, diagnostic information.
Misperception of randomness
Representativeness explains systematic errors that people make when judging the probability of random events. For example, in a sequence of coin tosses, each of which comes up heads (H) or tails (T), people reliably tend to judge a clearly patterned sequence such as HHHTTT as less likely than a less patterned sequence such as HTHTTH. These sequences have exactly the same probability, but people tend to see the more clearly patterned sequence as less representative of randomness, and so less likely to result from a random process. Tversky and Kahneman argued that this effect underlies the gambler's fallacy: a tendency to expect outcomes to even out over the short run, such as expecting a roulette wheel to come up black because the last several spins came up red. They emphasised that even experts in statistics were susceptible to this illusion: in a 1971 survey of professional psychologists, they found that respondents expected samples to be overly representative of the population they were drawn from. As a result, the psychologists systematically overestimated the statistical power of their tests, and underestimated the sample size needed for a meaningful test of their hypotheses.
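The equal-probability claim above follows directly from the independence of fair coin tosses: every specific sequence of six tosses has the same probability,

```latex
P(\text{HHHTTT}) = P(\text{HTHTTH}) = \left(\tfrac{1}{2}\right)^{6} = \tfrac{1}{64}
```

Only the appearance of pattern differs; the probabilities are identical, which is why expecting a "correction" after a run of reds is a fallacy.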
Anchoring and adjustment
According to Tversky and Kahneman's original description, anchoring and adjustment involves starting from a readily available number (the "anchor") and shifting either up or down to reach an answer that seems plausible. The anchoring effect has been demonstrated by a wide variety of experiments, both in laboratories and in the real world. The effect is stronger when people have to make their judgments quickly. Some experiments asked subjects whether the average temperature in San Francisco is more or less than 558 degrees, or whether there had been more or fewer than 100,025 top ten albums by
The Beatles. These deliberately absurd anchors still affected estimates of the true numbers.
Anchoring results in a particularly strong bias when estimates are stated in the form of a confidence interval. An example is when people predict the value of a stock market index on a particular day by defining an upper and lower bound so that they are 98% confident the true value will fall in that range. A reliable finding is that people anchor their upper and lower bounds too close to their best estimate. This leads to an
overconfidence effect. One much-replicated finding is that when people are 98% certain that a number is in a particular range, they are wrong about thirty to forty percent of the time.
Anchoring also causes particular difficulty when many numbers are combined into a composite judgment. Tversky and Kahneman demonstrated this by asking a group of people to rapidly estimate the product 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1, while another group estimated the same product in the reverse order, 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8. Both groups underestimated the answer by a wide margin, but the group given the ascending sequence gave a significantly smaller average estimate. A corresponding effect happens when people estimate the probability of multiple events happening in sequence, such as an
accumulator bet in horse racing. For this kind of judgment, anchoring on the individual probabilities results in an overestimation of the combined probability.
When a stack of soup cans in a supermarket was labelled "Limit 12 per customer", the label influenced customers to buy more cans.
Anchoring and adjustment has also been shown to affect grades given to students. In one experiment, 48 teachers were given bundles of student essays, each of which had to be graded and returned. They were also given a fictional list of the students' previous grades. The mean of these grades affected the grades that teachers awarded for the essay.
One study showed that anchoring affected the sentences handed down in a fictional rape trial. In a similar mock trial, the subjects took the role of jurors in a civil case. They were either asked to award damages "in the range from $15 million to $50 million" or "in the range from $50 million to $150 million". Although the facts of the case were the same each time, jurors given the higher range decided on an award that was about three times higher. This happened even though the subjects were explicitly warned not to treat the requests as evidence.
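Two of the quantities above can be worked out exactly. The rapid-estimation product is 8! = 40,320, so both groups' estimates fell far short of the true value. And for an accumulator bet, the combined probability is the product of the individual probabilities; with an assumed, purely illustrative 80% chance of winning each of five legs:

```latex
8! = 40{,}320, \qquad P(\text{all five legs win}) = 0.8^{5} \approx 0.33
```

Anchoring on the 0.8 figure for a single leg therefore produces a large overestimate of the combined probability.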
Affect heuristic "
Affect", in this context, is a
feeling such as fear, pleasure or surprise. It is shorter in duration than a
mood, occurring rapidly and involuntarily in response to a
stimulus. While reading the words "lung cancer" might generate an affect of
dread, the words "mother's love" can create an affect of
affection and comfort. When people use affect ("gut responses") to judge benefits or risks, they are using the affect heuristic. The affect heuristic has been used to explain why messages
framed to activate emotions are more persuasive than those framed in a purely factual way.
Escalation of commitment heuristic
Decision makers, whether at an organisational or national level, can face the dilemma of whether to continue with an operation or withdraw from it. The escalation of commitment heuristic describes how people often lock themselves into losing courses of action in the hope that investing more resources will turn losses around. Escalation of commitment can be expected especially in situations where the decision maker can claim credit for operational success while losses and operational failure are absorbed by others, such as a larger entity. Cognitive determinants that can influence escalation of commitment include self-justification, problem framing, sunk costs, goal substitution, self-efficacy, accountability, and illusion of control. The general flow of events that leads to escalation of commitment is as follows (a code sketch of the underlying decision error follows this section):
• A large amount of resources is invested into an operation and cannot be recovered (sunk cost).
• The operation performs poorly and provides the decision maker with negative feedback.
• The decision maker continues to pour investment into the operation in the hope of turning it around, reflecting the escalation of commitment heuristic.
Aside from decision makers in firms and organisations, escalation of commitment also applies to decisions made by national leaders, for example decisions about further investment in wars. In a war-based scenario, the costs are predominantly borne by soldiers and taxpayers. Decision makers in war scenarios often do not have to bear the costs of their decisions directly or immediately at the same level as soldiers and taxpayers do, which makes the decision to keep investing easier. This reflects the escalation of commitment heuristic and can create a cyclical process of reinvestment with the potential to cause long-term economic, social, and political problems at both local and global scales.
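The decision error behind this flow can be sketched in a few lines. A rational stopping rule compares only the expected future benefit with the future cost; the escalating rule lets the unrecoverable sunk cost pull the decision toward continuing. All function names and numbers below are illustrative assumptions, not a model from the literature:

```python
# Contrast a rational stopping rule with a sunk-cost-driven one.
# A rational decision maker ignores what has already been spent:
# from today onward, only future costs and benefits matter.

def rational_continue(expected_future_benefit, future_cost):
    """Continue only if the operation pays off from today onward."""
    return expected_future_benefit > future_cost

def escalating_continue(expected_future_benefit, future_cost, sunk_cost):
    """Escalation pattern: prior investment is (wrongly) treated as
    recoverable by continuing, biasing the choice toward more investment."""
    return expected_future_benefit + sunk_cost > future_cost

# Illustrative numbers: 10M already spent, 5M needed to finish,
# but finishing is expected to return only 3M.
sunk, future_cost, benefit = 10.0, 5.0, 3.0
print(rational_continue(benefit, future_cost))           # False: cut losses now
print(escalating_continue(benefit, future_cost, sunk))   # True: escalate anyway
```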
Others
• Control heuristic
• Contagion heuristic
• Effort heuristic
• Familiarity heuristic
• Fluency heuristic
• Gaze heuristic
• Hot-hand fallacy
• Naive diversification
• Peak–end rule
• Recognition heuristic
• Scarcity heuristic
• Similarity heuristic
• Simulation heuristic
• Social proof
==Theories==