general human predispositions. Predictable biases empirically demonstrated by
their work include “anchoring,” the “availability bias,” and “representativeness.”
Anchoring simply means that past decisions disproportionately affect future
decisions. Rather than approaching each problem as a blank slate, decisionmakers
tend to evaluate new conditions in the context of past decisions. Anchoring
has been shown to affect the baseline level for comparative judgments among
even the brightest of college students (Ariely 2009; Kahneman 2011). Related to
this tendency is the availability bias, in which people assess the pros and cons
of any decision on the basis of the most readily available information, often recent
experiences, particularly if such experiences were highly salient or traumatic. The
representativeness bias refers to individuals’ tendency to draw on existing
stereotypes when attempting to discern patterns in others’ behavior.
Other prominent biases have also been well documented, including the status quo
bias and what researchers have described as “loss aversion” (Kahneman, Knetsch,
and Thaler 1991). In situations involving uncertainty, individuals will take fewer
risks if the gains from a decision are perceived as smaller than a potential
loss. Conversely, the potential gains from any decision must more than
offset (often at least double) the potential loss; in short, the ratio of gains to losses
is not the 1:1 relationship that a model of pure rationality would predict.
Kahneman and his colleagues have labeled these tendencies “anomalies,” that is,
persistent and predictable deviations from rational decisionmaking. Since the
groundbreaking work of Kahneman, Tversky, and others, there has been an effort
to use these anomalies to build a broader theoretical framework, a point we return to
later in the chapter and in Chapter 8.
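As a minimal sketch of the loss-aversion asymmetry noted above (the linear value function and the coefficient λ ≈ 2 are simplifying assumptions drawn from the broader prospect-theory literature, not figures given in this passage), losses can be modeled as weighing roughly twice as heavily as equivalent gains:

\[
v(x) =
\begin{cases}
x, & x \ge 0 \ \text{(gains)} \\
\lambda x, & x < 0 \ \text{(losses)},
\end{cases}
\qquad \lambda \approx 2.
\]

Under this sketch, a 50–50 gamble to win or lose $100 is valued at ½(100) + ½(−200) = −50 rather than the 0 implied by pure expected-value rationality, so the gamble becomes attractive only when the potential gain is at least λ times, here roughly double, the potential loss.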
Tversky and Kahneman spawned a new generation of researchers working under the
moniker of “heuristics and biases” (Thaler and Sunstein 2009, 23). In conditions
of incomplete information, decisionmakers tend to demonstrate any or all of
the heuristics or biases just noted. Such tendencies have been well documented
since Tversky and Kahneman’s original work, with scholars focusing on biases
in information processing, developing the appropriate methodologies for test-
ing such biases (mostly experimental), and devising a new theoretical frame-
work for explaining such tendencies. Kahneman (2011) would later adopt the
terms “System 1” and “System 2” to describe the two ways in which decisions are
made. System 1 “operates automatically and quickly,” whereas System 2 is more
deliberate, directing “attention to the effortful mental activities that demand it”
(20–21). Key to this framework is the notion that System 2 must be forced into
action; otherwise, decisions will be made using System 1, which is prone to the
aforementioned biases.
Reviews of this emerging research agenda can be found in behavioral eco-
nomics (Camerer, Loewenstein, and Rabin 2004), experimental economics (Kagel
and Roth 1995), and political science (Ostrom, Gardner, and Walker 1994), as
well as more general audience introductions (Ariely 2009; Brafman and Brafman
2008; Kahneman 2011). Given the rich tradition in public administration on