More recent research finds evidence of the anchoring effect in the
criminal justice system. In 2006,
Birte Englich, Thomas Mussweiler,
and Fritz Strack conducted a study in
which judges threw a pair of dice and
then provided a prison sentence for
an individual convicted of shoplifting.7 The researchers rigged the dice
so they would land on a low number
(low anchor) for half of the participants and a high number (high anchor) for the other half. The judges
who rolled a low number provided
an average sentence of five months,
whereas the judges who rolled a high
number provided an average sentence of eight months. The difference
in responses was statistically significant, and the anchoring index of the
dice roll was 67%. In fact, similar
studies have shown that sentencing
demands,7 motions to dismiss,13 and damages caps15 also act as anchors
that bias judges’ decision-making.
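The anchoring index reported in these studies is commonly computed as the ratio of the spread in the two groups' mean responses to the spread in the anchors themselves. A minimal sketch of that computation, applied to Tversky and Kahneman's roulette-wheel figures (anchors of 10 and 65, mean guesses of 25% and 45%); the function name is illustrative, not from any of the cited studies:

```python
def anchoring_index(mean_low, mean_high, anchor_low, anchor_high):
    """Spread in mean responses divided by spread in anchors,
    as a percentage (0% = no anchoring, 100% = full anchoring)."""
    return 100 * (mean_high - mean_low) / (anchor_high - anchor_low)

# Roulette-wheel experiment: anchors 10 and 65, mean guesses 25% and 45%
print(anchoring_index(25, 45, 10, 65))  # ~36.4
```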
Methods. This second experiment thus sought to investigate whether algorithmic risk scores influence human decisions by serving as anchors. The experiment entailed a 1 × 2 between-subjects design where the two treatments were as follows: low-score, in which participants viewed the defendant profile accompanied by a low-risk score; and high-score, in which participants viewed the defendant profile accompanied by a high-risk score.
The low-score and high-score treatments assigned risk scores based on
the original COMPAS score according
to the following formulas:
Low-score = max(0, COMPAS − 3)
High-score = min(10, COMPAS + 3)
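These two formulas can be written directly as code; a minimal sketch, with function names chosen for illustration rather than taken from the study's materials:

```python
def low_score(compas: int) -> int:
    """Low-anchor treatment: shift the COMPAS score down 3, floored at 0."""
    return max(0, compas - 3)

def high_score(compas: int) -> int:
    """High-anchor treatment: shift the COMPAS score up 3, capped at 10."""
    return min(10, compas + 3)

# Example: a defendant with an original COMPAS score of 4
print(low_score(4))   # 1
print(high_score(4))  # 7
```

The floor and cap keep the anchored scores within the 0–10 range displayed to participants, so a defendant near either end of the scale still receives a valid score.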
This new experiment mirrored the
previous one: Participants evaluated
the same 40 defendants, met the same
requirements, and received the same
payment. The study also employed the same format on the Qualtrics platform.
Results. Figure 4 shows the average scores of participants assigned to defendants versus those provided in the defendant profiles in the low-score and high-score treatments. Error bars represent the 95% confidence intervals. The scores that participants assigned defendants highly correlate with those that they viewed in the defendant profiles.

Algorithmic predictions may influence humans'
decisions through a subtle cognitive
bias known as the anchoring effect:
when individuals assimilate their estimates to a previously considered standard. Amos Tversky and Daniel Kahneman first theorized the anchoring heuristic in 1974 in a comprehensive paper that explains the psychological basis of the anchoring effect and provides evidence of the phenomenon through numerous experiments.19 In one experiment, for example, participants spun a roulette wheel that was predetermined to stop at either 10 (low anchor) or 65 (high anchor). After spinning the wheel, participants estimated the percentage of African nations in the United Nations. Tversky and Kahneman found that participants who spun a 10 provided an average guess of 25%, while those who spun a 65 provided an average guess of 45%. They rationalized these results by explaining that people make estimates by starting from an initial value, and their adjustments from this quantity are typically insufficient.
While initial experiments investigating the anchoring effect recruited amateur participants,19 researchers
also observed similar anchoring effects
among experts. In their seminal study
from 1987, Gregory Northcraft and
Margaret Neale recruited real estate
agents to visit a home, review a detailed
booklet containing information about
the property, and then assess the value
of the house.16 The researchers listed a
low asking price in the booklet for one
group (low anchor) and a high asking
price for another group (high anchor).
The agents who viewed the high asking
price provided valuations 41% greater
than those who viewed the lower price,
and the anchoring index of the listing
price was likewise 41%. Northcraft and
Neale conducted an identical experiment among business school students
with no real estate experience and observed similar results: the students in
the high anchor treatment answered
with valuations that exceeded those in
the low anchor treatment by 48%, and
the anchoring index of the listing price
was also 48%. Their findings, therefore,
suggested that anchors such as listing
prices bias the decisions of trained
professionals and inexperienced individuals similarly.
Even if algorithms do not officially make decisions, they anchor human decisions in serious ways.