Judea Pearl, a recipient of the ACM’s
Turing Award, has called for us to
change the kinds of systems we build.
His recent book with Dana Mackenzie
[6] expresses a need for systems that
reason like humans, systems that can
reason with "what if" structures,
moving from inexplicable correlation
to causal reasoning. We need causal
reasoning, reflective systems,
and suggestion/recommendation
rationales presented in
interpretable ways.
More mundanely, and more
readily within reach, let’s build
systems that let us know the certainty
with which a recommendation is made.
People work well with suggestions
and recommendations that have a
rationale; given the evidence for a
suggestion or recommendation, people
can decide whether the suggestion or
recommendation is really for them. By
contrast, when no argument is offered,
there is little to work with, so what is
suggested or recommended is taken on
blind faith. Even offering a confidence
for the suggestion or recommendation
would be helpful.
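To make that concrete, here is a minimal sketch, in Python, of what a recommendation that carries its own confidence and rationale might look like. Everything in it is hypothetical: the field names, the rendering, and the example values are illustrative assumptions, not drawn from any existing system.

from dataclasses import dataclass

@dataclass
class Recommendation:
    # All field names are illustrative assumptions; a real system
    # would derive confidence from its model and rationale from the
    # evidence it actually used.
    item: str          # what is being suggested
    confidence: float  # the system's certainty, in [0.0, 1.0]
    rationale: str     # human-readable evidence for the suggestion

def present(rec: Recommendation) -> str:
    # Surface both the confidence and the rationale so the person
    # can decide whether the suggestion is really for them.
    return (f"Suggested: {rec.item} (confidence {rec.confidence:.0%}) "
            f"because {rec.rationale}.")

print(present(Recommendation(
    item="The Book of Why",
    confidence=0.72,
    rationale="you recently read two related titles on causal inference",
)))

Even this much gives the person something to push back on: a low confidence or a weak rationale is a cue to treat the suggestion skeptically rather than on blind faith.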
Focus on failure—don’t assume
success. While designing and building
systems, we should consider what
is at stake, and ask ourselves: What
is the price of failure, and what
would UNDO look like? We need to
ask what it costs to undo the
consequences of actions taken on the
basis of algorithmic suggestions and
recommendations. It is easy to dismiss
or ignore a product recommendation,
but far harder to recover from the
trauma of being wrongly apprehended
by authorities convinced of your guilt,
their certainty powered by uncritically
accepted and little-understood
computation.
RETHINKING RECOMMENDATIONS
To avoid an escalation of negative
unintended consequences, we need
to rethink
algorithmic recommendation. We
need to think about the why, where,
and how of algorithmic suggestions
and recommendations. We need to
be more proactive in exploring the
potential for high-consequence versus
low-consequence errors. We need to
ask: How trustworthy is the information
presented? How is the information
presented—what is present and what
is missing? What is salient? What is
the expertise of the person to whom the
recommendation or filtered information
is presented? We HCI researchers and
practitioners have been grappling
with these kinds of issues for a long
time—perhaps having more influence
on the design of recommendation
systems would be a good thing.
Endnotes
1. http://www.damnyouautocorrect.com
2. https://twitter.com/slatestarcodex/status/944739157988974592
3. Merton, R.K. The unanticipated consequences of purposive social action. American Sociological Review 1, 6 (1936), 894–904; http://www.d.umn.edu/cla/faculty/jhamlin/4111/2111-home/CD/TheoryClass/Readings/MertonSocialAction.pdf
4. Baeza-Yates, R. Bias on the web. Communications of the ACM 61, 6 (Jun. 2018), 54–61.
5. Also see the November–December 2018 issue of Interactions, which featured special topic articles curated by me, Phillip van Allen, and Mike Kuniavsky.
6. Pearl, J. and Mackenzie, D. The Book of Why: The New Science of Cause and Effect. Basic Books, 2018.
Originally from the U.K., Elizabeth
Churchill has been leading corporate
research at top U.S. companies for the past 18
years. Her research interests include social
media, distributed collaboration, mediated
communication, and ubiquitous and embedded
computing applications.
→ churchill@acm.org
DOI: 10.1145/3292029 Copyright held by author