Such missing data is an
example of bias that could be addressed
by designing systems to be more
collaborative and conversational, seeking input from the recipients of the suggestions and recommendations.
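To make that concrete, here is a minimal sketch in Python of what seeking input might look like. The Recommendation class, its fields, and the feedback prompt are invented for illustration; this is not any particular system’s API.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    item_id: str
    reason: str
    feedback: list = field(default_factory=list)

    def ask_recipient(self, prompt: str, answer: str) -> None:
        # Record explicit input from the recipient alongside the logged behavior.
        self.feedback.append({"prompt": prompt, "answer": answer})

rec = Recommendation("lamp-042", "similar shoppers viewed this")
rec.ask_recipient("Do you already own something like this?", "yes")
print(rec.feedback)

The point is simply that explicit answers from the recipient become data the system would otherwise never have.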
There are other biases in data; Ricardo Baeza-Yates laid out some of them in a recent CACM article [4]. Examples include activity bias (we tend to focus on activity, ignoring that inactivity is also a signal), data bias (what we decide to measure is itself a biased choice), and sampling bias (how we sample from what we collect can also introduce bias). Baeza-Yates’s article offers more examples and a worthwhile deeper dive into the different forms of bias [5].
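A toy calculation, with invented numbers, illustrates how activity and sampling bias compound: if we estimate interest only from the users who generate activity, the quiet majority vanishes from the estimate.

import random

random.seed(0)

# Toy population: 20% enthusiasts who like the feature and log many events,
# 80% who quietly do not like it and log nothing.
population = ([{"interest": 1, "events": 10}] * 200 +
              [{"interest": 0, "events": 0}] * 800)

true_rate = sum(u["interest"] for u in population) / len(population)

# Sampling in proportion to logged activity silently drops inactive users.
weights = [u["events"] for u in population]
sample = random.choices(population, weights=weights, k=100)
estimated_rate = sum(u["interest"] for u in sample) / len(sample)

print(f"true interest rate:       {true_rate:.2f}")       # 0.20
print(f"activity-biased estimate: {estimated_rate:.2f}")  # 1.00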
Raise awareness of human
cognition and information
presentation. HCI researchers and
designers are well aware that the way information is presented affects how people process it. It is no surprise to us that the presentation format of a recommendation affects how people perceive and receive it. The complexity
of the information, the presentation
format and appearance, the recipients’
context at the time of presentation
and their ability to attend to the
information, the recipients’ expertise,
and the recipients’ goals all affect the
impact of the information.
Design reflective, interrogable,
and conversational systems.
The presentation of suggestions and recommendations needs to be interrogable.
We need to move away from black-box systems toward systems that can
be interrogated and asked about data
provenance, reliability, and reasoning
rationale. There has been a clamor to
create explainable AI (XAI) systems.
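One way to read “interrogable” in implementation terms is that every recommendation should carry its provenance, reliability cues, and rationale as queryable data rather than burying them inside a black box. The sketch below is illustrative only; the class and field names are invented and do not refer to any existing XAI library.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class ExplainedRecommendation:
    item_id: str
    score: float
    data_sources: Tuple[str, ...]   # provenance: where the evidence came from
    evidence_age_days: int          # a rough reliability cue
    rationale: str                  # human-readable reasoning

    def explain(self) -> str:
        # Answer the recipient's "why am I seeing this?" question.
        return (f"Recommended {self.item_id} (score {self.score:.2f}) because "
                f"{self.rationale}. Sources: {', '.join(self.data_sources)}; "
                f"newest evidence is {self.evidence_age_days} days old.")

rec = ExplainedRecommendation(
    item_id="table-lamp-013",
    score=0.61,
    data_sources=("browsing history", "purchases by similar shoppers"),
    evidence_age_days=45,
    rationale="you viewed desk lamps last month",
)
print(rec.explain())

Even a structure this simple gives an interface something concrete to surface when a recipient asks why they are seeing a suggestion.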
Books, products, movies, music—every day, our digital lives are strewn with suggestions and recommendations that invite us to direct our attention to specific
content and spend our money on specific
products. Driving and transportation
routes are suggested to us based on past
traffic patterns and current traffic load.
Search-query autocomplete options
and autocorrect spell-checking reduce
typing effort.
Such suggestions and
recommendations are often extremely
useful.
However, there is a lot of work to do
to make such systems even more useful.
We are surely all aware of how wildly off-base suggestions and recommendations can be, on occasion hilariously so.
Damn You Autocorrect (DYAC) [1] is
just one site that documents ridiculous
autocorrect suggestions.
Yesterday my favorite online-shopping site recommended a truly ugly
skull-shaped table lamp to me. A few
days ago, I was recommended a pair of
shoes that I purchased weeks ago.
Such quirks range from entertaining
to puzzling to irritating, and most
do not have serious consequences.
However, when off-base suggestions
and recommendations are
consequential, the examples are deeply
troubling. Driving directions send
drivers into lakes and onto not-yet-
complete bridges. Apple’s autocorrect
was reported to be completing
medication names incorrectly,
substituting medications with
completely different ones, resulting in
the recommendation of life-threatening
dosages [2]. “Smart” policing and
technologies that use racial profiling
can proffer suggestions to authorities
that result in high-consequence errors
in perpetrator identification.
These are all what the sociologist Robert Merton, writing in the 1930s, dubbed unintended consequences [3].
I believe we can, and should, design
better systems, using design approaches
that focus on reducing the likelihood
of such unintended consequences.
The driving example is both a data
failure and a lack of understanding of
drivers’ attentional state—people are
often distracted. The second example shows that a lack of expertise can leave users unable to distinguish correct from incorrect suggestions: recognizing and disambiguating complicated, unfamiliar drug names is hard. The last example reminds us
that suggestions and recommendations
operate in a milieu of existing social and
cultural biases. All the examples point
to the fact that we, as users, are prey
to overinvestment in the “smarts” of
suggestion/recommendation systems,
over-trusting their suggestions while
neglecting to bring sufficient critical
attention to the fallibility of our
human reasoning. They also point
to the fact that the designers of the
systems did not pay enough attention
to the fundamentals of human
information processing and decision
making “in the wild.”
Where can HCI and design thinking
help? Here are some concrete areas
where we can collaborate with the
architects and developers of suggestion/
recommendation systems to prevent
such errors:
Raise awareness of biases in
the data that underlies suggestions
and recommendations. We get
recommendations for things we
already own because there is a large
gap in the system’s knowledge of
our habits beyond specific areas of interaction.
Designing Recommendations
Elizabeth F. Churchill, Google
COLUMN Ps AND Qs