a person to comfort them or move an
object or make a meal. Again, in this domain, research is revealing the benefits
of robots that express affect appropriate to each situation, such as asking for
something politely or apologizing after
making a mistake. Researchers have
found that robots showing human-like
expressions and positive politeness
are more able to get humans to assist
them and that robots that show sorrow
or sadness after making a mistake are
viewed as more intimate, especially if
the users thought the robots were acting autonomously. 17 Hammer et al. 18 report on several studies that look at the
acceptability of social robots by older
adults. They found that attributes like
appearance, intellectuality, friendliness, and kind-heartedness are important for acceptability. In addition, robot
companions may be viewed more positively if they emulate situationally appropriate social behavior.
Another well-known study also
looked at users’ reactions to interac-
tions with a robot after good or mis-
taken task performance and whether or
not the robot responded emotionally.17
These researchers were interested in questions surrounding unexpected behaviors from robots during collaborative tasks, which are extremely likely to occur; there is currently very little research on the topic. They
thought an affective interaction might
be more useful and trust-enabling than
a more efficient, less human-like inter-
action. What they found was that a humanoid robot that expresses emotions, for instance by apologizing via speech and nonverbal gestures, is much preferred over one without these skills, even though it takes more time on the task and makes errors. They also found that a robot exhibiting more human-like, emotional signals may make humans more likely to feel empathy toward it and to not want to hurt its feelings. Most importantly, the humans
trusted these robots more because of
their increased transparency and feed-
back in communication and emotional
expression. These findings suggest that
robots that express human-like, polite,
emotional signals can significantly mit-
igate dissatisfaction when errors or oth-
er problems occur during human-agent
interaction. These findings could also inform design guidelines for human-robot and other human-agent conversational systems. Such systems will always suffer from imperfect reliability, so a better design principle is to be transparent about outcomes and to involve the human in repairing the error.
As the authors point out, however, balancing reliability with expressiveness is challenging, and an error-free system is unlikely in the near term.
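To make this repair principle concrete, the following is a minimal sketch, assuming a hypothetical robot interface, of how an agent might be transparent about a failed outcome and involve the human in the repair. The names here (Outcome, AffectiveRepairPolicy, respond) are illustrative assumptions, not an existing robot API.

```python
# Sketch of the error-repair design principle discussed above: on failure,
# state the outcome, express an affective apology, and ask the human to help.
# All class and method names are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class Outcome:
    task: str
    succeeded: bool
    reason: str = ""  # human-readable explanation of what went wrong


class AffectiveRepairPolicy:
    """Turns a task outcome into utterances and gestures for the robot."""

    def respond(self, outcome: Outcome) -> list[str]:
        if outcome.succeeded:
            return [f"say: I finished {outcome.task}."]
        # Transparency: state the failure and why it happened.
        return [
            "gesture: lower_head",  # nonverbal signal of sorrow
            f"say: I'm sorry, I wasn't able to {outcome.task}.",
            f"say: {outcome.reason}",
            # Reparation: involve the human instead of silently retrying.
            "say: Could you help me fix this, or should I try again?",
        ]


if __name__ == "__main__":
    policy = AffectiveRepairPolicy()
    result = Outcome(task="bring you the red mug",
                     succeeded=False,
                     reason="I could not find the mug on the table.")
    for action in policy.respond(result):
        print(action)
```

The key design choice in this sketch is that the failure path explains what went wrong and asks the user how to proceed, rather than retrying silently.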
And of course, there is concern
about the uncanny valley, as it has
been shown that if robots look too
human-like, but do not match social
expectations in terms of behavior, then people do not like them and might distrust these systems even more. Anti-robot sentiment, in addition, could be a real concern. People may feel threatened by the proliferation of robots and by the perception that robots will not care for humans or act morally or ethically.
Future Affective Systems
The deployment of intelligent agents
is widespread on mobile devices and
desktops. However, most agents that
have been designed with some emo-
tion sentience have been limited to
constrained experimental settings.
While “cognitive” agents can often
perform effectively with NLP alone,
emotionally sentient agents require
multimodal sensing capabilities and
the ability to express emotion in more
complex ways, which has been very
challenging to achieve in real-world
settings. However, given the review here, it is likely that the next frontier on which these assistants/agents compete with one another will be their ability to emotionally connect with their users.
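As a rough illustration of the multimodal sensing such agents would need, the sketch below fuses hypothetical per-modality affect cues (a facial-expression score and a voice-prosody score) into a single valence estimate. The cue values, weights, and names (AffectCue, fuse_valence) are assumptions for illustration, not a real sensing pipeline; a deployed system would obtain these cues from vision and audio models.

```python
# Sketch of confidence-weighted fusion of multimodal affect cues.
# Values and thresholds are placeholders, not calibrated estimates.

from dataclasses import dataclass


@dataclass
class AffectCue:
    source: str        # e.g., "face" or "voice"
    valence: float     # -1.0 (negative) .. 1.0 (positive)
    confidence: float  # 0.0 .. 1.0, how much to trust this cue


def fuse_valence(cues: list[AffectCue]) -> float:
    """Confidence-weighted average of per-modality valence estimates."""
    total_weight = sum(c.confidence for c in cues)
    if total_weight == 0:
        return 0.0
    return sum(c.valence * c.confidence for c in cues) / total_weight


if __name__ == "__main__":
    cues = [
        AffectCue(source="face", valence=-0.6, confidence=0.8),   # frowning
        AffectCue(source="voice", valence=-0.2, confidence=0.5),  # flat prosody
    ]
    valence = fuse_valence(cues)
    if valence < -0.3:
        print("User seems frustrated; offer help and slow down.")
    else:
        print("Proceed normally.")
    print(f"fused valence = {valence:.2f}")
```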
Social robots with basic facial expression recognition (for example, Pepper, Softbank Inc.) are now on the
market. These devices are likely to elic-
it a richer set of emotions than the typi-
cal interaction with a cognitive agent
designed for information retrieval. As
such, they present the exciting poten-
tial for large-scale, in-situ experimen-
tation and user experience testing.
Large-scale collection and analysis of
affective data is important for improv-
ing affective computing systems, and
deployment of systems in everyday
contexts is one way to achieve this, with
the obvious caveats raised earlier.
Robots can express rich emotion, in