In a week-long study, we found that people were generally delighted when the computer accurately reflected their mood and quite forgiving when it did not.24 However, for a commercial system that will be used for more than two weeks, a user's patience could be tested by a system that regularly makes mistakes and cannot be corrected or learn online.
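A user-correctable system need not be complicated. The sketch below is a minimal, hypothetical illustration of online correction, assuming scikit-learn's SGDClassifier (whose partial_fit method performs incremental updates); the feature vector, mood labels, and seed data are placeholders, not details of any deployed system.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

MOODS = ["negative", "neutral", "positive"]   # placeholder label set

# Warm-start on a small seed set (synthetic here, purely for illustration).
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(30, 8))             # 8 placeholder affect features
y_seed = rng.integers(0, len(MOODS), size=30)

clf = SGDClassifier(loss="log_loss")
clf.partial_fit(X_seed, y_seed, classes=np.arange(len(MOODS)))

def predict_mood(features, user_correction=None):
    """Predict the user's mood; if the user corrects us, learn from it at once."""
    guess = MOODS[int(clf.predict(features.reshape(1, -1))[0])]
    if user_correction is not None and user_correction != guess:
        # One incremental gradient step on the corrected label, so the
        # same mistake becomes less likely the next time it occurs.
        clf.partial_fit(features.reshape(1, -1), [MOODS.index(user_correction)])
    return guess
```

Even this toy loop has the property the study suggests users need: the system visibly improves when told it was wrong.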
Designing systems that can measure, often passively, and log affective
signals presents ethical challenges. As
with any technology, there is the possibility that it will be abused. Much of
the hardware used for sensing affective
signals is small and ubiquitous (for
example, microphones or webcams).
Even measurement of physiological
signals can be performed using these
devices and does not require contact
with the body. Thus, people may not be
aware that an agent is measuring and
responding to their emotional state.
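To make the webcam claim concrete, the following is a minimal sketch of remote photoplethysmography, a common route to contact-free pulse measurement. It assumes a face region of interest has already been tracked and reduced to one mean green-channel value per frame; the subtle color fluctuations in that trace carry the cardiac pulse.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_pulse_bpm(green_means, fs=30.0):
    """Estimate pulse rate from per-frame mean green values of a face ROI.

    green_means: 1-D sequence covering at least ~10 seconds of video.
    fs: camera frame rate in Hz.
    """
    x = np.asarray(green_means, dtype=float)
    x -= x.mean()  # center the trace
    # Band-pass to the plausible pulse band, 0.7-4.0 Hz (42-240 BPM),
    # which removes illumination drift and high-frequency sensor noise.
    b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    x = filtfilt(b, a, x)
    # The dominant spectral peak inside that band is the pulse frequency.
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return float(freqs[band][np.argmax(power[band])] * 60.0)
```

That something this short can recover heart rate from an ordinary webcam is precisely why people may not realize physiological sensing is taking place.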
As described in Becker et al.,2 our ability to render emotional expressiveness in agents is extremely limited today, though this is improving quickly. Still, it should be cautioned that embodied agents and robots will never experience the physiological reactions or the actual emotions that they project (for example, a racing heart or relaxation). The question then becomes how humans react to this limited display of emotionality, given our clear understanding that these agents are not human. Much more experimentation must be done to identify the uncanny valley and find design sweet spots, where more natural expression abilities and ease of use don't cross over into negative experiences.
There is a danger that a person could be manipulated by agents that can interpret their emotional state. People tend to trust agents that appear more attractive, for example, even when they are not reliable.38 Deception of this kind must be avoided. If we are to be interacting with computer agents more and more, there is a likelihood that we will change our behavior to mimic that of the system, much as humans do.26
Other evidence supports this idea, such as data showing that people are changing how they think as a result of using Internet search engines. Specifically, children, who have extensive interactions with an agent that cannot accurately interpret or respond to emotion, may carry those interaction patterns over to the people around them.
Robots can express emotion through physical movement, in addition to having customized hardware for sensing affective signals. Leonardo5 is an example of a robot with a face capable of near human-level expression. Commercially available robots, such as Cozmo (by Anki, Inc.), have engines for expressing limited physical emotional behaviors. However, robots such as these are unlikely to be ubiquitous in the near term. The most common emotional agents are still likely to be virtual.
These agents need not have human appearance; abstracted representations of characters can still communicate significant amounts of emotional information. We can return to perhaps the most famous robot of all, R2-D2, which was scripted to successfully convey many emotions through colors and sounds. Agents such as Cortana could use similar abstractions to both convey emotions and elicit emotion from their users; physical motion is not a prerequisite for complex emotional expression.
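As a concrete illustration, here is a small sketch of how a non-anthropomorphic agent might map an estimated emotional state onto R2-D2-style cues: a light color and a beep pitch. The valence/arousal inputs and the particular mapping (red-to-green hue, pitch rising with arousal) are illustrative assumptions, not an established standard.

```python
import colorsys

def emotion_to_cues(valence: float, arousal: float):
    """Map valence and arousal (each in [-1, 1]) to abstract display cues.

    Returns an (R, G, B) light color and a beep pitch in Hz.
    """
    # Hue sweeps from red (negative valence) through yellow to green (positive).
    hue = (valence + 1.0) / 2.0 * (120.0 / 360.0)
    # Arousal drives intensity: calm reads as dim and low, excited as bright and high.
    brightness = 0.4 + 0.6 * (arousal + 1.0) / 2.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    pitch_hz = 220.0 + 660.0 * (arousal + 1.0) / 2.0
    return (int(r * 255), int(g * 255), int(b * 255)), pitch_hz

# For example, a calm, positive state (valence 0.8, arousal -0.6)
# renders as a dim green light and a low tone near 352 Hz.
```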
It is also important for designers to understand that learning purely from human-human behavior may not always be the most effective approach.35 Considering how to present and sense information is especially important when a user is trying to complete tasks that already require considerable cognitive processing.
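A rough sketch of that principle: gate how an agent surfaces a prompt on an estimate of the user's current load. Everything here is a hypothetical placeholder, the load estimate (however it is derived), the thresholds, and the presentation tiers alike.

```python
def present_prompt(prompt: str, task_load: float, urgent: bool = False) -> str:
    """Choose a presentation tier for an agent prompt.

    task_load is an assumed estimate in [0, 1] of the user's current
    cognitive load (for example, derived from task complexity or pace).
    """
    if urgent:
        return f"interrupt: {prompt}"        # break through regardless of load
    if task_load > 0.7:
        return "defer"                       # queue silently; the user is busy
    if task_load > 0.4:
        return f"peripheral: {prompt[:40]}"  # glanceable, low-detail cue
    return f"full: {prompt}"                 # safe to present everything
```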
Embodied social agents can help express and regulate emotion, which is important in every social interaction. We know that emotional intelligence is a key component of intellect and can strongly influence behavior. Per Reeves,29 research shows that negative experiences with technology are much more strongly remembered, and more likely to drive action, than positive ones, so automated systems need to account for negative interactions in their design; ignoring these incidents could lead to the same bad feelings or, worse, rejection of the system. Embodied social agents are a preferred way to deal with these kinds of experiences. Facial expressions, for example, can signal what responses are appropriate, or when more information is needed, much faster than words or text alone. Likewise, intelligent social agents can be used to display important social and cultural manners, whose influence should not be ignored in design. Reeves' overall point, much like Cassell's,3,9 is that embodied social agents that respect human-to-human interaction protocols can, if designed appropriately, simply make user interfaces easier to use.
In the near term, machines are unlikely to understand all of the complex social norms that humans typically follow, or to detect the emotional states of people with high precision and recall. Therefore, agents will, on occasion, exhibit socially inappropriate behavior. Ideally, an intelligent system should be designed so that it can learn from these mistakes, or at the very least apologize when a mistake is detected.