It has been 20 years since Rosalind Picard published her seminal book on the subject of affective computing.28 As with other areas of artificial intelligence (AI), however, progress toward her vision has ebbed and flowed. Smaller electronics have transformed wearable computing, enabling signals to be captured and analyzed on comfortable wrist-worn devices. Many consumer-grade smart watches now contain miniaturized physiological sensors that could be used for affect detection. Machine learning, including deep learning, has significantly improved computer-based speech and visual understanding algorithms, such as speech-to-text, facial expression recognition, and scene understanding.
As is the case with other forms of computer technology, there is danger of overhyping the capabilities of affective computing systems. Many of the compelling applications of affective computing have yet to be realized, in part because designing emotionally sentient systems is much more complex than simply sensing affective signals. Understanding and adapting to emotional cues is highly context dependent and relies on tacit knowledge. Compounding this, large interpersonal variability exists in how people express emotions. Humans also have diverse preferences for how an agent responds to them. Personalization is therefore essential to building more compelling systems. The most successful affective agent is likely one that can learn about a person's nuanced expressions and responses and adapt to different situations and contexts.
To do all of this, we must develop models of emotion that are amenable to computation. This is challenging, as emotions are difficult to define, and the relationship between observed signals and states often requires a many-to-many mapping. Furthermore, human knowledge of emotion is predominantly implicit, defined by unwritten, learned social rules. These rules are also culturally dependent13 and not universal. Scientists have proposed numerous models of emotion, each with its own strengths and weaknesses. Nevertheless, the choice of how to define emotions has significant implications for the design of a sentient system.
In this article, we describe the numerous benefits that emotion-aware
paper forms filled out before a doctor or therapy visit. The problem is that memory limits render these methods less effective over extended periods of time, and they are associated with demand effects (changes in behavior resulting from cues as to what constitutes appropriate behavior). Computer programs can now track consumer and patient health, allowing that data to be mined for ideal intervention timing and enabling individual users to reflect on what makes them feel positive or not.24 Recent efforts have successfully used conversational agents to automate the assessment and evaluation of psychological treatments.25 Conversational agents could help with social support, wellness counseling, task completion, and safety, if they are designed with the ability to sense and manage affect and social interaction. This promising new direction could, for example, help stave off rampant problems of loneliness among the elderly.31
Researchers have argued that the relationship between a tutor and a learner plays an important role in improving educational outcomes.39 New educational platforms (for example, EdX and Coursera) are asynchronous and distributed. Automated tutoring systems designed with the ability to understand students' affective responses are therefore very promising.11 There is also a growing literature on using affective agents in training simulations (for example, by the military) to improve realism, evoke empathy, and even stir fear.15 These simulations are critical for preparing soldiers, medical staff, and other personnel for the realities of combat zones and environmental catastrophes.
Affective computing brings newfound realism and immersion to entertainment applications, such as games, interactive media exhibits, and shows. In fact, companies (for example, Affectiva, Inc. and Emotient, Inc.) have recently tracked audiences' affective responses as they were presented with variants of commercials and other kinds of entertainment during sporting events. This practice is becoming increasingly common in marketing and advertising to drive decision-making about marketing content (for example, what content works best, and when and where to air advertisements).
Beyond these examples, emotionally intelligent systems are likely to impact retail, transportation, communications, governance, and policing. Computers are likely to replace human service professionals in many settings, and emotion will play a role in these interactions. This wealth of examples illustrates the impact this technology might have on society. Careful design is therefore critical. Many people currently say they would not trust a machine with important decision-making (for example, money or health management), even when given evidence that machines can perform many tasks, such as data collection, numerical analysis, and planning, more effectively than humans.a This further reinforces the need for research on systems that engender trust and personalized emotional intelligence, so they might be considered more trustworthy, empathetic, socially appropriate, and persuasive. However, it will not always be appropriate to make systems affective. For instance, a personal assistant (PA) could be considered valuable if it performs essential functions, regardless of how natural it is to interact with. Consider, as one example, human air traffic controllers and the highly analytical and symbolic way they interact with airline pilots. Therefore, it is important to consider when it is appropriate to make technology emotion-aware.
As the basis of our position, we turn to a recent article by Byron Reeves29 about interactive, online characters that might have several advantages over alternative system instantiations. Reeves claims that since the interactions humans have with media are fundamentally social, it is important for embodied agents to employ social intelligence in order to be successful. He makes the point that socially intelligent interfaces improve memory and learning and explicitly ground social interaction. He argues that people implicitly react to these online characters (agents) as social actors. Such agents could also increase trust in their interactions, which may become ever more important moving forward as human-appropriate design aspects are incorporated.
a https://hbr.org/2017/02/the-rise-of-ai-makes-emotional-intelligence-more-important?utm_campaign=hbr&utm_source=linkedin&utm_medium=social