rately mimic human emotional cues
and understanding, may end up “
imprinting” these social agents’ behaviors
and styles of interaction. Another undesirable outcome would be that children
grow up treating agents rudely and that
these behaviors leak into human interactions. Designers should study and
consider how to minimize the chance
of these negative scenarios.
Finally, an affective agent may raise users' expectations of competence or common sense that the system may not possess. In circumstances where this could lead to frustration or other negative outcomes, it might not be appropriate to make a system respond to affective signals.
Conclusion
While research and development of
emotionally sentient computer systems is already 50 years old, only recently have these systems been adopted
for real-world applications. Agents that
sense, interpret, and adapt to human
emotions are impacting healthcare, education, media and communications,
entertainment, and transportation.
However, there remain fundamental questions about the design principles that should govern such systems. From the types of signals that are measured, to the model of emotion that is employed, to the tasks these agents perform and the emotions they express, many research questions still need to be answered.
Agents can take many forms, from
dialogue systems to physically expressive humanoid robots. While intelligent agents are widespread on mobile devices and desktops, those that
have been designed with emotional
sentience have been limited to constrained experimental settings. However, one could argue that the deployment of emotionally sentient systems is at a tipping point. The next major advance will be spurred by large-scale, longitudinal testing of these systems in real-world settings.
This will in part be made possible by
the increasing adoption of intelligent
assistants (for example, Apple’s Siri,
Microsoft’s Cortana, Amazon’s Alexa,
or Google Assistant) and in part by the
availability of social robots.
We have highlighted current design challenges that are limiting adoption of these systems, including how to account for large interpersonal variability, sparsity, and many-to-many mappings between behaviors and emotions, and how to create a system that avoids social faux pas. Emotionally sentient systems also raise ethical issues that demand serious, careful design consideration.
Daniel McDuff (damcduff@microsoft.com) is a researcher at Microsoft Research, Redmond, WA, USA.
Mary Czerwinski (marycz@microsoft.com) is a research manager of the Visualization and Interaction (VIBE) Research Group at Microsoft Research, Redmond, WA, USA.