factors that drive facial behavior to be produced coherently, justifying a lower-level, more biologically based modeling approach than has previously been taken with virtual human faces. Exploring these elements together allows new yet familiar phenomena to occur: new because we do not normally experience this sort of interaction with computers, and familiar because we do with people.
Being able to simulate the underlying drivers of behavior, realistic appearance, and real-time interaction together delivers three aspects of interaction, but virtually:
Explore. Allows us to explore how the interplay of biologically based systems can give rise to an emotionally affecting experience on a visceral, intuitively relatable human level (a minimal sketch of this idea follows the list);
Include movements. Applies an embodied-cognition approach to include the subtle and unconscious movements of the face as a crucial part of mental development and social learning; and
Understand key requirements. Gives
a basis for understanding the key requirements for more natural and adaptive HCI in which the interface has a
face.
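To make the first of these points concrete, consider the minimal sketch below (in Python). It is purely illustrative and is not the BabyX implementation; the class, parameters, and numbers are all hypothetical. It shows the basic character of a biologically based approach: a stimulus perturbs an internal affective state that has its own dynamics, and facial output is read continuously from that state rather than played back as a scripted animation, so expressions arise and fade as a side effect of the internal model.

class ToyAffectiveFace:
    """Illustrative only: an internal affective state drives facial output.

    A hypothetical sketch, not the BabyX neurobehavioral framework.
    """

    def __init__(self, decay=0.9):
        self.valence = 0.0  # negative-to-positive affect, clamped to [-1, 1]
        self.arousal = 0.0  # calm-to-excited, clamped to [0, 1]
        self.decay = decay  # per-step relaxation toward a neutral baseline

    def perceive(self, stim_valence, stim_intensity):
        # A stimulus perturbs the internal state; it is never mapped
        # directly to an expression.
        self.valence = max(-1.0, min(1.0, self.valence + stim_valence * stim_intensity))
        self.arousal = max(0.0, min(1.0, self.arousal + stim_intensity))

    def step(self):
        # Internal dynamics: affect decays toward baseline, so
        # expressions fade naturally instead of being switched off.
        self.valence *= self.decay
        self.arousal *= self.decay
        return self.expression_weights()

    def expression_weights(self):
        # Read blendshape-like activation weights off the current state.
        return {
            "smile": max(0.0, self.valence) * self.arousal,
            "frown": max(0.0, -self.valence) * self.arousal,
            "brow_raise": 0.5 * self.arousal,
        }

face = ToyAffectiveFace()
face.perceive(stim_valence=0.8, stim_intensity=0.9)  # e.g., a caregiver smiles
for t in range(5):
    print(t, {k: round(v, 2) for k, v in face.step().items()})

Even this toy loop hints at the "new yet familiar" quality described earlier: the face responds with a lag and relaxes on its own schedule, which is what interacting with a stateful system, rather than a triggered animation, feels like.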
The virtual infant BabyX is not an end unto itself but allows researchers to study and learn about the nature of human response. There is a co-defined dynamic interaction in which one comes to relate to BabyX no longer as a simulation but as a personal encounter.
In summary, the enormous complexity of modeling human behavior
and dyadic interaction cannot be overestimated, but naturalistic autonomous virtual humans who embody and
process theoretical models of our behavior and reflect them back at us may
give us new insight into core aspects of
our nature and interaction with other
people—and future machines.
Acknowledgments
This work was supported in part by the University of Auckland Vice-Chancellor's Strategic Development Fund, Cross Faculty Research Initiative Fund, Strategic Research Investment Fund, and Ministry of Business, Innovation and Employment "Smart Ideas" program. We also thank Kieran Brennan, Stephanie Khuu, Kai Riemer, and John Reynolds.
Mark Sagar (m.sagar@auckland.ac.nz) is an associate professor in the Auckland Bioengineering Institute and director of the Laboratory for Animate Technologies at the University of Auckland, Auckland, New Zealand, and CEO/founder of Soul Machines Ltd., Auckland, New Zealand.

Mike Seymour (mike.seymour@sydney.edu.au) is a lecturer in information systems at the University of Sydney, Sydney, Australia.

Annette Henderson (a.henderson@auckland.ac.nz) is a developmental psychologist and senior lecturer in the School of Psychology at the University of Auckland, Auckland, New Zealand.

BabyX and Auckland Face Simulator research and development contributors:
David Bullivant (d.bullivant@auckland.ac.nz),
Paul Corballis (p.corballis@auckland.ac.nz),
Oleg Efimov (oefi712@auckland.ac.nz),
Khurram Jawed (mjaw002@auckland.ac.nz),
Ratheesh Kalarot (rkal018@auckland.ac.nz),
Paul Robertson (prob014@auckland.ac.nz),
Werner Ollewagen (woll627@auckland.ac.nz), and
Tim Wu (twu051@auckland.ac.nz), all at the
University of Auckland, Auckland, New Zealand.
© 2016 ACM 0001-0782/16/12 $15.00
Watch the authors discuss their work in this exclusive Communications video: http://cacm.acm.org/videos/creating-connection-with-autonomous-facial-animation