team came up with an alternative solution to a mechanical face, instead creating a so-called LightHead that relies
on a small projector emitting an image
through a wide-angle lens onto a semi-transparent mask. Using a combination of open-source Python and Blender software, the team found it could
generate realistic 3D facial images extremely quickly.
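The team's actual rendering code is not described in detail, but the blend-shape (morph target) technique that tools like Blender use to morph a face quickly can be sketched in a few lines of plain Python. All names and numbers below are illustrative assumptions, not the LightHead implementation:

```python
# Minimal blend-shape sketch: a face is a set of vertex positions;
# each expression is stored as per-vertex offsets from a neutral pose
# and mixed in by a weight between 0 and 1.
# Illustrative only -- not the LightHead code.

def blend_face(neutral, targets, weights):
    """Mix expression targets into the neutral pose.

    neutral: list of (x, y, z) vertex positions
    targets: dict name -> list of (dx, dy, dz) offsets, same length
    weights: dict name -> float in [0, 1]
    """
    face = [list(v) for v in neutral]
    for name, offsets in targets.items():
        w = weights.get(name, 0.0)
        for vert, (dx, dy, dz) in zip(face, offsets):
            vert[0] += w * dx
            vert[1] += w * dy
            vert[2] += w * dz
    return [tuple(v) for v in face]

# Toy example: a two-vertex "face" with a made-up "smile" target.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"smile": [(0.0, 0.1, 0.0), (0.0, 0.2, 0.0)]}

# A half-strength smile applies half of each vertex offset.
print(blend_face(neutral, targets, {"smile": 0.5}))
```

Because the mixing is just weighted vertex arithmetic, a new expression frame costs a single pass over the mesh, which is what makes this approach fast enough for real-time projection.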
“We are very sensitive to the speed
with which people respond and with
which the face does things,” Belpaeme
says. “A robot needs to bridge the gap
between the internal digital world and
the analogue of the external world,” he
explains. “If a user claps his hands in
front of the robot’s face, you expect the
robot to blink.”
Using Psychological Methods
Bill Smart's team at Washington University is taking a different tack on the problem of human-robot interaction, applying psychological methods to explore how human beings react to robots in their midst.
For example, the team has learned
that people tend to regard a robot as
more intelligent if it seems to pause
and look them in the eye. These kinds
of subtle gestures can help humans
form a reliable mental model of how a
robot operates, which in turn can help
them grow more comfortable interacting with the machine.
“If we’re going to have robots and
have them interact, we need a model
of how the robot ‘thinks,’” says Smart.
“We need to design cues to help people
predict what will happen, to figure out
the internal state of the system.”
Smart's team has also pursued a collaboration with the university's drama department, and recently hosted a symposium on human-robot theater. "Theater has a lot to say, but it's hard to tease it out," Smart says. "We don't speak the same language."
Actors provide particularly good role models for robots because they are trained to communicate with their bodies. "If you look at a good actor walking across the stage, you can tell what he's thinking before he ever opens his mouth," says Smart. For actors, these are largely intuitive processes, which they sometimes find difficult to articulate. The challenge for robotics engineers is to translate those expressive impulses into workable algorithms.
In a similar vein, Heather Knight, a
doctoral candidate at Carnegie Mellon
University’s (CMU’s) Robotics Institute,
has pursued a collaboration with the
CMU drama department to create socially intelligent robot performances.
Lately she has entertained stage audiences with Data, a cartoonish humanoid robot with a stand-up routine that
she has honed through observation of
working comedians in New York City.
“What’s new about these robots is
that they’re embodied,” says Knight.
“We immediately judge them in the
way we do people. Not just in physical
terms, but also in terms of how they
move and act.”
To help audiences feel more at ease around her robot, Knight has found that accentuating the physical differences and making the robot even more "robotic" helps people relate to Data.
Another sophisticated cartoonish robot named TokoTokoMaru has recently entertained YouTube audiences with its precise rendition of the famously demanding Japanese Noh dance. Created by robot maker Azusa Amino, the robot relies on aluminum and plastic parts, with Kondo servomotors in its joints to create its sinuous movements.
Azusa had to make the robot as lightweight as possible while maximizing the number of controlled axes to allow for maximal expression. "Robots have fewer joints than humans, and they can't move as fast," he explains, "so to mimic the motions of human dance, I made a conscious effort to move many joints in parallel and move the upper body."
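Moving many joints in parallel typically comes down to driving every servo from one shared clock: each joint interpolates between its own keyframe angles over the same time base, so the whole body arrives at each pose together. A minimal sketch in plain Python, with hypothetical joint names and angles rather than Azusa's actual controller:

```python
# Synchronized keyframe interpolation: all joints share one normalized
# clock t in [0, 1], so they start and finish each pose together.
# Joint names and angles are made up for illustration.

def lerp(a, b, t):
    """Linearly interpolate between angles a and b at time t."""
    return a + (b - a) * t

def pose_at(start_pose, end_pose, t):
    """Interpolate every joint in parallel at normalized time t."""
    return {joint: lerp(start_pose[joint], end_pose[joint], t)
            for joint in start_pose}

# Two keyframes of a hypothetical fan-raising move, angles in degrees.
start = {"shoulder": 0.0, "elbow": 90.0, "wrist": 0.0, "hip": 10.0}
end   = {"shoulder": 45.0, "elbow": 120.0, "wrist": -30.0, "hip": 0.0}

# Halfway through the move, every joint is halfway to its target.
print(pose_at(start, end, 0.5))
```

Because every joint reads the same clock, adding more controlled axes makes the motion richer without making the choreography harder to keep in step.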
Beyond the technical challenges, however, Azusa also had to step into the realm of aesthetics to create a "Japanese-style robot" with a strong sense of character. To accentuate its Japaneseness, the robot wields delicate fans in both hands, while its hair blows in a gentle breeze generated by a ducted fan typically used in radio-controlled airplanes.
Further Reading

Delaunay, F., de Greeff, J., and Belpaeme, T.
A study of a retro-projected robotic face and its effectiveness for gaze reading by humans, Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, Osaka, Japan, March 2–5, 2010.
Knight, H.
Eight lessons learned about non-verbal interactions through robot theater, International Conference on Social Robotics, Amsterdam, The Netherlands, Nov. 24–25, 2011.
Morse, A., de Greeff, J., Belpaeme, T., and Cangelosi, A.
Epigenetic Robotics Architecture (ERA), IEEE Transactions on Autonomous Mental Development 2, 4, Dec. 2010.
Park, I., Kim, J., Lee, J., and Oh, J.
Mechanical design of the humanoid robot platform, HUBO, Advanced Robotics 21, 11, 2007.
Smart, W., Pileggi, A., and Takayama, L.
What do Collaborations with the Arts Have
to Say About Human-Robot Interaction?
Washington University in St. Louis, April 7,
Alex Wright is a writer and information architect based in