Society | DOI: 10.1145/3339470	Gregory Mone

What Makes a Robot Likable?
Interactions with robotics teach us more about people.

“And how did that make you feel?” is a question the SimSensei virtual therapist, shown above, might ask. Image courtesy of USC Institute for Creative Technologies.

ON SCREEN, THE virtual character sits in a comfortable purple chair. She wears plain pants, a turquoise shirt, and a slim jacket with the sleeves rolled up past her elbows. Her short dark hair is swept to one side, and her ethnicity is intentionally ambiguous, according to her developers, a team of researchers with the University of Southern California (USC) Institute for Creative Technologies. Some of the people who have interacted with her assume she is Asian; others conclude she has a completely different ethnicity. “People have come up and said that they’re so thankful we paired them with someone of their race because it helped them connect,” recalls Gale Lucas, a research assistant professor at USC.

The platform, SimSensei, is designed for one-on-one sessions with individuals, and uses visual and audio feedback to tailor its responses. In one study, veterans who submitted to counseling sessions with SimSensei shared personal and mental health concerns they would have withheld from actual human therapists. The system is designed to encourage this kind of open interaction, engaging in active listening by offering affirming or comforting responses, or by noting when the subject pauses or hesitates and asking why. Human therapists carry out these techniques intuitively, yet Lucas and her colleagues found the participants were still more open with the virtual platform.

Northeastern University computer scientist Timothy Bickmore and his team have found similarly surprising connections between people and virtual agents across numerous studies. Typically, Bickmore and his team will try to simulate a counseling or information-sharing session between a patient and a healthcare professional, then measure the effectiveness of virtual agents against their human counterparts. “We try to simulate face-to-face counseling,” Bickmore explains. “We have found over the years that many disadvantaged groups prefer and do much better with agents and robots compared to those with high-level tech literacy. They can get the information better. The people don’t feel they’re being talked down to.”

As robots become an increasingly present and powerful force in our lives, from healthcare to home maintenance to the workplace, researchers are hard at work exploring different ways to strengthen the bonds between people and both virtual and physical agents. Some of the lessons learned in developing virtual platforms apply to embodied robots; in other cases, the rules appear to be different. What has become clear, researchers say, is that there is no simple recipe for developing likable robots.

The Cult of Personality
Before Bickmore and his team develop a new virtual agent for a specific experiment, they first study the nurses, general practitioners, or other professionals who typically conduct the session they are trying to simulate, then base their agents on these individuals. All of the agent’s responses are approved by health professionals in advance. Natural language exchanges would be too risky in a healthcare environment, according to Bickmore. Instead, participants in the studies typically interact with the virtual agent through an iPad or some other large touchscreen device, and the conversation follows a narrative tree. The virtual agent asks a question or delivers a spoken prompt. The participant selects a response from a multiple-choice list. The conversation moves on.

The USC platform, SimSensei, monitors pauses in conversations and tracks changes in tone, gaze aversion, and other social cues. SimSensei then processes all these indicators and autonomously determines the appropriate response. This is a risky interaction, Lucas explains. “What if the system told you