man driver requires an understanding of the cognitive state of the human, including their attention. Even in a more
constrained setting such as crowdsourcing, factors such as task design,
incentives, and training may affect human behavior. A promising direction
for future research is the modeling of
human behavior under different conditions in order to develop suitable
methods for hybrid AI systems to assess human input.
Hybrid intelligence systems aim to
use human input toward developing
better, more effective AI systems. However, in many domains, an AI system
automating tasks with high performance may not be sufficient—the real
value may come from developing AI systems that can act as effective partners
to humans. This requires a paradigm
shift from hybrid systems to hybrid
teamwork. It requires deeper reasoning
capabilities on the machine’s part to
make decisions not only about how it is
accomplishing its task, but also about
how it can support its teammates toward the success of their collaborative
activity. This research direction will be key to taking the influence of AI to the next level: moving beyond the automation of tasks toward AI systems that act as effective partners, supporting humans and working with them in harmony.
[1] Stanford University. Artificial intelligence and life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel. Stanford University, Stanford, CA, September 2016. https://ai100.
[2] Shademan, A., Decker, R.S., Opfermann, J.D., Leonard, S., Krieger, A., and Kim, P. Supervised autonomous robotic soft tissue surgery. Science Translational Medicine 8, 337 (2016).
[3] Kamar, E. Directions in hybrid intelligence: complementing AI systems with human intelligence. Early Career Track, IJCAI, 2016.
[4] Kamar, E., Hacker, S., and Horvitz, E. Combining human and machine intelligence in large-scale crowdsourcing. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 1. International Foundation for Autonomous Agents and Multiagent Systems, 2012.
Dr. Ece Kamar is a researcher at Microsoft Research. She works on a number of subfields of artificial intelligence, including planning, machine learning, and mechanism design. She is passionate about combining machine and human intelligence toward developing real-world applications.
© 2016 Copyright held by Owner(s)/Author(s).
Publication rights licensed to ACM.
systems are left to function without
human assistance, they commonly
make mistakes and, occasionally, fail
altogether. With AI systems playing an ever more prominent role in people's everyday lives by carrying out such critical tasks as driving, mistakes and failures not only erode user trust; they may prove fatal.
The central idea of research on
hybrid intelligence is that instead
of striving to design AI systems that
function alone, our focus should be
on hybrid systems that benefit from
human input [3]. A hybrid system
allows human intelligence to be integrated into the AI system throughout the latter’s life cycle in order to
develop, complement, and evaluate
machine capabilities. The need for
human oversight to overcome the
limitations of AI systems is already
acknowledged in such critical domains as medicine and driving. For
example, the driver of a semi-autonomous car is expected to continuously monitor the decisions of the
machine and correct it when needed
to prevent accidents.
Today, most AI systems that are designed to function alone benefit from
human input only during the development or training cycles. Traditionally, this has entailed the involvement
of system designers and experts. But
with the growing popularity of statistical approaches to AI, crowdsourcing
has become a widespread method for
collecting high-quality labeled data
for supervised learning of predictive
models. Once deployed, though, human involvement with the system is
minimal. However, the performance
of the system may degrade after deployment due to the changing nature
of real-world settings, or biases and
limitations incurred during training
or development. A barrier to the continuous improvement of deployed AI systems is the failure to diagnose and understand the errors they may commit.
Hence, using human input to continuously monitor and evaluate AI systems
is another use case for hybrid intelligence systems.
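As a concrete illustration, continuous monitoring can be as simple as routing a small random sample of a deployed model's predictions to human judges and comparing the audited accuracy against the accuracy measured at training time. The sketch below is a minimal example under assumptions of my own: the `human_judge` function and the baseline numbers are hypothetical stand-ins, not part of any existing system.

```python
import random

def audit_deployment(predictions, human_judge, sample_rate=0.05,
                     baseline_accuracy=0.92, margin=0.05):
    """Flag possible degradation of a deployed model.

    predictions: list of (example, predicted_label) pairs from deployment.
    human_judge: function mapping an example to its true label.
    Returns True if human-audited accuracy falls more than `margin`
    below the accuracy observed at training time.
    """
    # Send only a small random fraction of traffic to human judges.
    audited = [p for p in predictions if random.random() < sample_rate]
    if not audited:
        return False  # nothing sampled yet, so no evidence of degradation
    correct = sum(human_judge(x) == y_hat for x, y_hat in audited)
    return correct / len(audited) < baseline_accuracy - margin
```

In practice the sample rate and margin trade off the cost of human labor against how quickly drift is detected, which is exactly the kind of resource decision hybrid systems must reason about.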
Additionally, integration of human
input into AI systems at execution time
can create reliable systems that are not
bounded by current limitations. While
interacting with users, a hybrid system
can offload computational tasks to
humans on demand. Human involvement can prevent the mistakes and failures that would result from the system working alone, and the feedback from humans can drive an improvement cycle from which the system continuously learns.
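A minimal sketch of this offloading loop, under assumptions of my own: a hypothetical `model` that returns a confidence score with each prediction, and a hypothetical `ask_human` helper. Low-confidence cases are deferred to the human, and the human's answers are logged as feedback for retraining.

```python
feedback_log = []  # (example, human_label) pairs kept for later retraining

def hybrid_predict(model, ask_human, example, threshold=0.9):
    """Return (label, source), deferring to a human when the model is unsure."""
    label, confidence = model(example)
    if confidence >= threshold:
        return label, "machine"
    # Offload the hard case to the human helper and keep the answer
    # as feedback, closing the improvement cycle described above.
    human_label = ask_human(example)
    feedback_log.append((example, human_label))
    return human_label, "human"

# Stand-in components for illustration only:
model = lambda x: ("stop", 0.97) if x == "clear sign" else ("stop", 0.55)
ask_human = lambda x: "yield"
```

Here `hybrid_predict(model, ask_human, "occluded sign")` would return the human's answer and append the case to `feedback_log`, while confident cases are handled by the machine alone.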
The vision of hybrid intelligence systems introduces a number of challenges for AI research. Human intelligence
is a valuable resource associated with
costs and constraints. The quality and
availability of human input may vary
depending on many factors, including
the mental state of the human helper.
Hybrid AI systems need to be equipped
with reasoning capabilities that allow them to make effective decisions
about accessing human intelligence.
Previous work has shown that a combination of machine learning and decision-theoretic optimization techniques can be used to make informed decisions about accessing human input at training time [4]. Hybrid AI systems present
an opportunity for generalizing these
techniques to the execution and evaluation phases.
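In the decision-theoretic spirit of that line of work, a toy version of the "should I ask?" decision can be written as an expected-value comparison. The assumptions below are mine for illustration: 0/1 utility for a correct answer, a perfectly accurate human, and a fixed query cost. A real system would also model human error, response time, and availability.

```python
def should_ask_human(posterior, query_cost):
    """Ask iff the expected accuracy gain from a (perfect) human answer
    exceeds the cost of asking.

    posterior: dict mapping each candidate label to the model's probability.
    query_cost: cost of one human query, on the same 0-1 utility scale.
    """
    p_best = max(posterior.values())   # expected utility of acting alone
    expected_gain = 1.0 - p_best       # a perfect answer recovers the rest
    return expected_gain > query_cost
```

An uncertain model, say with posterior `{"stop": 0.55, "go": 0.45}`, asks when the query cost is 0.1; a confident one, say `{"stop": 0.98, "go": 0.02}`, does not.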
An AI system grappling with the decision of whether to access human help needs to have an understanding of the capabilities of its helper, and of the costs and constraints associated with asking for
help. As opposed to the computational
resources used in the development
of the system, human helpers do not
come with a specification. In a setting
such as semi-autonomous driving, effectively seeking assistance from a hu-