the AI by enabling interaction. As we will illustrate, there is a central tension between a concise explanation and an accurate one.
As shown in Figure 2, our survey focuses on two high-level approaches to building intelligible AI software: ensuring the underlying reasoning or learned model is inherently interpretable, for example, by learning a linear model over a small number of well-understood features, and, if it is necessary to use an inscrutable model, such as a complex neural network or deep look-ahead search, then mapping this complex system to a simpler, explanatory model for understanding and control.28
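As a rough illustration of the first approach (our sketch, not code from the article), the snippet below trains a small linear model with scikit-learn over a handful of well-understood features. The loan-approval framing, feature names, and data are invented for the example; the point is that the learned weights themselves serve as the explanation.

```python
# Sketch of the "inherently interpretable" approach: a linear model over a
# few well-understood features. The data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical features

# Synthetic training data: 200 applicants, label = approved (1) or not (0).
X = rng.normal(size=(200, 3))
y = (1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y.astype(int))

# The model *is* its own explanation: one weight per familiar feature.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>15s}: {weight:+.2f}")
print(f"{'intercept':>15s}: {model.intercept_[0]:+.2f}")
```

Because every prediction is just a weighted sum of these features, a user can read the weights directly, which is exactly the transparency discussed next.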
Using an interpretable model provides the benefit of transparency and veracity; in theory, a user can see exactly what the model is doing. Unfortunately, interpretable methods may not perform as well as more complex ones, such as deep neural networks. Conversely, the approach of mapping to an explanatory model can apply to whichever AI technique is currently delivering the best performance, but its explanation inherently differs from the way the AI system actually operates. This yields a central conundrum: How can a user trust that such an explanation reflects the essence of the underlying decision and does not conceal important details? We posit the answer is to make the explanation system interactive so users can drill down until they are satisfied with their understanding.
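To make the second approach concrete, here is a minimal sketch (ours, under assumed scikit-learn tooling and synthetic data) of mapping an inscrutable model to a simpler explanatory one: a gradient-boosted ensemble plays the black box, a shallow decision tree is fit to its predictions, and a fidelity score quantifies how often the surrogate agrees with the model it claims to explain.

```python
# Sketch of the "mapping" approach: approximate an inscrutable model with a
# simpler surrogate and measure how faithfully the surrogate mimics it.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the explanatory model on the black box's *predictions*, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often does the simple explanation agree with the real model?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable rules standing in for the black box
```

A low fidelity score is precisely the warning the conundrum above describes: the simple explanation may be omitting details the black box relies on, which is when interactive drill-down becomes valuable.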
The key challenge for designing intelligible AI is communicating a complex computational process to a human. This requires interdisciplinary skills, including HCI as well as AI and machine learning expertise. Furthermore, since the nature of explanation has long been studied by philosophy and psychology, these fields should also be consulted.
This article highlights key approaches and challenges for building intelligible intelligence, characterizes intelligibility, and explains why it is important even in systems with measurably high performance. We describe the benefits and limitations of GA2M, a powerful class of interpretable ML models. Then, we characterize methods for handling inscrutable models, discussing different strategies for mapping to a simpler, intelligible model appropriate for explanation and control. We sketch a vision for building interactive explanation systems, where the mapping changes in response to the user's needs. Lastly, we argue that intelligibility is important for search-based AI systems as well as for those based on machine learning, and that similar solutions may be applied.
Why Intelligibility Matters
While it has been argued that explanations are much less important than sheer performance in AI systems, there are many reasons why intelligibility is