autonomous vehicles (cars and ships)
points to the potential value of this
method in the training/coaching and
behavior change phase of advanced
driver-assistance systems (ADAS). By
using autoconfrontation, we can expose
mismatches between user experience
and automated technology behavior.
This can provide a cybernetic loop,
which can help to improve the design
of autonomous systems over time.
TIME TO SPEAK HUMAN
By Jody Medich
We designed artificial intelligence—
like all tools—to extend a human
ability: our ability to think. We created a
cybernetic model with most of the onus
on the human to maintain the system
and encode the exchanged information
into computer speak. The tech side of
the cybernetic system has not spent a
lot of time adapting and learning from
the human side, except in this encoded
translated format. The more fluent the
human in tech speak, the more powerful
the things they can accomplish together.
But technology has finally matured
to a level where it is able to speak more
fluently and efficiently to itself than we
will ever be able to do again. Soon it will
write its own code more effectively than
we ever could. And it’s able to encode
the real world into computer speak
without waiting for human translation.
We will never again dominate
conversations in that language.
And that’s great. That language,
and the cybernetic system around it,
is terrible for humans. As a tool, it is
unwieldy and noisy.
Great tools disappear into the hand.
We don’t think of their operation.
Instead, they act as an extension of the
body. To our brains, they appear quite
similar. Swinging a well-made hammer
and walking generate the same level of
pink noise in the brain. Requiring barely any attention, the not-quite-random randomness of the activity is more
like controlled falling than conscious
thought. All relevant information is
sensory data—the native language of
the brain—ensuring that most of the
mental effort to do the tasks is relegated
to background cognitive abilities. The
absence of constant data translation
allows the brain to focus attention on
more difficult, higher-level functions
such as deep creative thinking, rather than operating the tool or the body.
The operation of technological tools
rarely reaches that level of functioning.
Instead, the human-machine interface
is incredibly noisy. Not because of the
amount of data, but rather because of
the encoding of the data into a system
that relies heavily on linguistics and
human working memory to carry tidbits
of information across multiple contexts.
Unfortunately, that part of our brain
is very easily interrupted by context
switches and maintains only contextually
pertinent data. When the context
switches, the brain resets the working
memory for the new situation and wipes
out all the previous information. This is
called the doorway effect, and it’s why you
so easily forget that you want a glass of
water when you walk from the couch into
the kitchen. The level of noise inherent in
the technological tool is deafening, killing
our ability to think. Which is ironic,
considering that AI tools are designed to
amplify our cognition.
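As an illustrative analogy only (not a cognitive model; every name here is hypothetical), a minimal Python sketch can make the mechanism concrete: treat working memory as a small, context-scoped store that is flushed whenever the context changes.

```python
class WorkingMemory:
    """Holds only contextually pertinent data; a context switch resets it."""

    def __init__(self):
        self.context = None
        self.items = {}  # the tidbits carried for the current context only

    def enter_context(self, context):
        if context != self.context:
            self.items.clear()  # the "doorway effect": prior items are wiped
            self.context = context

    def remember(self, key, value):
        self.items[key] = value

    def recall(self, key):
        return self.items.get(key)  # None if lost to a context switch


wm = WorkingMemory()
wm.enter_context("couch")
wm.remember("intent", "get a glass of water")
wm.enter_context("kitchen")  # walking through the doorway
print(wm.recall("intent"))   # -> None: the intent did not survive the switch
```

An interface that forces its user to carry state across such switches pays this flush penalty on every transition, which is exactly the noise the essay describes.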
Technology is finally capable
of thinking. Soon it will reach the
exaflop barrier, equal to the cognitive
computing power of the human
mind. It’s time to radically update the
cybernetic system that we rely on to
interact with technology. We need to
teach technology to speak human—to
encode all that data into a multisensory
physical interface that can disappear
into our hands.
Though cybernetic thinking is rooted
in the past, we believe it may be the way
forward for design, especially as our
products increasingly embody forms
of intelligence. One unifying theme from our conversation was embedding cybernetic thinking into our
design processes and tools. As Dubberly
suggests, our current design methods
may not suffice in the new world of
AI-enabled products. Designers will no
longer explicitly craft the interactions
between a product and a user. Instead,
we will need to design the meta-systems
that design a product’s interactions.
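As a minimal sketch of what such a meta-system might look like (all names are hypothetical, and a simple hill-climbing loop stands in for whatever adaptation method a real product would use), the designer specifies only the objective and the bounds, while the system tunes the concrete interaction parameters from user feedback:

```python
import random

class InteractionMetaSystem:
    """The designer sets the bounds and the goal; the system itself
    chooses and adapts the concrete interaction parameters."""

    def __init__(self, bounds):
        self.bounds = bounds  # designer-set constraints, e.g. {"verbosity": (0, 1)}
        self.params = {k: (lo + hi) / 2 for k, (lo, hi) in bounds.items()}
        self.best_score = float("-inf")

    def propose(self):
        """Perturb the current best parameters, staying inside the bounds."""
        trial = {}
        for k, (lo, hi) in self.bounds.items():
            jitter = random.uniform(-0.1, 0.1) * (hi - lo)
            trial[k] = min(hi, max(lo, self.params[k] + jitter))
        return trial

    def feedback(self, trial, score):
        """The cybernetic step: keep a trial only if users rated it better."""
        if score > self.best_score:
            self.best_score, self.params = score, trial


# Hypothetical feedback signal: users prefer terse prompts after ~3 seconds.
def simulated_rating(p):
    return -abs(p["verbosity"] - 0.2) - abs(p["delay_s"] - 3.0) / 10

meta = InteractionMetaSystem({"verbosity": (0.0, 1.0), "delay_s": (0.0, 10.0)})
for _ in range(500):
    trial = meta.propose()
    meta.feedback(trial, simulated_rating(trial))
print(meta.params)  # drifts toward verbosity near 0.2, delay_s near 3.0
```

The design artifact here is not any single interaction but the bounds, the objective, and the adaptation loop: the meta-system.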
This suggests a need for new tooling
to develop these meta-systems. But as Medich argues, we may need to
fundamentally redesign our current
interaction model with our computing
tools. In doing so, we may need to slow
down and observe ourselves. Some of
the methods that Forster suggests, such
as participant observation and self-
observation, may provide the designer
with a feedback loop on their own
process. Moving forward, we would
argue that more designers should learn
about cybernetics. More important,
those who are developing design tools
should consider augmenting their tools
with feedback systems that can support
designers both with the design project at
hand and their design process.
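One way to read that suggestion concretely, sketched below under assumptions of our own (the API and names are hypothetical, not any existing tool's), is a design tool that logs the designer's actions and mirrors a summary back, in the spirit of the self-observation methods Forster describes:

```python
import time
from collections import Counter

class ProcessLogger:
    """Records a designer's actions so the tool can reflect them back,
    giving the designer a feedback loop on their own process."""

    def __init__(self):
        self.events = []  # (timestamp, action) pairs

    def log(self, action):
        self.events.append((time.time(), action))

    def reflect(self, last_n=50):
        """Summarize recent activity for self-observation."""
        return Counter(a for _, a in self.events[-last_n:]).most_common()


tool_log = ProcessLogger()
for action in ["sketch", "sketch", "undo", "sketch", "test", "undo"]:
    tool_log.log(action)
print(tool_log.reflect())  # e.g. [('sketch', 3), ('undo', 2), ('test', 1)]
```

Even a summary this crude closes a loop: the designer confronts a trace of their own activity, much as autoconfrontation confronts users with traces of theirs.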
1. Wiener, N. Cybernetics: Or Control and Communication in the Animal and the Machine. Vol. 25. MIT Press, 1961.
2. Dubberly, H. and Pangaro, P. Cybernetics and design: Conversations for action. Cybernetics & Human Knowing 22, 2–3 (2015).
3. Glanville, R. Researching design and designing research. Design Issues 15, 2 (1999), 80–91.
4. Pias, C. Cybernetics: The Macy Conferences 1946–1953. The Complete Transactions. Univ. of Chicago Press, 2016.
5. Clot, Y., Faïta, D., Fernandez, G., and
Scheller, L. Entretiens en autoconfrontation
croisée: une méthode en clinique de
l’activité. Éducation permanente 146, 1.
6. Cahour, B. and Licoppe, C. Confrontations
with traces of one’s own activity. Revue
d’anthropologie des connaissances 4, 2 (2010).
7. Wahlström, M., Seppänen, L., Norros, L.,
Aaltonen, I., and Riikonen, J. Resilience
through interpretive practice – A study
of robotic surgery. Journal manuscript
submitted for publication.
8. Boer, E.R., Joyce, C. A., Forster, D.,
Chokshi, M., Mogilner, T., Garvey, E., and
Hollan, J. Mining for meaning in driver’s
behavior: A tool for situated hypothesis
generation and verification. Proc. of
Measuring Behavior 2005: 5th International
Conference on Methods and Techniques in Behavioral Research.
Nikolas Martelaro is a researcher at
Accenture Technology Labs. He is interested
in interaction design for autonomous systems
and design methods. He received his Ph.D.
in mechanical engineering from Stanford
University’s Center for Design Research.
Wendy Ju is an assistant professor of
information science at Cornell Tech interested
in how people respond to interactive and
autonomous technologies. She received her
Ph.D. in mechanical engineering from Stanford
University’s Center for Design Research.
DOI: 10.1145/3274570