animals by predicting their behavior and location. Its video shows how a well-intentioned motivation can lead to a flawed outcome when it is programmed with narrow intentions (Figure 5).
THE VALUE OF SPECULATION
By revealing the limitations and biases
of AI, Committee of Infrastructure
demonstrates how AI systems are
subject to the same fallibility that
is present in human-to-human
interaction. Not only are code, personality, and data necessary to create a functioning AI; the ethical framework that guides how these systems interact is just as fundamental.
AI cannot be blindly trusted; it should
be subject to the same form of scrutiny
as a bill, law, or ballot measure. The
project illustrates why developers and
researchers should create a record of
motivations, origin, and data sources
visible to everyone [8]. With all of these variables detailed, a broader set of stakeholders can challenge preexisting assumptions and draw on evidence to thoroughly negotiate how these systems influence daily life.
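As one sketch of what such a record could look like in practice, the Python structure below is purely illustrative: the field names are assumptions rather than a published provenance standard (see endnote 8), and the example values paraphrase endnote 7.

# Hypothetical sketch of a machine-readable provenance record for an AI
# system, in the spirit of the data-provenance idea in endnote 8.
# All field names are illustrative assumptions, not a published standard.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    system_name: str                  # e.g., the L.A. DOT representative
    motivation: str                   # why the system was built
    origin: str                       # who built it and under what mandate
    data_sources: list[str] = field(default_factory=list)   # training corpora
    known_limitations: list[str] = field(default_factory=list)

record = ProvenanceRecord(
    system_name="L.A. DOT representative",
    motivation="Argue the department's position in committee deliberations",
    origin="Committee of Infrastructure (speculative design project)",
    data_sources=[
        "City of Los Angeles Transportation Impact Study Guidelines",
        "Traffic Studies Policy and Procedures",
    ],
    known_limitations=["Vernacular reflects only the cited policy texts"],
)
print(record)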
Ultimately, Committee of
Infrastructure proposes that
speculation is essential for working
through the uncertainty, complexity,
and ambiguity of AI problems.
Designers have the opportunity to
make sense of what is not yet truly
understood in both capability and
application. Further, AI issues are often technically complex and esoteric. Committee of Infrastructure uses speculation to show how humor and play can open serious issues like surveillance and governance to examination, allowing designers, developers, citizens, and policymakers to hold a dialogue about how we want AI to influence our daily lives. By using humanistic values,
designers can promote new forms of
interaction that facilitate inclusive and
compassionate experiences. This field
affords new opportunities that call for
radical ways of working.
Endnotes
1. Deep learning is a method that learns from vast amounts of data, using layered neural networks to detect patterns and make predictions.
2. Google Duplex, a virtual assistant released
in 2018, can schedule an appointment so
skillfully that it is impossible to discern
whether a human or computer is speaking.
The assistant pauses, intonates, and
affirms just like a human.
3. CrowdFlower and Amazon's Mechanical Turk are platforms that solicit humans to annotate data for machine learning, which requires copious amounts of labeled data to be implemented.
4. Biases such as stereotyping, attentional
bias, and confirmation bias can become
amplified when incorporated into machine
learning. For example, the COMPAS
algorithm used by the Department of
Corrections in Wisconsin, New York,
and Florida has led to harsher sentencing for African Americans. See Angwin, J., Larson, J., Mattu, S., and Kirchner, L. Machine bias. ProPublica. May 23, 2016; https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
5. Dunne, A. and Raby, F. Speculative
Everything: Design, Fiction, and Social
Dreaming. MIT Press, Cambridge, MA, 2013.
6. Karpathy, A. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy Blog. May 21, 2015; karpathy.github.io/2015/05/21/rnn-effectiveness/.
7. For example, the L.A. DOT representative
learned to speak from City of Los Angeles
Transportation Impact Study Guidelines
and Traffic Studies Policy and Procedures.
8. The idea of data provenance was
introduced at the 2017 AAAI symposium.
Jason Shun Wong is a designer, researcher,
and strategist of interactions and critical media
working in emerging technology. His work
focuses on the intersection of smart cities,
networking, behavioral psychology, object-oriented ontology, civics, Chinese science
fiction, and meme culture.
→ info@jasonshunwong.com
https://jasonshunwong.com/work/committee-of-infrastructure-part-2/
DOI: 10.1145/3274568 COPYRIGHT HELD BY AUTHOR. PUBLICATION RIGHTS LICENSED TO ACM. $15.00
Figure 4. Transcript excerpts created using char-rnn. Human and AI representatives arguing
for their respective organizations in the vernacular learned by training the ML algorithm on
seminal texts important to the ethos of each organization ( https://vimeo.com/250998475).
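For readers curious about the technique behind these transcripts, the following is a minimal sketch of character-level language modeling in the spirit of char-rnn (see endnote 6), written in PyTorch rather than the original Torch/Lua implementation; the corpus filename and hyperparameters are illustrative assumptions, not the project's actual setup.

import torch
import torch.nn as nn

# Hypothetical corpus file; the project trained on seminal organizational
# texts (see Figure 4 and endnote 7).
text = open("seminal_texts.txt").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)   # one vector per character
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)       # next-character logits

    def forward(self, x, h=None):
        z, h = self.rnn(self.embed(x), h)
        return self.head(z), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in text])

# Train: predict each next character from the ones preceding it.
for step in range(2000):
    i = torch.randint(0, len(data) - 129, (1,)).item()
    x, y = data[i:i + 128][None], data[i + 1:i + 129][None]
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(
        logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sample: feed the model's own predictions back in, one character at a
# time, producing text in the vernacular of the training corpus.
idx = torch.tensor([[stoi[text[0]]]])
h, out = None, [text[0]]
for _ in range(300):
    logits, h = model(idx, h)
    idx = torch.multinomial(logits[0, -1].softmax(-1), 1)[None]
    out.append(chars[idx.item()])
print("".join(out))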
Figure 5. PETA’s image-classification algorithm over-optimizes its computer-vision analysis, leading to incorrect classification of vehicles and people as animals (https://vimeo.com/213373126).