then use a simple dropdown menu to
switch between different ML models
to see how they behave in use (e.g.,
pointing a camera while the model
performs object recognition). In this
way, the prototype can show how
different data and training impact
speed, accuracy, and the user experience.
This will help the designer understand
how the data and model fit project
goals, as well as identify issues such
as unwanted bias, bad data, and
unexpected outcomes.
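To make this concrete, here is a minimal Python sketch of runtime model switching; the model names, the stand-in models, and the recognize() helper are hypothetical illustrations, not part of the toolkit’s API.

import random
import time

# Stand-in "models": each maps a camera frame to a (label, confidence)
# pair. In practice these would wrap real networks, e.g., a small, fast
# model versus a larger, slower, more accurate one.
MODELS = {
    "fast-small": lambda frame: ("cup", random.uniform(0.5, 0.8)),
    "slow-large": lambda frame: ("coffee mug", random.uniform(0.8, 0.99)),
}

def recognize(model_name, frame):
    # Run one frame through the selected model and report latency.
    model = MODELS[model_name]
    start = time.perf_counter()
    label, confidence = model(frame)
    latency_ms = (time.perf_counter() - start) * 1000
    print(f"{model_name}: {label} ({confidence:.2f}) in {latency_ms:.3f} ms")
    return label

# The prototype's dropdown would simply change which name is passed in.
recognize("fast-small", frame=None)
recognize("slow-large", frame=None)

The same pattern extends naturally to logging accuracy against labeled test frames, so the comparison covers quality as well as speed.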
Moreover, the ability to play with
data will, over time, give the designer
a better sense of how data collection,
training, and ML algorithms can be
used most effectively and creatively
while considering ethics and civic
responsibility.
Simulate then implement hardware.
To avoid the time and expense of
working with physical electronics and
mechanics too early in the process, the
toolkit allows the designer to simulate
the behavior of a physical AI device in
3D within Unity. Whether they’d like
to add a simple tabletop device or an
autonomous car, designers can sketch
a range of physical behaviors (changing
shape, speaking/listening, seeing other
objects, moving around, etc.) in 3D on
a screen, through the visual authoring
system (Figure 9).
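Under the hood, a visual authoring system like this typically reduces to a data structure that a runtime walks. As a rough, hypothetical Python sketch (the node names and fields are illustrative, not the toolkit’s actual format):

# A visually authored behavior, reduced to an ordered list of action
# nodes. A real runtime would dispatch each action to the 3D simulation.
behavior_graph = [
    {"action": "speak", "text": "Hello, I can see you"},
    {"action": "move", "direction": "forward", "duration_s": 2.0},
    {"action": "turn", "degrees": 90},
]

def run(graph):
    # Walk the authored graph node by node.
    for node in graph:
        params = {k: v for k, v in node.items() if k != "action"}
        print(f"executing {node['action']} with {params}")

run(behavior_graph)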
To make the virtual experience more immersive, the toolkit will ultimately allow interaction with the 3D simulation through augmented reality, merging the virtual device with physical reality.
When the designer is ready to move to a working hardware prototype, a critical moment in the design process, the toolkit makes the transition relatively seamless. It does so by
supporting a standard set of behaviors
that work in both 3D simulation and
on a physical robotic platform (Figures
10 and 11).
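A minimal Python sketch of that idea follows; the class and method names are assumptions for illustration, not the toolkit’s actual behavior set or transport.

from abc import ABC, abstractmethod

class RobotBackend(ABC):
    # The standard behaviors shared by simulation and hardware.
    @abstractmethod
    def move(self, distance_cm): ...
    @abstractmethod
    def turn(self, degrees): ...
    @abstractmethod
    def speak(self, text): ...

class SimulatedRobot(RobotBackend):
    def move(self, distance_cm):
        print(f"[sim] animate a {distance_cm} cm move in the Unity scene")
    def turn(self, degrees):
        print(f"[sim] animate a {degrees}-degree turn")
    def speak(self, text):
        print(f"[sim] show speech bubble: {text!r}")

class PhysicalRobot(RobotBackend):
    def move(self, distance_cm):
        print(f"[hw] drive motors {distance_cm} cm")  # real motor commands go here
    def turn(self, degrees):
        print(f"[hw] differential-drive turn of {degrees} degrees")
    def speak(self, text):
        print(f"[hw] send text to speech synthesizer: {text!r}")

def approach_and_greet(robot):
    # The same authored behavior runs unchanged on either backend.
    robot.move(30)
    robot.turn(90)
    robot.speak("Hello!")

approach_and_greet(SimulatedRobot())  # sketch on screen first...
approach_and_greet(PhysicalRobot())   # ...then move to hardware

Because approach_and_greet() talks only to the shared interface, the behavior a designer authors on screen carries over to the physical robot without being rewritten.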
A WORKING PROTOTYPE
The Delft AI Toolkit is an experiment
intended to provoke discussion about
tools for AI. But it is also a working
prototype and will soon be usable
by designers to experiment and gain
a stronger understanding of AI as a
material of design.
As the system develops, the plan
is to support behavior trees, internal
machine-learning blocks, Unity’s
reinforcement learning agents, APIs for
cognitive services such as IBM Watson,
and an interface to external ML models
such as Google’s TensorFlow.
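Of these, behavior trees merit a brief illustration. The following is a generic, textbook-style tree in Python, not the toolkit’s implementation; the node classes and the proximity example are assumptions.

class Sequence:
    # Succeeds only if every child succeeds, in order.
    def __init__(self, *children):
        self.children = children
    def tick(self, world):
        return all(child.tick(world) for child in self.children)

class Selector:
    # Succeeds as soon as any child succeeds.
    def __init__(self, *children):
        self.children = children
    def tick(self, world):
        return any(child.tick(world) for child in self.children)

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, world):
        return self.predicate(world)

class Action:
    def __init__(self, effect):
        self.effect = effect
    def tick(self, world):
        self.effect(world)
        return True

# Back away if an object is too close; otherwise wander.
tree = Selector(
    Sequence(Condition(lambda w: w["proximity_cm"] < 10),
             Action(lambda w: print("too close: backing away"))),
    Action(lambda w: print("clear: wandering")),
)
tree.tick({"proximity_cm": 5})
tree.tick({"proximity_cm": 40})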
In addition, plans include providing
working examples in the toolkit that
help designers understand how a range
of machine-learning approaches work
(and don’t work).
The system is available on GitHub (github.com/pvanallen/delft-toolkit-v2), and I plan several releases through fall 2018 and beyond that will be stable enough to use. I encourage designers and others to try it out and provide feedback. You can follow progress at philvanallen.com/portfolio/delft-ai-toolkit/.
ACKNOWLEDGMENTS
To pursue the goal of deeper
engagement for designers in the
AI field, in fall 2017 I took a leave
of absence from the Media Design
Practices MFA program at ArtCenter
College of Design to work on a toolkit
for the design of AI. This was supported
by a research fellowship at TU Delft
in the Netherlands, funded by Design
United. It took place in the Industrial
Design Engineering program, working
closely with the idStudioLab and
professor Elisa Giaccardi. The project
is currently supported by a small grant
from the ArtCenter Faculty Council.
Philip van Allen is a professor at ArtCenter
College of Design interested in new models for
the IxD of AI, including non-anthropomorphic
animistic design. He also develops tools
for prototyping complex technologies and
consults for industry. He received his B.A. in
experimental psychology from the University of
California, Santa Cruz.
→ vanallen@artcenter.edu
Figure 11. The actual robot enacting the algorithm in Figure 10 by reporting values from the proximity sensor and following the logic dictated by the toolkit. From left to right, the robot moves toward the object, turns, and moves forward (https://youtu.be/gn1e1ZpLe2o).