Simulate then implement AI. To support the simulation of AI, the toolkit has a marionette system that allows the designer to stand in for the AI in the initial design phases. Using the toolkit, the designer can puppeteer the behavior of the prototype in real time while observing how people experience and interact with it (Figure 8). This is supported by a standard protocol, Open Sound Control (OSC), which turns a phone or tablet into a wireless remote control for triggering predefined behaviors, simulating the AI.
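As a concrete illustration, the listening side of that remote control might look like the following minimal sketch, which assumes the third-party python-osc library; the OSC address /marionette/trigger and the behavior names are hypothetical, not part of the toolkit.

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Predefined behaviors the designer can trigger from a phone or tablet.
BEHAVIORS = {
    "greet": lambda: print("robot: wave and chime"),
    "retreat": lambda: print("robot: move backward"),
}

def on_trigger(address, behavior_name):
    # python-osc passes the OSC address followed by the message arguments.
    action = BEHAVIORS.get(behavior_name)
    if action:
        action()

dispatcher = Dispatcher()
dispatcher.map("/marionette/trigger", on_trigger)

# Listen for OSC messages arriving over the local wireless network.
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()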
The behaviors that the marionette controls can be prebuilt or designer-improvised sequences that include movement, speech, sensing, sound, and light. More subtle kinds of control are also possible. For example, the puppeteer could vary aspects of an algorithm in real time, such as the amount of randomness, the confidence thresholds for decision making, or the mood with which the system responds to people (e.g., patient, provocative, stern). The puppeteer could also swap out different machine-learning models (e.g., trained on different data sets) during a test to see how each performs in actual use with people.
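To make this concrete, here is a minimal sketch of what such live wizard parameters might look like; the names (WizardParams, the model table) and the decision logic are illustrative assumptions, not the toolkit's actual API.

import random

# Stand-ins for ML models trained on different data sets; the puppeteer
# can swap which one is active in the middle of a test.
MODELS = {
    "indoor": lambda reading: reading < 30,
    "outdoor": lambda reading: reading < 60,
}

class WizardParams:
    def __init__(self):
        self.randomness = 0.1            # chance of an improvised action
        self.confidence_threshold = 0.7  # ignore detections below this
        self.mood = "patient"            # e.g., patient, provocative, stern
        self.model = MODELS["indoor"]

    def decide(self, sensor_reading, confidence):
        # Occasionally act unpredictably, at a rate the wizard tunes live.
        if random.random() < self.randomness:
            return "improvise"
        # Low-confidence input is ignored until it clears the live threshold.
        if confidence < self.confidence_threshold:
            return "wait"
        if self.model(sensor_reading):
            return "approach"
        return "prod" if self.mood == "provocative" else "idle"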
This Wizard of Oz (WOZ) approach allows the designer to easily sketch the AI for themselves, for collaborators, and for usability testing, before investing in a functional AI system. Once the design matures, the toolkit allows the designer to replace the WOZ version of the AI with an appropriate functional implementation of algorithmic AI.
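One way to structure that swap, sketched here under the assumption that both versions sit behind a shared interface (the class and method names are hypothetical):

from typing import Optional, Protocol

class SpeechRecognizer(Protocol):
    def last_utterance(self) -> Optional[str]: ...

class WizardRecognizer:
    """WOZ phase: the designer injects 'recognized' text via the remote."""
    def __init__(self):
        self._pending = None
    def inject(self, text):
        # Called when the wizard presses a button on the tablet.
        self._pending = text
    def last_utterance(self):
        text, self._pending = self._pending, None
        return text

class MLRecognizer:
    """Functional phase: a real voice-to-text engine, same interface."""
    def __init__(self, engine):
        self._engine = engine  # any streaming speech-to-text backend
    def last_utterance(self):
        return self._engine.poll()

Because the rest of the prototype only ever calls last_utterance(), the WOZ stub and the functional recognizer are interchangeable without touching the behavior logic.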
In discussions at the AAAI symposium, one insight that emerged is that this iterative, experimental approach to designing AI can help define not only the MVP (minimum viable product), in agile terms, but also the MVD, a term we coined for the minimum viable data needed to train the ML models.
Data collection, tagging, and ML model training are expensive and time consuming. Defining an MVD strategy in the prototyping phase (perhaps in collaboration with a data scientist [5]) could lead to significant reductions in cost and time to market, or to new insights about the characteristics of the data needed.
To facilitate identifying the
appropriate scope and quality of
data, the tool will allow designers to
experiment with simulated ML and
Figure 7. Simplifying the node system: It is common for node-based visual authoring systems
to become large and complex. To reduce this problem, the strategy is to make the nodes do
more. On the left is an early version of the toolkit that uses five nodes to perform a sequence
of actions. On the right is a new approach that consolidates the same sequence of actions into
a single node.
Figure 8. Using a wireless tablet to marionette a behavior in response to simulated voice recognition. Here the designer is pressing a button remotely, away from the user test, making the system seem to react to a voice command (the robot moves backward when the user says "Goodbye") when in fact no voice-to-text system is active (https://youtu.be/vKxXVijCcdk).
Figure 9. The 3D cube moves and behaves like the intended physical device would, driven by the visual node system on the right. Once the behavior is defined, the physical robot enacts the same behaviors (sensing, navigating, moving, lighting up) as the simulated 3D device on the screen; see Figure 1 (https://youtu.be/vKxXVijCcdk).
Figure 10. A simple visually designed algorithm for navigating using a proximity sensor labeled /analog/0. The robot reports the real-time value from the sensor, and the toolkit checks that value to decide what action to take. On the left, the behavior runs if the sensor reports a value of 42 (actually a range around that value) and moves the robot forward. On the right, if the robot gets too close to an object (less than 27), it will turn right and then move forward.
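In code form, the logic this caption describes might read as the following sketch, assuming a hypothetical robot object with proximity(), turn_right(), and drive() methods:

TARGET = 42      # desired reading from /analog/0
TOLERANCE = 5    # "a range around that value"; the exact width is assumed
TOO_CLOSE = 27   # below this, the robot is too close to an object

def step(robot):
    reading = robot.proximity("/analog/0")  # real-time sensor value
    if reading < TOO_CLOSE:
        robot.turn_right()  # avoid the obstacle,
        robot.drive()       # then continue forward
    elif abs(reading - TARGET) <= TOLERANCE:
        robot.drive()       # within range: move forward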