This work was conducted as part of a research project at Disney Research, Pittsburgh. This work is based on a paper to
appear in the CHI 2016 proceedings.
[1] Li, H., Ye, C., and Sample, A. P. IDSense: A human
object interaction detection system based on
passive UHF RFID. In Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing
Systems (CHI 2015) (April 18–23, Seoul). ACM, New
York, 2015, 2555–2564.
[2] Schwarz, J., Mankoff, J., and Hudson, S. E. Monte
Carlo methods for managing interactive state,
action, and feedback under uncertainty. In
Proceedings of the 24th Annual ACM Symposium on
User Interface Software and Technology (Oct. 16–19,
Santa Barbara, CA). ACM, New York, 2011, 235–244.
[3] Greenberg, S. and Fitchett, C. Phidgets: Easy
development of physical interfaces through
physical widgets. In Proceedings of the 14th Annual
ACM Symposium on User Interface Software and
Technology (UIST 2001) (Nov. 11–14, Orlando). ACM,
New York, 2001, 209–218.
[4] Laput, G., Brockmeyer, E., Hudson, S. E., and
Harrison, C. Acoustruments: Passive, acoustically-driven, interactive controls for handheld devices.
In Proceedings of the 33rd Annual ACM Conference
on Human Factors in Computing Systems (CHI
2015) (April 18–23, Seoul). ACM, New York, 2015.
Andrew Spielberg is a second-year Ph.D. student at the
Computer Science and Artificial Intelligence Laboratory
(CSAIL) at MIT, where he works at the intersection of
fabrication and robotics. His current research exploits
data-driven methods for optimizing the design and behavior
of 3-D printed robots. His prior research focused on
automated assembly. Prior to joining MIT, he received his B.S.
and master's from Cornell University and spent time at the
Johns Hopkins Applied Physics Laboratory.
Alanson Sample is a research scientist at Disney Research,
Pittsburgh, where he leads the Wireless Systems group.
His research focuses on enabling new guest experiences
and sensing and computing devices by applying novel
approaches to electromagnetics, RF and analog circuit
design, and embedded systems.
Scott Hudson is a professor of human-computer
interaction in the School of Computer Science at Carnegie
Mellon University, where he serves as the founding
director of the HCII Ph.D. program. He received his Ph.D. in
computer science from the University of Colorado in 1986,
and has previously held faculty positions at the University
of Arizona and the Georgia Institute of Technology. Elected
to the CHI Academy in 2006, he has published extensively
on technology-oriented HCI topics, and recently received
the Allen Newell Award for Research Excellence at CMU.
Jennifer Mankoff is an associate professor in the Human-Computer Interaction Institute at Carnegie Mellon
University. She earned her B.A. at Oberlin College and
her Ph.D. in computer science at the Georgia Institute of
Technology. Her research enhances the human experience
with technology. Her goal is to combine empirical methods
with technological innovation to construct middleware
(tools and processes) that can enable the creation of
impactful applications. Most recently, this work has
focused on 3-D printing and its potential for creating
custom assistive technologies for people with disabilities.
James McCann obtained his Ph.D. in 2010 from Carnegie
Mellon University. His research hours are spent at Disney
Research Pittsburgh developing systems and interfaces
that operate in real-time and build user intuition; lately,
he has been dabbling in the creation of physical objects.
He also makes video games as TCHOW llc, including recent
releases "Rktcr" and "Rainbow."
© 2016 Copyright held by Owner(s)/Author(s).
Publication rights licensed to ACM.
the IDs with which those tags should
At the low level, though, we recognized that our predefined widgets may not
be expressive enough for all applications. That's why, for experienced
users, we exposed the lower-level API
for interacting with the probabilistic
program state. In order to make it possible for experienced users to develop
their own physical widgets and their
While this project is far from the
first to allow users to build physical widgets that digital programs can be built
on top of [3, 4], RFID tags are small and
thin and impose very few geometric
constraints, making it simple to place
them anywhere in a design. This, in turn,
makes it easy to grow large, expressive widget libraries. In the future, it
will be exciting to see how RFID tags
and other similarly versatile sensors
allow online communities to grow large
widget libraries, much in the way maker
communities such as Thingiverse currently share pure .STL files.
PUTTING IT ALL TOGETHER
For now, we’ve created a few demo applications to show off the promise of a
toolkit, which is a synthesis of our application pipeline (see Figure 3) and
Sketchup Front-End (see Figure 4).
Using RFID tags on tokens and token slots, we were able to build a wireless, low-latency, physical game of
Tic-Tac-Toe. Here, we used our token
widget, which places tokens opposite
conductive foil to measure whether
or not token/slot pairs are visible. Using the IDs of the tags, we can identify which token is placed, when it is
placed, and where it is placed. When
the widget is added to the design, our
SketchUp extension adds the appropriate token and slot geometry to the digital design files and automatically generates all of the code for tracking this
interaction. The only code the user has
to add is the traditional, deterministic
game logic of Tic-Tac-Toe, plus the visual
and auditory feedback for the players,
all of which can be written in fewer
than 100 lines of C# code.
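As a rough illustration of the kind of tracking logic the generated code performs, consider the sketch below. It is written in Python for brevity rather than the toolkit's C#, and the tag IDs, masking convention, and function names are hypothetical assumptions, not the toolkit's actual generated API:

```python
# Hypothetical sketch: infer Tic-Tac-Toe board occupancy from RFID reads.
# Tag IDs and the masking convention are illustrative assumptions, not
# the toolkit's actual generated C# interface.

# Each board slot carries one tag; a placed token's conductive foil
# masks that slot's tag, so a missing tag ID implies an occupied cell.
SLOT_TAGS = {
    "E200-0001": (0, 0), "E200-0002": (0, 1), "E200-0003": (0, 2),
    "E200-0004": (1, 0), "E200-0005": (1, 1), "E200-0006": (1, 2),
    "E200-0007": (2, 0), "E200-0008": (2, 1), "E200-0009": (2, 2),
}

def occupied_cells(visible_tag_ids):
    """Cells whose slot tag is no longer visible to the reader."""
    return sorted(cell for tag, cell in SLOT_TAGS.items()
                  if tag not in visible_tag_ids)
```

With state inference handled this way, the user-written game logic only ever sees board coordinates, never raw tag reads.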
In another example (see Figure 5),
we used our slider widget, which features a conductive cover that slides
atop a line of RFID tags. Our automatically generated interaction code
estimates the state of the slider based on which tags are visible to the
RFID reader and which are masked by the cover. We 3-D printed one
controller and laser cut the other (just to show we could), and painlessly
coded up a flashy demo of the classic arcade game Pong using
our sliders as wireless controllers.
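The slider estimation described above can be sketched in a few lines. This is a Python illustration under assumed names (the track layout and function are invented for this example, not the generated code's real interface):

```python
# Illustrative sketch: estimate slider position from tag visibility.
# The conductive cover masks the tags beneath it, so the center of the
# hidden run gives the slider's approximate position along the track.
TRACK = ["T0", "T1", "T2", "T3", "T4"]  # tags laid out along the slider

def slider_position(visible_tags):
    """Center index of the masked run, or None if nothing is masked."""
    hidden = [i for i, tag in enumerate(TRACK) if tag not in visible_tags]
    if not hidden:
        return None
    return sum(hidden) / len(hidden)
```

Averaging the hidden indices makes the estimate robust to a cover that spans several tags at once.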
As a final example (see Figure 6),
we demonstrated our RFID tags' motion-sensing capabilities with a
spaceship-based demo. We designed
a simple toy spaceship and placed a
raw tag widget on the design, which,
while not adding new geometry to the
design, generated code for touch and
motion callbacks. This demo is particularly friendly to novice programmers.
Using our API, it was easy to translate
our toy's motion to the on-screen motion of a virtual spaceship,
writing new code only for the on-screen
animation. (Dong Nguyen, we eagerly
animation. (Dong Nguyen, we eagerly
anticipate your Flappy Bird port for our
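The touch/motion callback pattern might look roughly as follows. This is a Python sketch, and the `RawTagWidget` class and its method names are hypothetical stand-ins for the generated API, not its real interface:

```python
# Hypothetical sketch of the touch/motion callback pattern; the class
# and method names are invented for illustration, not the real API.

class RawTagWidget:
    """Stand-in for a raw-tag widget that reports motion events."""
    def __init__(self):
        self._motion_handlers = []

    def on_motion(self, handler):
        self._motion_handlers.append(handler)

    def report_motion(self, dx):
        # In the real system, this would fire when the reader's
        # probabilistic state estimate detects tag motion.
        for handler in self._motion_handlers:
            handler(dx)

class Spaceship:
    def __init__(self):
        self.x = 0

    def move(self, dx):
        self.x += dx  # mirror the toy's physical motion on screen

ship = Spaceship()
widget = RawTagWidget()
widget.on_motion(ship.move)
widget.report_motion(5)  # simulate a detected motion event
```

The user's only job is to supply handlers like `Spaceship.move`; everything upstream of the callback is generated.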
As RFID tags become more robust and
tag readers become cheaper with each
passing year, RFID sensing is rapidly becoming a serious contender for making
physical fabrication projects interactive. RFID sensing provides a platform
that is easy to design with and even
easier to interact with and use. A future
where anybody can quickly fabricate
novel, wirelessly powered game controllers, smart-home devices, personal robots, and more is right around the corner. It will be exciting to see how other
sensors can be hacked through similar data-driven methods to go beyond
their originally intended purposes
in interactive fabrication projects.
Through a combination of inexpensive, easy-to-use sensors and more systems that marry physical design with
digital design, people will finally feel
empowered to make devices based on
how they are meant to be used, and
not just on how they are meant to look.
Novice makers will finally be able to
design and fabricate devices that fully
capture the interactive nature of their
imagination. And when interactive objects are as easy to make as static ones,
the personal fabrication movement
will truly be ready to take off.