Karl D.D. Willis
My work with Ivan Poupyrev at Disney Research explores novel ways to interact with projected content for gaming and collaborative interaction. We draw from the tradition of pre-cinema handheld “magic lanterns” that animate projected imagery using physical movement of the projection device. Rather than attempt to mitigate the effects of projector movement, we encourage it by using the projector as a gestural input device. Our latest motion beam prototype, SideBySide, for instance, supports multi-user interaction with device-mounted cameras and hybrid visible/infrared light projectors to track multiple projected images in relation to one another [1]. This is accomplished by projecting invisible fiducial markers in the near-infrared spectrum. Users interact together to share digital content, play projected games, or explore educational media. The system does not require instrumentation of the environment and allows multi-user interaction almost anywhere (as illustrated in the photo).
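A rough sketch of the kind of relative-marker tracking involved is shown below. It is illustrative only, not our actual implementation: it substitutes visible ArUco markers from OpenCV's contrib aruco module for the near-infrared markers SideBySide projects, and every identifier, threshold, and camera index is a placeholder assumption.

```python
# Illustrative sketch: detect two fiducial markers in a camera frame and measure
# their relative offset, the basic quantity needed to relate two projected images.
# Assumes an opencv-contrib build exposing the classic aruco API; the marker
# dictionary, IDs, and pixel threshold below are arbitrary stand-ins.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

def marker_centers(frame):
    """Return {marker_id: center (x, y)} for every fiducial found in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is None:
        return {}
    return {int(i): c.reshape(4, 2).mean(axis=0)
            for i, c in zip(ids.flatten(), corners)}

def relative_offset(centers, id_a=0, id_b=1):
    """Vector from marker A to marker B, or None if either is missing."""
    if id_a in centers and id_b in centers:
        return centers[id_b] - centers[id_a]
    return None

cap = cv2.VideoCapture(0)  # device-mounted camera watching the shared surface
while True:
    ok, frame = cap.read()
    if not ok:
        break
    offset = relative_offset(marker_centers(frame))
    if offset is not None and np.linalg.norm(offset) < 50:  # pixels; arbitrary
        print("projected images overlap -> trigger shared interaction")
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```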
• SideBySide gaming.
ENDNOTE:
1. Willis, K.D.D., Poupyrev, I., Hudson, S.E., and Mahler, M. SideBySide: Ad-hoc multi-user interaction with handheld projectors. Proc. UIST ’11. ACM, New York, 2011.
Karl D.D. Willis is a Ph.D. candidate in computational design at Carnegie Mellon University and a lab associate at Disney Research.
Content sharing using projectors as an input device has also been explored. Consider the often time-consuming and frustrating mobile-pairing and file-transfer procedures we have to endure. Instead, the “burn-to-share” system combines mobile projection and optical image capture to carry out these tasks [5].
Besides working from a person’s hand, projectors can be attached to static places (e.g., tables or walls) or fixed onto other moving objects, such as cars or bicycles. All kinds of attachments to people fall into this second category, with projectors mounted on heads, wrists, belts, or pendants. This leaves the hands free to interact with the projected content, as demonstrated in the widely known SixthSense project developed at the MIT Media Lab [6]. This wearable gestural interface augments the physical world around us with projected information. By means of an additional camera, hand gestures can be recognized directly on the augmented objects using computer-vision techniques.
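As a rough illustration of that style of tracking: the SixthSense prototype followed colored markers worn on the fingertips with its camera. The sketch below approximates this with simple HSV color thresholding in OpenCV; the color ranges, distance threshold, and gesture logic are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch of fingertip tracking by color, in the spirit of SixthSense's
# colored finger markers. The HSV ranges and thresholds are assumptions and
# would need tuning for real lighting and marker colors.
import cv2
import numpy as np

# Assumed HSV ranges for two marker colors (e.g., red-ish and blue-ish tape).
MARKER_RANGES = {
    "index": ((0, 120, 120), (10, 255, 255)),
    "thumb": ((100, 120, 120), (130, 255, 255)),
}

def find_fingertips(frame):
    """Return {finger_name: (x, y)} centroids of detected colored markers."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    tips = {}
    for name, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            tips[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return tips

cap = cv2.VideoCapture(0)  # wearable camera looking at the augmented scene
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tips = find_fingertips(frame)
    # A "pinch" could be inferred when the two fingertips come close together.
    if len(tips) == 2:
        (x1, y1), (x2, y2) = tips.values()
        if np.hypot(x1 - x2, y1 - y2) < 40:  # pixels; arbitrary threshold
            print("pinch gesture detected")
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```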