A Graphical Sense of Touch
By Pat Hanrahan
One of the major innovations in computing was the invention of the graphical user interface at MIT, SRI, and
Xerox PARC. The combination of computer graphics hardware with a mouse
and keyboard enabled a new class of
highly interactive applications based
on direct manipulation of on-screen objects.
It is interesting to reflect on the relative rate of advance of input and output
technology since the very first systems.
At the time, graphics hardware consisted of single-bit framebuffers outputting to black-and-white displays.
Moving forward, we now have
flat-panel, high-definition, full-color
displays driven by inexpensive, high-performance graphics chips capable
of drawing three-dimensional virtual
worlds in real time.
Graphics hardware draws tens of millions of polygons and tens of billions of
pixels per second. In comparison, most
personal computers ship with a mouse
and keyboard similar to those used at Xerox PARC in the 1970s.
The lack of progress in input technology has caused computers to become sensory deprived. They can
output incredible displays of information, but they receive almost no
information from their surroundings.
Contrast this situation to a living organism. Most organisms have extraordinary abilities to sense their environment, but limited ability to display
information (except by movement; a
few animals like the chameleon and
the cuttlefish can change their skin
color). Perhaps this explains why we
enjoy interacting with our pets more
than with our computers.
Stuart Card, a Senior Research Fellow at Xerox PARC, has observed that
one of the breakthrough ideas in the
graphical user interface was to amplify input relative to output. One
mechanism is to enable input to be on
output. Examples include on-screen
buttons and menus. By leveraging
output technology, we augment lim-
ited input by providing context. Another strategy for enhancing input is
to use pattern recognition to extract
as much information as possible from
the stream of sensed data.
Fortunately, this state of sensory deprivation is beginning to change.
The biggest recent development is
the commercial emergence of multi-touch displays. Traditional display input technology only returns a single X,
Y position at a time. As a result, the user
can only point to a single location at a
time and, consequently, use only one
finger or one hand at a time.
In a multitouch display, multiple
points are sensed simultaneously.
This allows the application to sense
multiple fingers from both hands.
This in turn makes it possible to recognize finger gestures or coordinated two-handed actions.
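The contrast between a single-point device and a multitouch display can be sketched as a data model. This is a minimal illustration only, not the API of any particular device; the `TouchPoint` type and its fields are invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    touch_id: int  # stable id so one finger can be tracked across frames
    x: float
    y: float

# A single-point digitizer (a mouse, or a traditional touch screen)
# reports one position per frame...
single_frame = TouchPoint(0, 120.0, 340.0)

# ...while a multitouch display reports a set of simultaneous points,
# one per finger, which is what makes two-handed input possible.
multi_frame = [
    TouchPoint(0, 120.0, 340.0),  # left-hand index finger
    TouchPoint(1, 480.0, 300.0),  # right-hand index finger
]

def fingers_down(frame):
    # Number of fingers currently touching the display.
    return len(frame)

print(fingers_down(multi_frame))  # → 2
```

The stable per-finger id is the key detail: gesture recognition needs to know which sensed point belongs to which finger from one frame to the next.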
Successful commercial examples
of multitouch displays include the Apple iPhone and the Microsoft Surface.
The iPhone has a unique user interface that is enabled by an embedded
multitouch display. To zoom into a
map, the user simply moves their fingers apart. Beyond touch, the iPhone
has several additional built-in sensory modalities, including a microphone, camera, accelerometer, and a
GPS receiver and compass. Relative to
a modern desktop computer, it is sensory rich and output poor.
The following paper by a team from
Microsoft Research introduces a novel
novel way to build a multitouch interface—the ThinSight system. They
modify a flat-panel display to sense
touch directly. Previously, touch was
sensed indirectly; for example, by
mounting a camera to look at the surface of the display. The camera-based
approaches require a fairly large space,
have problems with occlusion, and are
difficult to calibrate. In the ThinSight
system, the LED backlight that drives
the display is modified to include infrared LEDs and sensors interspersed
amid the visible light emitters.
The display surface can both emit
light and sense position. Distributing sensors throughout the display
substrate yields a compact, efficient sensing system.
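The sensing principle can be illustrated with a toy sketch: given a 2D grid of IR reflectance readings from sensors distributed behind the display, threshold the grid and take the centroid of each bright connected component as a touch position. This is an assumption-laden illustration of the general idea, not the authors' actual processing pipeline; the grid values and threshold are invented:

```python
def touch_centroids(grid, threshold=0.5):
    """Estimate touch positions from a 2D grid of IR sensor readings.

    Thresholds the grid, groups bright cells into 4-connected
    components (roughly one per fingertip), and returns each
    component's centroid as a (row, col) position.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected component of bright cells.
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    cr, cc = stack.pop()
                    cells.append((cr, cc))
                    for nr, nc in ((cr + 1, cc), (cr - 1, cc),
                                   (cr, cc + 1), (cr, cc - 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] >= threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                centroids.append((sum(x for x, _ in cells) / len(cells),
                                  sum(y for _, y in cells) / len(cells)))
    return centroids

# Two fingertips reflecting IR back toward the sensor grid.
grid = [
    [0.0, 0.9, 0.0, 0.0, 0.0],
    [0.0, 0.9, 0.0, 0.0, 0.8],
    [0.0, 0.0, 0.0, 0.0, 0.8],
]
print(touch_centroids(grid))  # → [(0.5, 1.0), (1.5, 4.0)]
```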
This paper is important because
it proposes an innovative design that
addresses a long-standing problem.
However, there is much more work to
do in this area. The authors have added the sense of touch to the display,
but, like real touch sensors, it has limited resolution and the sensed object
must be in contact with the display. In
Pierre Wellner’s pioneering work on
the digital desk, the computer sensed
remote hand positions as well as the
position, type, and content of objects
on the desk. Unfortunately, Wellner’s
system involved bulky cameras and projectors.
Hopefully HCI researchers will expand on the innovative sensing strategy
proposed in this paper. We want compact interactive devices that have rich
sensory capabilities including touch,
sight, and hearing.
Pat Hanrahan is the CANON Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University, Stanford, CA.