Touchscreens, Meet Buttons
By Chris Harrison and Scott Hudson
While touchscreens allow extensive programmability and have become ubiquitous in today’s gadgetry, they lack the tactile sensations and feedback that physical buttons provide. As a result, these devices require more attention to use than their button-enabled counterparts. Still, touchscreen displays provide the ultimate interface flexibility and thus afford a much larger design space to application developers.
But from the user’s point of view, touchscreens require direct visual
attention. This makes them dangerous in contexts like driving, and
potentially disruptive in casual social situations. Furthermore, even
with visual attention, touchscreen interfaces tend to be slower and more
error-prone than gadgets with keyboards, buttons, knobs, and the like.
The tactile sensations produced by physical buttons often make
them easier to find and use; thus, they require less attention from the
user. Pumping up the music in a car while driving, for instance, doesn’t
require much more than a quick reach and a turn of a knob, or a few taps
of a button. Nevertheless, most buttons are static, both in appearance
and tactile expression, meaning a single button must be used for many functions.
Our goal is to devise a display technique that occupies the space
between these two extremes, offering some graphical flexibility while
retaining some of the beneficial tactile properties. To achieve this, the display must meet three requirements. First, graphics must be shown without interference from the user’s hands or from the tactile control and actuation elements. Second, the screen must sense user input without preventing tactile deformation or hiding the graphics. Finally, it must support tactile expression beyond simple on/off state changes.
What We Built
Our design consists of one or more air chambers that are created by
layering several specially cut pieces of clear acrylic. On top of this, we
drape a thin sheet of translucent latex, held in place with a specifically
structured pattern of adhesive. Through pneumatic actuation, we can
create dynamic physical features and allow a small set of distinct interface elements to occupy the same physical space at different times.
Fabrication is straightforward. Using a laser cutter, we can assemble working prototypes with complex features in under an hour. The displays rely on inexpensive materials: acrylic, glue, and
latex. Air chambers can be negatively or positively pressurized with a
small and inexpensive pump, allowing for easy actuation.
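To give a rough sense of how such actuation might be driven from software, here is a minimal Python sketch. It assumes the pump and a vent valve are switched through relays on two Raspberry Pi GPIO pins; the pin numbers, timing, and hardware arrangement are illustrative assumptions, not details of our prototype.

import time
import RPi.GPIO as GPIO

PUMP_PIN = 17    # hypothetical GPIO pin driving the pump relay
VALVE_PIN = 27   # hypothetical GPIO pin driving a vent-valve relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(PUMP_PIN, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(VALVE_PIN, GPIO.OUT, initial=GPIO.LOW)

def raise_features(inflate_seconds=0.5):
    """Pressurize the chamber so the latex features pop up."""
    GPIO.output(VALVE_PIN, GPIO.LOW)   # keep the chamber sealed
    GPIO.output(PUMP_PIN, GPIO.HIGH)   # run the pump briefly
    time.sleep(inflate_seconds)
    GPIO.output(PUMP_PIN, GPIO.LOW)    # chamber holds its pressure

def flatten_features(vent_seconds=0.5):
    """Vent the chamber so the surface returns to flat."""
    GPIO.output(VALVE_PIN, GPIO.HIGH)
    time.sleep(vent_seconds)
    GPIO.output(VALVE_PIN, GPIO.LOW)

raise_features()     # physical features appear
time.sleep(2.0)
flatten_features()   # surface returns to flat
GPIO.cleanup()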
Because the display is built from clear acrylic and translucent latex, images can be rear-projected onto it, so the graphics are never occluded by the user’s hands. This approach also lets us employ diffused infrared illumination and an infrared camera for multi-touch sensing.
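In a diffused-illumination setup like this, fingertips on the surface reflect more infrared light toward the camera, so touches appear as bright blobs. The following sketch shows how touch points could be extracted from the camera image using OpenCV (4.x); the camera index, brightness threshold, and minimum blob size are illustrative assumptions rather than values from our system.

import cv2

CAMERA_INDEX = 0    # assumed: the infrared camera appears as an ordinary capture device
THRESHOLD = 40      # assumed: brightness increase that counts as a touch
MIN_BLOB_AREA = 50  # assumed: ignore reflections smaller than this many pixels

cap = cv2.VideoCapture(CAMERA_INDEX)
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # reference frame with nothing touching

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Keep only the regions that became noticeably brighter than the background frame.
    diff = cv2.subtract(gray, background)
    _, mask = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= MIN_BLOB_AREA:
            x, y, w, h = cv2.boundingRect(c)
            touches.append((x + w // 2, y + h // 2))  # blob center ~ touch location
    print(touches)  # hand these points to the interface layer

cap.release()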
Chris Harrison (email@example.com) is a PhD student in the Human-Computer Interaction Institute at Carnegie Mellon University. His
research interests primarily focus on novel input methods and interaction technologies, especially those that leverage existing hardware in
new and compelling ways. Over the past four years, he has worked on
several projects in the area of social computing and input methods at
IBM Research, AT&T Labs, and most recently, Microsoft Research.
Scott Hudson (firstname.lastname@example.org) is a professor in the Human-Computer Interaction Institute within the School of Computer Science
at Carnegie Mellon University, where he directs the HCII PhD program.
His research interests have covered a wide range of topics within the area
of user interface software and technology, though his work has always
revolved around the invention and building of things that lead to a better user experience, often indirectly through tools for the UI developer.