events to produce entire auditory worlds. As more
personal computers were produced with built-in
surround sound, games took advantage of this
technology to fully spatialize audio, resulting in
games that cannot be played without sound.
Since 1992, the International Conference on
Auditory Display (ICAD) has been held annually.
The first conference’s organizer described ICAD
as having grown “out of a desire to pull together
researchers in auditory data display and to fill this
mutual need for sharing results, stimulating new
ideas, and identifying the field as a whole.” Judging
from the published proceedings, which included
an appendix of informal comments regarding the
conference, some attendees were disappointed in
the state of the field. Sarah Bly stated that “it was
discouraging that we’ve not made greater advances
in the area of sonification,” and Bill Buxton wrote,
“I continue to be disappointed at the lack of science and experimental/human validation of so
much of the work.” By 1997, however, members of
ICAD wrote that “[t]he question is no longer whether it works, or even whether it is useful, but rather,
how one designs a successful sonification.” The
discussion thus moved from questions of whether
or not sound had a legitimate functional role to a
focus on design methods. ICAD continues to serve
as the primary forum for presenting research in
the area [5].
The Macintosh and Windows operating systems
of the 1990s began allowing users to customize the
sounds generated by desktop interactions. Users
could choose the sounds associated with various events and (perhaps most important) disable
sounds completely. Interestingly, few of the sounds
produced by the Apple and Microsoft operating
systems appear to have been designed to take full
advantage of the published research. Standard
interface sounds generally consist of semi-abstract
beeps and clicks that alert the user to system and
application errors and events (such as clicking on
an icon), but they rarely provide any information
that is not readily available visually.
Since the mid-1990s, there has been a steady
stream of conference papers related to sound and
computing but few major publications. One notable
essay is Stephen Brewster’s chapter in the 2008
Human-Computer Interaction Handbook, which provides
an extremely useful and comprehensive review of
research related to sound and interfaces [6].
Future work in sound is likely to focus on mobile
devices, which have unique constraints that may
make them well suited to sound-enhanced (or
purely auditory) interfaces. Brewster concludes
his review of nonspeech sound research by stating
that “nonspeech sound has a large part to play in
the near future … in mobile computing devices.”
Bill Gaver says, “It seems to me that there’s lots
of opportunity for exploring sound in design still.
Certainly, auditory feedback plays an important, if
fairly crude, role in mobile devices.”
Unfortunately, generalized guidelines regarding
the use of sound have yet to emerge. This lack of
design theory is all the more unfortunate because
in the past 10 years, technology has advanced to
the point where sampled and synthesized sounds
can be easily included in almost any product at low
cost. An ever-increasing range of products, from
mobile phones to washing machines, includes
myriad sounds, and thus an ever-increasing number of
practicing interaction designers make decisions
about sounds in products.
Designers should begin considering the sounds
of their products at the concept-generation stage.
If sound is not considered until late in the design
process, it will not play a larger role than at present, and it is unlikely to prove any more useful
than it has in the past.
[5] Kramer, G., ed. Auditory Display: Sonification,
Audification, and Auditory Interfaces. Santa Fe
Institute Studies in the Sciences of Complexity,
557-58. Reading, MA: Addison Wesley Longman,
1994; and Kramer, G., et al. Sonification Report:
Status of the Field and Research Agenda.
Arlington, VA: National Science Foundation.
Special thanks to Bill Gaver, Lenny Shrieber, Peter Sisk,
Mei Lin, Bill English, and an anonymous editor for their
help with and contributions to this article.
ABOUT THE AUTHORS
Paul Robare is currently pursuing a master’s in interaction design at
Carnegie Mellon University. His interests range
from nonspeech sound and multimodal interfaces
to service strategy, experience design, and games.
He maintains a portfolio at www.paulrobare.com.
Jodi Forlizzi is an associate professor of design
and human-computer interaction and the A. Nico
Habermann Chair of Computer Science at Carnegie
Mellon University, and an interaction designer contributing to design theory and practice. Her theoretical research examines theories of experience,
emotion, and social product use as they relate to interaction design.
Other research and practice centers on notification systems ranging
from peripheral displays to embodied robots, with a special focus
on the social behavior evoked by these systems. Jodi was trained
as an illustrator at Philadelphia College of Arts and as an interaction
designer at the Carnegie Mellon School of Design. She holds a self-defined Ph.D. in design in HCI from Carnegie Mellon.
[6] Brewster, S. “Nonspeech Auditory Output.”
In The Human-Computer Interaction Handbook,
edited by Andrew Sears and Julie A. Jacko. 2d
ed. Boca Raton, FL: CRC Press, 2007. For an
exhaustive survey of research into sound and
interfaces prior to the mid-1990s, see Bill Buxton’s
“Speech, Language & Audition,” chap. 8 in
Readings in Human-Computer Interaction: Toward
the Year 2000, edited by R.M. Baecker, J. Grudin,
W. Buxton, and S. Greenberg. San Francisco:
Morgan Kaufmann, 1995.
January + February 2009
© 2009 ACM 1072-5220/09/0100 $5.00