This account of ways in which computers have interacted with us via our ears is a welcome first,
an over-the-transom, unsolicited Timelines contribution. —Jonathan Grudin
Sound in Computing:
A Short History
Paul Robare
Carnegie Mellon University | paulrobare@cmu.edu
Jodi Forlizzi
Carnegie Mellon University | forlizzi@cs.cmu.edu
[1] “The Music played by CSIRAC.” University of Melbourne, Melbourne School of Engineering, Department of Computer Science and Software Engineering, 15 May 2006. <http://www.csse.unimelb.edu.au/dept/about/csirac/music/music.html>
John Cage is said to have once sat for a time in an anechoic
chamber. Upon exiting, Cage remarked to the engineer on duty
that after a while he had been able to perceive two distinct sounds,
one high pitched and one low. The engineer then
explained that the high-pitched sound was Cage's nervous system and the low one his circulatory system.
There really is no escaping sound.
Recently, one of us embarked on a similar
experiment with a desktop computer. A surprising number of sounds emanated from the machine:
the whirr of the fans, the clicking of the drives, and
a whole suite of sounds from the interface, all of which
had previously gone unnoticed. Even more surprising, many of these sounds play informational roles.
For example, the fans speed up when the processor
is working hard; the quality of the sound changes
just before the graphical interface presents an alert.
Though it rarely receives much formal consideration, sound has been a part of computing for as
long as digital computers have existed. These days
many designers must, at some point in their careers,
make decisions about the use of sound in products,
yet there are few resources on how
sounds can and should be used. As a result, sound
is generally underutilized by designers and underappreciated by users. To help establish a framework for understanding sound in digital products,
this article briefly traces the historical use of
sound in computing.
The 1940s, ’50s and ’60s—Big Systems and Constant Sounds
Many of the earliest computers were equipped with
a speaker (known as a “hooter”) that could sonify
the machine’s operations. The hooters were wired
directly into a mainframe’s accumulator (or other
likely spot); they produced rhythmic noises that
engineers and operators could passively monitor.
In the event of a bug, the noises would change. The
interaction paradigm used was thus quite similar
to that of a car engine: The user passively monitors
a steady-state noise for any change in sound that
might signal malfunction.
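To make that paradigm concrete, here is a toy sketch of the sonification at work. It is our illustration, not period code: the function name and the choice to tap the accumulator's low bit are assumptions, and a real hooter was analog hardware, not software. The point is simply that a speaker fed from one bit of the accumulator turns a program's rhythm into sound, so a healthy loop clicks steadily while a hung program falls silent.

```python
# Toy simulation of a "hooter" (our illustration, not period code).
# A speaker fed from the low bit of the accumulator clicks whenever
# that bit flips, so a program's rhythm becomes audible.

def hooter_trace(accumulator_values):
    """Yield 'x' for a click (low bit flipped) or '.' for silence."""
    last_bit = 0
    for value in accumulator_values:
        bit = value & 1
        yield "x" if bit != last_bit else "."
        last_bit = bit

# A healthy loop gives a steady rhythm; a hung program changes it.
healthy = [i * 3 for i in range(16)]   # accumulator counting upward
stuck = [42] * 16                      # accumulator frozen by a bug
print("".join(hooter_trace(healthy)))  # regular clicking
print("".join(hooter_trace(stuck)))    # sudden silence
```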
It was not long before smart people figured out
that if a machine could make noise, it could make music.
In 1951 the CSIR Mk1 (a hulking mainframe in Sydney,
Australia) was used to give the world’s first public
computer music performance. According to the
University of Melbourne, “The sound production
technique used on the CSIR Mk1 was as crude as is
possible to imagine on a computer…. However, this
occurred when there were no digital-to-analog converters, there was no digital audio practice and little
in the way of complete digital audio theories” [1].
Musicians, of course, cannot leave a new instrument alone once it is discovered, and they thus continued to
invent new ways of producing music with computers. In 1964 or 1965, for instance, undergraduate
students at Reed College discovered a way to coax
music out of the IBM 1620 and, later, the faster IBM
1130 (a table-sized machine with a whopping 8K
of memory and a punch-card-only interface). The
computers generated RF interference that could be
picked up and sonified by a nearby radio. A student
named Peter Langston developed a way to use this
“feature” to produce music, and a friend named
Lenny Schrieber wrote an algorithm to produce a
specific frequency N with code that branched back
to the top of a loop every k/N cycles (k being some
constant). Langston recalls that during this time a
duet between a classical violinist and the computer
was performed for a local news program.
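The arithmetic behind Schrieber's trick is worth spelling out. If k is read as the machine's cycle rate in cycles per second (our interpretation of the "constant"), then a loop that branches back every k/N cycles repeats N times per second, and the RF burst emitted on each pass registers on the radio as a tone of N Hz. The sketch below is ours, in Python rather than IBM 1620 code; the constants K and CYCLES_PER_PASS are invented for illustration, and a modern machine will not obligingly leak audible RF, so only the timing arithmetic is shown.

```python
# Our reconstruction of the k/N timing trick (not Schrieber's original
# IBM 1620 code). A loop whose full pass takes k/N machine cycles
# repeats N times per second, so the RF noise emitted on each pass
# was heard on a nearby radio as a tone of frequency N.

K = 1_000_000        # assumed machine speed, in cycles per second
CYCLES_PER_PASS = 5  # assumed cost of one trip through the inner body

def loop_length(n_hz: float) -> int:
    """How many times to repeat the inner body so that one outer
    pass consumes k/N cycles, i.e. one period of the tone."""
    cycles_per_period = K / n_hz  # the article's k/N
    return max(1, round(cycles_per_period / CYCLES_PER_PASS))

def play(n_hz: float, seconds: float) -> None:
    """Busy-wait so that the outer branch occurs N times per second."""
    body_repeats = loop_length(n_hz)
    for _ in range(int(n_hz * seconds)):  # one outer pass per period
        for _ in range(body_repeats):     # burn k/N cycles
            pass                          # on the 1620, this leaked RF

play(440.0, 1.0)  # roughly one second of A440, were the RF audible
```

On this reading, playing a melody reduces to a schedule of loop lengths, one per note.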
The 1970s—Human Factors and Sound Aesthetics