haptic stimuli, and feed the information to the user through another sensory channel. While the utility of these devices has long been debated in the vision- and hearing-impaired communities, recent advances suggest that sensory substitution technologies are finally starting to deliver on their promise.
A Rich History
The first sensory substitution devices originated long before computing machines and smartphone apps. The white cane widely used by people who are blind or have low vision alerts users to the presence of obstacles through tactile feedback. Similarly, Braille converts visual text into tactile text that can be read by touch. But a major technological shift resulted from the efforts of neuroscientist Paul Bach-y-Rita, who developed a prototype device that converts video into tactile feedback.
Today, sensory substitution devices come in a variety of forms. The BrainPort, for example, translates visual information from a forehead-mounted camera into tactile feedback, delivering stimuli through 400 electrodes on a thumbprint-sized pad that users place on their tongue. Other aids include the vOICe, which translates camera scans of an environment into audible soundwaves, allowing users to hear obstacles they cannot see.
Although these devices use different approaches, they are capitalizing on the same general principle. “Most of the hard computing work is being done in the brain,” explains experimental psychologist Michael Proulx of the University of Bath. “What we’re doing is relying on the brain’s ability to take information in any sensory format and make sense of it independent of that format.”
Practical Uses
Neuroscientist David Eagleman and his colleagues at Neosensory, a Silicon Valley startup, are developing a new device, the Buzz, that translates ambient sounds such as sirens or smoke alarms into distinct patterns of vibrations that pulse through and across the device’s eight motors. A smartphone microphone picks up the sound, then passes it through an app that mimics the role of the inner ear. One algorithm separates the sound into its component frequencies (as our own ears do), while others cancel out unrelated noise, such as the hum of an air conditioner. The app then transforms this change in frequencies over time into a pattern of vibrations that alters every 20 milliseconds, rolling through or pulsing on the Buzz.

“With a siren, you feel it going back and forth on your wrist because there are different frequencies involved,” Eagleman explains. “A knock on the door is easy. You feel the knock on your wrist.”
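In rough outline, that pipeline could look like the Python sketch below, which splits each 20-millisecond frame into eight frequency bands and maps each band’s energy to one of the eight motors. The sample rate, the fixed noise estimate, and the equal-width bands are simplifying assumptions for illustration; this is not Neosensory’s actual algorithm.

    import numpy as np

    SAMPLE_RATE = 16_000                      # assumed microphone sample rate
    FRAME_LEN = SAMPLE_RATE * 20 // 1000      # 20-ms frames, per the article
    NUM_MOTORS = 8                            # the Buzz has eight motors

    def frame_to_motor_levels(frame, noise_floor):
        """Map one 20-ms audio frame to eight motor intensities in [0, 1].

        Illustrative only: a real implementation would use a cochlea-like
        filter bank and an adaptive noise model rather than a plain FFT
        and a fixed noise estimate.
        """
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        # Crude noise cancellation: subtract an estimate of steady
        # background energy (an air conditioner hum, say), clipping at zero.
        spectrum = np.clip(spectrum - noise_floor, 0.0, None)
        # One frequency band per motor; band energy sets vibration strength.
        bands = np.array_split(spectrum, NUM_MOTORS)
        energies = np.array([band.sum() for band in bands])
        peak = energies.max()
        return energies / peak if peak > 0 else energies

    # Example: a silent 20-ms frame produces no vibration on any motor.
    silence = np.zeros(FRAME_LEN)
    print(frame_to_motor_levels(silence, np.zeros(FRAME_LEN // 2 + 1)))

Feeding successive frames through such a function yields a motor pattern that shifts every 20 milliseconds as the sound’s frequency content changes, which is the effect Eagleman describes for a siren sweeping across the wrist.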
The Buzz and its predecessor, a more robust wearable vest, are also designed to be affordable: the wrist-worn version is projected to cost less than $400. Cost is a major concern because sensory substitution devices are not reimbursed through health insurance in the U.S., and studies have found that people with disabilities often have lower rates of employment and income, and may not be able to afford technologies like the BrainPort, which retails for $10,000.
“For people with sensory disabilities, none of these technologies are covered” by insurance, says Deborah Cook, Washington Assistive Technology Act Program technical advisor and director of the Older Blind Independent Living Program at the University of Washington. “You can get a wheelchair paid for, but you can’t get a new visual or auditory device reimbursed.”
Cook also argues that many sensory substitution devices are too focused on navigation. But IBM computer scientist Chieko Asakawa believes there is still an unmet need in this space, and that such technologies have the potential to allow people who are blind to explore unfamiliar areas such as schools, train stations, airports, and more. “It’s not fun if I go to a shopping mall by myself, for example,” says Asakawa, who lost her sight at age 14. “If there are many people in the mall, it’s very difficult to move around with the white cane.”
Asakawa and her collaborators at IBM Tokyo and Carnegie Mellon University have developed a new system, NavCog, that deploys Bluetooth beacons throughout interior spaces such as academic buildings and, in one case, a public shopping mall. The beacons connect to a smartphone app, which guides the user via voice assistance. “In the mall,” she explains, “I can find out which shop is next to me while I’m walking, such as a coffee shop on the left or a sushi restaurant on the right. That’s useful information.”
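The core beacon-to-voice idea can be sketched in a few lines of Python. The beacon IDs and announcements below are invented for illustration, and NavCog’s real localization fuses readings from many beacons rather than simply picking the single strongest signal.

    from typing import Dict, Optional

    # Hypothetical beacon map; IDs and messages are invented for illustration.
    POINTS_OF_INTEREST = {
        "beacon-21": "Coffee shop on your left.",
        "beacon-22": "Sushi restaurant on your right.",
        "beacon-23": "Elevator straight ahead.",
    }

    def announce_nearest(scan: Dict[str, float]) -> Optional[str]:
        """Return the voice prompt for the strongest known beacon.

        `scan` maps beacon IDs to received signal strength (RSSI, in dBm).
        This sketch treats the strongest signal as the nearest beacon.
        """
        known = {bid: rssi for bid, rssi in scan.items() if bid in POINTS_OF_INTEREST}
        if not known:
            return None
        nearest = max(known, key=known.get)   # higher RSSI ~ closer beacon
        return POINTS_OF_INTEREST[nearest]

    # Example: beacon-22 has the strongest signal, so its prompt would be spoken.
    print(announce_nearest({"beacon-21": -78.0, "beacon-22": -60.5, "beacon-99": -55.0}))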
Siri’s Shortcomings
Devices that help individuals enhance their productivity in the workplace are also critical. Computer scientist Matt Huenerfauth of the Linguistic and Assistive Technologies Laboratory at the Rochester Institute of Technology (RIT) is working with researchers from the National Technical Institute for the Deaf (NTID) to see if Automatic Speech Recognition (ASR) technology of the sort that powers Siri, Alexa, and Cortana could be used to generate captions in real time during meetings. Often, people who are deaf or hard of hearing either skip business meetings and wait for summaries from other attendees, or sit through the meetings and miss numerous side conversations. However, ASR technology is imperfect, and a real-time captioning system with errors in the text can be confusing. Huenerfauth’s team is investigating whether highlighting words that the ASR is not confident it recognized correctly (using italicized fonts) will help users understand which fragments of a transcript they can trust.
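A minimal sketch of that idea, in Python, might look like the following; the confidence threshold and the markup used for italics are placeholders rather than details from the RIT/NTID study.

    from typing import List, Tuple

    def render_caption(words: List[Tuple[str, float]], threshold: float = 0.8) -> str:
        """Render ASR output, italicizing words the recognizer is unsure about.

        `words` is a list of (word, confidence) pairs such as an ASR engine
        might emit; the 0.8 threshold is an arbitrary placeholder.
        """
        rendered = []
        for word, confidence in words:
            if confidence < threshold:
                rendered.append(f"<i>{word}</i>")   # flag low-confidence words
            else:
                rendered.append(word)
        return " ".join(rendered)

    # Example: "quarterly" was recognized with low confidence.
    print(render_caption([("the", 0.99), ("quarterly", 0.41), ("report", 0.95)]))
    # -> the <i>quarterly</i> report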
Computer scientist Raja Kushalnagar of Gallaudet University, along
with colleagues from the University of