Technology | DOI: 10.1145/3283224  Keith Kirkpatrick

Technology for the Deaf
Why aren't better assistive technologies available for those communicating using ASL?
A NURSE ASKS a patient to describe her symptoms. A fast-food worker greets a customer and asks for his order. A tourist asks a police officer for directions to a local point of interest.

For those with all of their physical faculties intact, each of these scenarios can be viewed as a routine occurrence of everyday life, as they are able to interact easily and efficiently without any assistance. However, each of these interactions is significantly more difficult when a person is deaf and must rely on sign language to communicate.

In a perfect world, a person who is well-versed in communicating via sign language would be available at all times and in all places to communicate with a deaf person, particularly in settings where there is a safety, convenience, or legal imperative to ensure real-time, accurate communication. However, it is exceptionally challenging, from both a logistical and a cost perspective, to have a signer available at all times and in all places.

That's why, in many cases, sign language interpreting services are provided by Video Remote Interpreting, which uses a live interpreter connected to the person needing sign language services via a videoconferencing link. Institutions such as hospitals, clinics, and courts often prefer to use these services because they can save money (interpreters bill not only for the actual translation service, but also for the time and expenses incurred traveling to and from a job).

However, video interpreters sometimes do not match the accuracy of live interpreters, says Howard Rosenblum, CEO of the National Association of the Deaf (NAD), the self-described "premier civil rights organization of, by, and for deaf and hard of hearing individuals in the United States of America."

"This technology has failed too often to provide effective communications, and the stakes are higher in hospital and court settings," Rosenblum says, noting that "for in-person communications, sometimes technology is more of an impediment than a solution." Indeed, technical issues such as slow or intermittent network bandwidth often make the interpreting experience choppy, resulting in confusion or misunderstanding between the interpreter and the deaf person.

That's why researchers have been seeking ways in which a more effective technological solution or tool might handle the conversion of sign language to speech, which would be useful for a deaf person to communicate with a person who does not understand sign language, either via an audio solution or a visual, text-based solution. Similarly, there is a desire to allow real-time, audio-based speech or text to be delivered to a person who is deaf, often through sign language, via a portable device that can be carried and used at any time.

Nonetheless, sign languages, such as the commonly used American Sign Language (ASL), convey words, phrases, and sentences through a complex combination of hand movements and positions, which are then augmented by facial expressions and body gestures. The result is a complex communication system that requires a combination of sensors, natural language processing, speech recognition, and machine learning technology in order to capture and process words or phrases.
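To make that pipeline concrete, the following is a minimal, illustrative sketch in Python of how a glove-style device might turn hand-shape readings into a word that could then be handed to a speech synthesizer. The GloveFrame fields, the two hand-shape templates, and the nearest-centroid matcher are hypothetical simplifications; they are not drawn from SignAloud or any other real system, which would rely on trained machine learning models over far richer sensor and video data.

# Illustrative sketch only: a hypothetical glove streams per-finger flex
# values plus hand orientation, and a toy nearest-centroid matcher maps a
# smoothed window of readings to a sign gloss. Real systems use trained
# machine-learning models over much richer sensor (and video) data.
import math
from dataclasses import dataclass
from typing import List

@dataclass
class GloveFrame:
    flex: List[float]   # bend of the 5 fingers, 0.0 (straight) to 1.0 (fully bent)
    roll: float         # hand orientation, degrees
    pitch: float

def features(window: List[GloveFrame]) -> List[float]:
    """Average each sensor channel over a short window to smooth out noise."""
    n = len(window)
    avg_flex = [sum(f.flex[i] for f in window) / n for i in range(5)]
    avg_roll = sum(f.roll for f in window) / n
    avg_pitch = sum(f.pitch for f in window) / n
    return avg_flex + [avg_roll / 180.0, avg_pitch / 180.0]

# Toy "model": two hand-shape templates (values are made up for illustration).
TEMPLATES = {
    "HELLO": [0.1, 0.1, 0.1, 0.1, 0.1, 0.0, 0.2],  # open hand
    "YES":   [0.9, 0.9, 0.9, 0.9, 0.9, 0.0, 0.0],  # closed fist
}

def classify(feat: List[float]) -> str:
    """Return the label of the nearest template by Euclidean distance."""
    def dist(a: List[float], b: List[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: dist(feat, TEMPLATES[label]))

if __name__ == "__main__":
    # Simulate half a second of frames for an open hand and recognize it.
    window = [GloveFrame([0.12, 0.08, 0.10, 0.11, 0.09], 2.0, 35.0) for _ in range(15)]
    gloss = classify(features(window))
    print("Recognized sign gloss:", gloss)  # would be handed off to a text-to-speech engine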
These prototype SignAloud gloves translate the gestures of American Sign Language into spoken English. (Photo by Dennis Wise/University of Washington)