A key reason even advanced technologies that use machine learning to ingest and learn the many variations of sign language do not work as seamlessly as a live signer is the lack of participation by deaf or hard-of-hearing people in the development process, which means key linguistic, stylistic, and usability concerns of signers go unaddressed.
“That’s a huge problem,” Rosenblum says. “Several companies do have deaf and hard-of-hearing engineers, scientists, or other highly trained professionals, but this is more of an exception than the rule.”
Perhaps the biggest reason technology for the deaf is not as functional as it could be is the lack of regulatory requirements covering non-signer-to-signer communications, and vice versa. Improvements in accessibility within the television and video industries were driven by regulation, and may serve as an example of how real-time communications may eventually be regulated.
“For individuals with hearing loss, videos need captioning or a transcript of what is verbally communicated,” says Nancy Kastl, Testing Practice Director at the digital technology consulting firm SPR. “For individuals with vision loss, the captioning or transcript (readable by a screen reader) should include a description of the scenes or actions, if there are segments with music only or no dialogue.”
Likewise, Rosenblum says that “many of the best advances in technology for deaf and hard of hearing people have been because laws demanded them,” noting that the text and video relay systems provided by telecommunications companies were very basic and voluntary prior to the adoption of the Americans with Disabilities Act (ADA) of 1990.
Furthermore, the closed captioning of television content for the hearing impaired “in the original analog format was mandated by the Telecommunications Act of 1996, and expanded to digital access online through the 21st Century Communications and Video Accessibility Act of 2010, as well as by the lawsuit of NAD v. Netflix in 2012,” Rosenblum says, noting that the suit required Netflix to make 100% of its streaming content available with closed captions.
Further Reading

Cooper, H., Holt, B., and Bowden, R.
Sign Language Recognition,
Visual Analysis of Humans, 2011,
http://info.ee.surrey.ac.uk/Personal/H.M/research/papers/SLR-LAP.pdf

Erard, M.
Why Sign Language Gloves Don’t Help Deaf People,
The Atlantic, Nov. 9, 2017,
https://www.theatlantic.com/technology/archive/2017/11/why-sign-language-gloves-dont-help-deaf-people/545441/

25 Basic ASL Signs For Beginners,
American Sign Language Institute,
Oct. 22, 2016, https://www.youtube.com/watch?v=Raa0vBXA8OQ
Keith Kirkpatrick is principal of 4K Research &
Consulting, LLC, based in Lynbrook, NY, USA.
© 2018 ACM 0001-0782/18/12 $15.00
ACM News

Detecting Illness by Watching You Type

Researchers are experimenting with artificial intelligence (AI) software that can tell whether you suffer from Parkinson’s disease, schizophrenia, depression, or other disorders by watching the way you type.
In a University of Texas study published earlier this year, for example, researchers were able to identify typists suffering from Parkinson’s disease by capturing how study subjects worked a keyboard over time, then running that data through pattern-finding AI software.
“We envision a future where keystroke and touch-screen tracking will become a standard metric in any digital device and added to your electronic medical record,” says Teresa Arroyo Gallego, a co-author of the study.
Meanwhile, researchers involved in similar work at Palo Alto, CA-based healthcare innovation company Mindstrong Health say they’ve been able to diagnose schizophrenia by analyzing typing keystroke patterns, as well as by looking closely at scrolling, swiping, and tapping behaviors.
“We believe that digital biomarkers are the foundation for measurement-based mental health care, for which there is a massive unmet patient need,” says Mindstrong Health founder and CEO Paul Dagum.
Researchers at Hillsborough, CA-based NeuraMetrix are using keystroke analysis to detect afflictions including Alzheimer’s disease, depression, Huntington’s disease, and REM sleep disorder.
In the Texas study, researchers say they engineered software that could capture, down to the millisecond, how long a typist held down a key before moving to the next key, as well as capturing ‘flight time,’ the number of milliseconds it takes a typist to actually move a finger from one key to the next.
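To make the idea concrete, here is a minimal sketch of how hold and flight times might be derived from raw key events; the event format, field names, and the timing_features function are illustrative assumptions, not the Texas team’s actual instrumentation:

    # Sketch: deriving hold ("dwell") and flight times from raw key events.
    # Each event is (timestamp_ms, key, action); this format is an assumption,
    # not the instrumentation actually used in the Texas study.

    def timing_features(events):
        """Compute per-key hold times and between-key flight times (ms)."""
        hold_times, flight_times = [], []
        down_at = {}          # key -> timestamp of its most recent key-down
        last_release = None   # timestamp of the most recent key-up

        for ts, key, action in sorted(events):
            if action == "down":
                if last_release is not None:
                    # Flight time: previous key released -> next key pressed.
                    flight_times.append(ts - last_release)
                down_at[key] = ts
            elif action == "up" and key in down_at:
                # Hold time: how long this key was held down.
                hold_times.append(ts - down_at.pop(key))
                last_release = ts
        return hold_times, flight_times

    # Example: typing "hi" -- h held 95 ms, 120 ms of flight, i held 80 ms.
    events = [(0, "h", "down"), (95, "h", "up"),
              (215, "i", "down"), (295, "i", "up")]
    print(timing_features(events))  # ([95, 80], [120])

Flight time is measured here from one key’s release to the next key’s press; press-to-press intervals are another common convention.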
Armed with that data, the researchers needed only to train the AI software to find typing patterns shared by people suffering from the disease, then run new data through the trained software to find matches, which they did.
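In outline, that training-and-matching step could look like the following sketch, which uses scikit-learn’s random forest as a stand-in for whatever model the study actually used; all data here is synthetic and purely illustrative:

    # Sketch of the pattern-finding step, with scikit-learn's random forest
    # standing in for the study's actual model; all data below is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def fake_session(slow):
        """Synthesize one typing session: (hold_times, flight_times) in ms.
        'Slow' sessions loosely mimic longer, more variable timings;
        the numbers are purely illustrative, not study data."""
        n = 200
        hold = rng.normal(110 if slow else 85, 30 if slow else 15, n)
        flight = rng.normal(260 if slow else 180, 80 if slow else 40, n)
        return hold, flight

    def summarize(hold, flight):
        """Collapse a session into a fixed-length feature vector."""
        return [hold.mean(), hold.std(), flight.mean(), flight.std()]

    # 100 labeled sessions: 1 = Parkinson's-like timing, 0 = control.
    labels = rng.integers(0, 2, 100)
    X = np.array([summarize(*fake_session(bool(y))) for y in labels])

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, labels)

    # Score a new, unseen typing session the same way.
    print(model.predict([summarize(*fake_session(slow=True))]))  # likely [1]

The key design point is that each session, whatever its length, is collapsed into the same fixed-length feature vector, so new typists can be scored by the trained model exactly as the training sessions were.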
A great advantage of the Texas researchers’ diagnostic method is its sheer convenience. Patients can work with their smartphones and other digital devices as usual, and software installed on those devices will transmit their usage history over the Internet to the researchers’ computers.
Even better, the Texas researchers’ work appears to diagnose Parkinson’s disease much earlier than conventional methods. The approach still must prove itself clinically, however. “The data from these keyboard tracking techniques need further validation to objectively track progression of Parkinson’s signs,” says Timothy Ellmore, an associate professor of psychology at the City College of New York. Still, he says, “Looking ahead, the tool could be really useful in augmenting the current tools available to clinicians.”
—Joe Dysart is an Internet speaker and business consultant based in Manhattan, NY, USA.