practice
DOI: 10.1145/1897816.1897836
Article development led by queue.acm.org
Automated usability tests can be valuable companions to in-person tests.

By Julian Harty
Finding Usability Bugs with Automated Tests
Ideally, all software should be easy to use and accessible to a wide range of people. However, even software that appears modern and intuitive often falls short of the most basic usability and accessibility goals. Why does this happen? One reason is that when our designs look appealing, we sometimes skip the step of testing their usability and accessibility—all in the interest of speed, reduced costs, and competitive advantage.
Even many large-scale applications from Internet companies present fundamental hurdles for some groups of users, and smaller sites are no better. We therefore need ways to discover these usability and accessibility problems efficiently and effectively.
Usability and accessibility are two measures of software quality. This article covers several ways in which automated tests can help identify problems and limitations in Web-based applications; fixing these makes the software more usable and/or accessible. The work complements, rather than replaces, in-person usability testing. No matter how valuable in-person testing is, effective automation can increase the value of overall testing by extending its reach and range: automated tests run with minimal human intervention across a vast set of Web pages would be impractical to conduct in person. Conversely, people can spot many issues that are difficult to program a computer to detect.
Many organizations do not do any usability or accessibility testing at all; often it is seen as too expensive, too specialized, or something to address after testing all the "functionality" (which is seldom completed because of time and other resource constraints). For these organizations, good test automation can help in several ways. Automated tests can guide and inform the software development process by providing information about the software as it is being written. This helps the creators of the software fix problems quickly (because they have fast, visible feedback) and experiment with greater confidence. It can also help identify potential issues in internal releases by assessing each release quickly and consistently.
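To give a flavor of what such an unattended check might look like, here is a minimal sketch (my own illustration, not code from any particular tool) that flags <img> elements lacking an alt attribute—one of the basic WCAG checks that automated tools perform—using only Python's standard library:

```python
# A minimal sketch of one automated accessibility check: flag <img>
# elements with no alt attribute, a basic WCAG text-alternative check
# that can run unattended across many pages. Illustrative only; real
# tools apply many more rules.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the positions of <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            # getpos() reports (line, column) of the offending tag.
            self.violations.append(self.getpos())

def check_page(html):
    """Return a list of (line, column) positions of images missing alt text."""
    checker = MissingAltChecker()
    checker.feed(html)
    checker.close()
    return checker.violations

sample = '<html><body><img src="logo.png"><img src="x.png" alt="X logo"></body></html>'
print(check_page(sample))  # reports the first image only
```

Run against every page of a site in a crawl loop, a check like this produces fast, consistent feedback of exactly the kind described above—though it says nothing about whether the alt text that is present is actually meaningful, which remains a job for human reviewers.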
Some usability experts find the idea of incorporating automated tests into their work alien, uncomfortable, or even unnecessary. Some may already be using static analysis tools such as Hera and Bobby to check for compliance with WCAG (Web Content Accessibility Guidelines; http://www.w3.org/TR/WCAG20/) and Section 508 (http://www.access-board.gov/sec508/guide/1194.22.htm), but