might be widespread throughout the industry.
My main goal was to determine whether we could create automated tests that would help identify problems affecting quality-in-use for groups of users in dynamic use of the software. As mentioned earlier in this article, several
standards (for example, Section 508)
and guidelines (for example, WCAG)
aim to help address basic problems
with accessibility, and a plethora of
software tools are available to test for
Section 508 and WCAG compliance.
None, however, seemed to focus on
quality-in-use of the applications.
Furthermore, my work needed to provide a positive ROI (return on investment), as well as be practical.

Testing Keyboard Navigation
One facet of usability and accessibility testing is keyboard input and
navigation (as opposed to relying on
a mouse or a touch screen). I decided
to focus on finding ways to test keyboard navigation using automated
software tools. The work started with
a simple but effective heuristic: when
we tab through a user interface, we
should eventually return to where we
started—typically, either the address
bar in the Web browser or the input
field that had the initial focus (for example, the search box on Google’s home page).
The initial test consisted of about
50 lines of Java code. It provided a
highly visible indicator of the navigation by setting the background of
each visited element to orange; each
element was also assigned an ascending number representing the number
of tabs required to reach that point.
The screenshot in Figure 1 shows an
example of navigating through the
Google Search results. The tab order
first works through the main search
results; next, it tabs through the ads
on the right, and then the column on
the left; the final element is the Advanced Search link, which is arrived
at after approximately 130 tabs! The
code tracks the number of tabs, and if
they exceed a specified value, the test
fails; this prevents the test from running indefinitely.
This test helped highlight several key issues, such as “black holes”: Web elements that “swallow” all keystrokes.
It also helped identify Web elements
that were unreachable by tabbing
through the page. Our success was
measured by the percentage of bugs
fixed and the reduction in keystrokes
needed to navigate a user interface.
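The unreachable-element check amounts to a set difference: any focusable element never visited while tabbing through the page is unreachable by keyboard. A minimal sketch in plain Java, where the element names are illustrative and the visited set is passed in directly (in the real tests it was collected while driving a browser):

```java
import java.util.Set;
import java.util.TreeSet;

// Flags focusable elements that the tab cycle never reached.
public class UnreachableElements {

    public static Set<String> unreachable(Set<String> allFocusable,
                                          Set<String> visitedByTabbing) {
        Set<String> missed = new TreeSet<>(allFocusable); // sorted for stable output
        missed.removeAll(visitedByTabbing);
        return missed;
    }

    public static void main(String[] args) {
        Set<String> all = Set.of("searchBox", "results", "newMessage");
        Set<String> visited = Set.of("searchBox", "results");
        System.out.println(unreachable(all, visited)); // [newMessage]
    }
}
```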
Several GWT frameworks include custom elements such as buttons, and developers attach a key handler to these elements to handle specific keystrokes. We discovered a custom button that discarded every keystroke except the ones its handler was designed to handle. This meant that once a user navigated to that button, he or she was unable to leave using the keyboard (tab characters were being silently discarded).
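The essence of the bug can be modeled in a few lines of plain Java (the article shows no GWT code, so this is a hypothetical simplification): a handler that reports every key as consumed creates a keyboard trap, while a well-behaved handler consumes only the keys it was designed for and lets Tab propagate.

```java
// Illustrative model of a keystroke handler's contract: returning true
// means "I consumed this key; do not let it propagate to the browser".
public class KeyHandlerModel {

    /** The buggy handler: swallows everything, including Tab. */
    public static boolean buggyHandler(String key) {
        return true;
    }

    /** The fixed handler: consumes only Enter, so Tab still moves focus. */
    public static boolean fixedHandler(String key) {
        return key.equals("Enter");
    }

    public static void main(String[] args) {
        System.out.println(buggyHandler("Tab")); // true: Tab is swallowed
        System.out.println(fixedHandler("Tab")); // false: Tab propagates
    }
}
```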
We used a heuristic in the automated test: if a user presses the Tab key enough times, he or she should eventually return to where he or she started in the user interface. The test included a “maximum number of tabs” parameter, which we set to three times the number of Web elements on the Web page, as a balance between making sure we didn’t “run out” of tabs before reaching the end of a legitimate page and preventing the test from continuing forever if the focus never returned. If the run failed to return to the initial element, the test failed. This test was able to detect the black-hole button, and the problem was fixed by changing the code in the underlying custom GWT framework.
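The cap heuristic described above is simple enough to state directly; the element counts in the example below are made up for illustration:

```java
// The "maximum number of tabs" heuristic: allow three times as many
// Tab presses as there are Web elements on the page before giving up.
public class TabCapHeuristic {

    public static int maxTabs(int elementCount) {
        return 3 * elementCount;
    }

    /** Pass only if focus returned to the start within the cap. */
    public static boolean passed(int tabsUsed, int elementCount) {
        return tabsUsed >= 0 && tabsUsed <= maxTabs(elementCount);
    }

    public static void main(String[] args) {
        System.out.println(maxTabs(50));     // 150 keystrokes allowed
        System.out.println(passed(130, 50)); // true: returned within the cap
        System.out.println(passed(-1, 50));  // false: focus never returned
    }
}
```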
The second problem we discovered was a “new message” button that
was unreachable using the keyboard.
This was embarrassing for the development team, as they prided themselves on developing a “power-user”
interface for their novel application.
One aspect of the test was that it set
the background color of each Web
element it visited to orange. We were
able to spot the problem by watching
the tests running interactively and
seeing that the “new message” button
was never highlighted. We were able
to spot similar problems by looking at screenshots saved by the test automation code, which saved both an image of the page and the DOM (document object model) so we could visualize