put methods. For the keyboard navigation tests, though, generating native events is essential to establishing
the fidelity of the tests.
WebDriver works with the majority of popular desktop Web browsers,
such as Firefox, Internet Explorer,
and Opera, and even the Web
browsers on Android, iPhone, and
BlackBerry devices. This broad reach
means we can run our tests on the
most popular browsers, which helps
increase the usefulness of the tests.
Finding Layout Issues
Layout problems are an area that can
adversely affect a user’s perception
of an application and may indirectly
reduce its usability by distracting or
frustrating users. There are numerous classes of problems that can
cause poor layout, including quirks
in a particular Web browser, mistakes
made by the developers and designers, and poor tools and libraries. Localizing an application from English
to languages such as German, where
the text typically runs longer,
is a reliable trigger for some layout
issues. Many of these problems have
been challenging to detect automatically, and traditionally we have relied
on humans to spot and report them.
This changed in 2009 when I met
Michael Tamm, who created an innovative approach that enables several
types of layout bugs to be detected automatically and simply. For example,
one of his tests programmatically toggles the color of the text on a page to
white and then black, taking a screenshot in both cases. The difference between the two images is generated,
which helps identify the text on the
page. Various algorithms then detect
the horizontal and vertical edges on
the Web page, which typically represent elements such as text boxes and
input fields. The text mask derived from
the difference image is then superimposed on
the pattern of edges to check whether the text
meets, or even overlaps, the edges. If
so, there is a potential usability issue
worth further investigation. The tests
capture and annotate screenshots;
this allows someone to review the potential issues quickly and decide if
they are serious.
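The white/black toggle and edge-overlap check described above can be sketched in a few lines. This is a minimal illustration only, not the actual fighting-layout-bugs implementation: the screenshots are tiny grayscale 2D lists (0 = black, 255 = white), and the function names (detect_text_pixels, find_vertical_edges, text_touches_edges) are hypothetical.

```python
# Hedged sketch of the layout-bug heuristic: diff two screenshots taken
# with the text forced to white and then black, detect vertical edges on
# the white-text shot (where text is invisible against the background),
# then flag text that meets or overlaps an edge.

def detect_text_pixels(shot_white_text, shot_black_text):
    """Pixels that change when the text color is toggled belong to text."""
    h, w = len(shot_white_text), len(shot_white_text[0])
    return {(y, x)
            for y in range(h) for x in range(w)
            if shot_white_text[y][x] != shot_black_text[y][x]}

def find_vertical_edges(shot, threshold=64):
    """Columns where brightness jumps sharply, e.g. an input-field border."""
    h, w = len(shot), len(shot[0])
    return {(y, x)
            for y in range(h) for x in range(1, w)
            if abs(shot[y][x] - shot[y][x - 1]) > threshold}

def text_touches_edges(text_pixels, edge_pixels):
    """Potential layout bug: a text pixel overlaps or directly abuts an edge."""
    return any((y + dy, x + dx) in edge_pixels
               for (y, x) in text_pixels
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
```

On a page with a border at one column, text whose pixels sit next to (or on) that column would be reported for human review, mirroring the annotated-screenshot workflow described above.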
For existing tests written in WebDriver, the layout tests were enabled
by adding a couple of lines of source
code. For new automated tests, some
code needs to be written to navigate
to the Web page to be tested before
running the tests. (See http://code.
for more information, including a
video of Tamm explaining his work,
sample code, and so on.)
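The "couple of lines" hook might look like the following sketch, assuming a Selenium-style WebDriver object. The helper name run_layout_check, the analyze() callback, and the inline JavaScript color toggle are all illustrative assumptions, not the library's actual API; a real library wraps these details.

```python
# Hypothetical glue code for adding a layout check to a WebDriver test.
# Assumes a driver exposing the standard Selenium methods get(),
# execute_script(), and get_screenshot_as_png().

TOGGLE_TEXT_COLOR = "document.body.style.color = arguments[0];"

def run_layout_check(driver, url, analyze):
    """Navigate to the page under test, capture screenshots with the text
    forced to white and then black, and hand both to an analysis callback."""
    driver.get(url)  # for new tests, this navigation step must be written
    driver.execute_script(TOGGLE_TEXT_COLOR, "white")
    shot_white = driver.get_screenshot_as_png()
    driver.execute_script(TOGGLE_TEXT_COLOR, "black")
    shot_black = driver.get_screenshot_as_png()
    return analyze(shot_white, shot_black)
```

An existing test that already navigates to the right page would only need the screenshot-and-analyze lines; a new test supplies the navigation as well.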
Our work to date has been useful, and
I expect to continue implementing
test automation to support additional
heuristics related to dynamic aspects
of Web applications. WebDriver includes support for touch events and
for testing on popular mobile phone
platforms such as iPhone, Android,
and BlackBerry. WebDriver is likely to
need some additional work to support
the matrix of tests across the various
mobile platforms, particularly as they
are frequently updated.
We are also considering writing
our tests to run interactively in Web
browsers; in 2009, a colleague created a proof of concept for Google’s
Chrome browser. This work would reduce the technical knowledge needed
to run the tests. The final area of
interest is to add tests for WAI-ARIA
(Web Accessibility Initiative-Accessible Rich Internet Applications; http://)
and for the tests described at http://
We are actively encouraging sharing of knowledge and tools by making the work open source, and others
are welcome to contribute additional
tests and examples.
Automated testing can help catch
many types of problems, especially
when several techniques and approaches are used in combination.
It’s good to keep this in mind so we
know where these automated tests fit
within our overall testing approach.
With regard to the automated tests
we conducted on the Google sites, the
ROI for the amount of code written
has justified the work. Running the
tests discovered bugs that were fixed
in several frontline Google properties
and tools. Conservatively, the page
weight of many millions of Web requests has been reduced because of
problems discovered and fixed using
this test automation. Keyboard navigation has also been improved for
those who need or prefer using it.
Thank you to Google for allowing the
original work to be open sourced,
to eBay for supporting the ongoing
work, to Jonas Klink for his contributions, and to various people who
contributed to the article and offered
ideas. Please contact the author if you
are interested in contributing to the
project at firstname.lastname@example.org.
Steve Krug’s work is an excellent
complement to automated tests. He
has written two books on the topic:
Rocket Surgery Made Easy (http://www.
html) and Don’t Make Me Think, of
which three chapters on user testing are available to download for free
from http://www.sensible.com/secondedition/index.html.
Julian Harty is the tester at large at eBay, where he is
working to increase the effectiveness and efficiency of
testing within the organization. He is passionate about
finding ways to adapt technology to work for users, rather
than forcing users to adapt to (poor) technology. Much of
his material is available online. He is a frequent speaker
and writes about a range of topics related to technology,
including software testing and accessibility.
© 2011 ACM 0001-0782/11/0200 $10.00