such factors as publication counts,
measuring output, and citation counts,
measuring impact (and derived measures such as indexes, discussed next).
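The derived indexes mentioned here reduce citation data to a single number; the best known is the h-index (an author has index h if h of their papers have at least h citations each). A minimal sketch, assuming plain per-paper citation counts as input:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers: at least three of them have three or more citations,
# but fewer than four have four or more, so the h-index is 3.
print(h_index([25, 8, 5, 3, 3]))  # prints 3
```

Note that the index inherits every limitation of the underlying citation counts discussed next.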
While numeric criteria trigger
strong reactions,c alternatives have
problems too: peer review is strongly
dependent on evaluators’ choice and
availability (the most competent are
often the busiest), can be biased, and
does not scale up. The solution lies in
combining techniques, subject to human interpretation:
5. Numerical measurements such as
publication-related counts must never
be used as the sole evaluation instrument. They must be filtered through
human interpretation, particularly to
avoid errors, and complemented by peer
review and assessment of outputs other
than publications.
Measures should not address volume but impact. Publication counts
only assess activity. Giving them any
other value encourages “write-only”
journals, speakers-only conferences,
and Stakhanovist research profiles favoring quantity over quality.
6. Publication counts are not adequate indicators of research value. They
measure productivity, but neither impact nor quality.
Citation counts assess impact. They
rely on databases such as ISI, CiteSeer,
the ACM Digital Library, and Google Scholar.
They, too, have limitations:
˲ Focus. Publication quality is just
one aspect of research quality, impact
one aspect of publication quality, citations one aspect of impact.
˲ Identity. Misspellings and mangling of authors’ names lose citations.
Names with special characters are particularly at risk. If your name is Krötenfänger, do not expect your publications
to be counted correctly.
˲ Distortions. Article introductions
heavily cite surveys. The milestone
article that introduced NP-completeness has far fewer citations than surveys of the topic.
˲ Misinterpretation. Citation may
imply criticism rather than appreciation. Many program verification articles cite a famous protocol paper, to
show that their tools catch an equally
famous error in the protocol.

c D. Parnas, “Stop the Numbers Game,”
Commun. ACM 50, 11 (Nov. 2007), 19–21; available
at http://tinyurl.com/2z652a. Parnas mostly
discusses counting publications, but deals
briefly with citation counts.
˲ Time. Citation counts favor older publications.
˲ Size. Citation counts are absolute; impact is relative to each community’s size.
˲ Networking. Authors form mutual citation networks.
˲ Bias. Some authors hope (unethically) to maximize chances of acceptance by citing program committee members.
The last two examples illustrate
the occasionally perverse effects of
assessment techniques on research behavior.
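The Identity problem above is partly mechanical: citation pipelines often fold names to ASCII before matching. A minimal sketch of that failure mode (`ascii_fold` is an illustrative function, not any real database's code):

```python
import unicodedata

def ascii_fold(name):
    # Naive ASCII folding, as some citation pipelines do:
    # decompose accented characters, then drop the combining marks.
    decomposed = unicodedata.normalize("NFKD", name)
    return decomposed.encode("ascii", "ignore").decode("ascii")

# A database storing one rendering of the name and a citing paper
# using the other will fail to match, and the citation is lost.
print(ascii_fold("Krötenfänger"))  # prints Krotenfanger
```

An exact string comparison between the folded and unfolded forms fails, which is precisely how such citations disappear from the counts.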
The most serious problem is data
quality; no process can be better than its
data. Transparency is essential, as well as
error-reporting mechanisms and prompt
response (as with ACM and DBLP):
7. Any evaluation criterion, especially quantitative, must be based on clear, published rules.
This remains wishful thinking for
major databases. The methods by
which Google Scholar and ISI select
documents and citations are not published or subject to debate.
Publication patterns vary across
disciplines, reinforcing the comment
that we should not judge one by the
rules of another:
8. Numerical indicators must not
serve for comparisons across disciplines.
This rule also applies to the issue
(not otherwise addressed here) of evaluating laboratories or departments
rather than individuals.
CS Coverage in Major Databases
An issue of concern to computer scientists is the tendency to use publication databases that do not adequately cover CS.
APRIL 2009 | VOL. 52 | NO. 4 | COMMUNICATIONS OF THE ACM