But one can clearly understand the basis for each and inspect all or most of
the included data. These GOTO rankings are a far cry from the products of
most commercial rankers.
Call to Action
We call on all CS departments and colleges to boycott reputation-based and
non-transparent ranking schemes,
including but not limited to U.S. News
and World Report:
˲ Do not fill out their surveys. Deprive these non-GOTO rankings of air,
at least for computer science.
˲ Do not promote or publicize the
results of such ranking schemes in departmental outlets.
˲ Discourage university administrators from using reputation-based and non-transparent rankings in decision making.
˲ Encourage the use of GOTO Rankings such as CSrankings and CSmetrics as better alternatives.
1. Davidson, S. CRA statement on U.S. News and World Report rankings of computer science universities.
2. Dijkstra, E. Go to statement considered harmful. Commun. ACM 11, 3 (Mar. 1968), 147–148.
3. Sinha, A. et al. An overview of Microsoft Academic Service (MAS) and applications. In Proceedings of the 24th International Conference on World Wide Web.
4. Vardi, M.Y. Academic rankings considered harmful! Commun. ACM 59, 9 (Sept. 2016).
5. Wang, K. The knowledge Web meets big scholars. In Proceedings of the 24th International Conference on World Wide Web, 2015, 577–578.
Emery Berger is a Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst, Amherst, MA, USA, and a Visiting Researcher at Microsoft Research, Redmond, WA, USA.
Stephen M. Blackburn is a Professor in the Research School of Computer Science, Australian National University, Canberra, ACT, Australia.
Carla Brodley is Dean of the Khoury College of Computer Sciences at Northeastern University, Boston, MA, USA.
H.V. Jagadish is Bernard A. Galler Collegiate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, MI, USA.
Kathryn S. McKinley is a Senior Staff Research Scientist at Google, Seattle, WA, USA.
Mario A. Nascimento is Chair of the Department of Computing Science at the University of Alberta, Edmonton, AB, Canada.
Minjeong Shin is a Ph.D. candidate in the Research School of Computer Science, Australian National University, Canberra, ACT, Australia.
Kuansan Wang (Kuansan.Wang@microsoft.com) is Managing Director of MSR Outreach Academic Services at Microsoft Research, Redmond, WA, USA.
Lexing Xie is a Professor in the Research School of Computer Science, Australian National University, Canberra, ACT, Australia.
Copyright held by authors.
No one is sufficiently knowledgeable about all aspects of computer science and all departments to make even an informed guess about the broad range of work in an entire department. In fact, a “mid-rank” department is often the most difficult to assess by reputation: it may be particularly strong in some sub-areas but weaker in others, so the subjective rating of the department can vary greatly depending on the sub-area of the assessor.
To summarize, rankings matter and
will not go away, regardless of their shortcomings. Commercial rankers today do
a poor job of ranking computer science
departments. Since we understand our
community and what matters, we should
take control of the ranking process.
At the very least, we as a community
should insist on rankings derived from
objective data, whether it be based on
publications, citations, honors, funding,
or other criteria. We should ensure rankings are well-founded, based on meaningful metrics, even if we have diverging
perspectives on how best to fold the data
into a scalar score or rank. We may still arrive at very different rankings, but we will
have a defensible basis for comparisons.
Toward this end, the Computing
Research Association (CRA) has stated
that a “methodology [which] makes inferences from the wrong data without transparency” ought to be ignored.1 It has also adopted the following statement about best practices:
“CRA believes that evaluation methodologies must be data-driven and meet at least the following criteria:
˲ Good data: have been cleaned and curated
˲ Open: data is available, regarding attributes measured, at least for verification
˲ Transparent: process and methodologies are entirely transparent
˲ Objective: based on measurable attributes”
We call rankings that meet these criteria GOTO Rankings. Today, there are at least two GOTO rankings: http://csrankings.org and http://csmetrics.org (both are linked from the site http://gotorankings.org). CSrankings is faculty-centric and based on publications at top venues, providing links to faculty home pages, Google Scholar profiles, DBLP pages, and overall publication profiles. It ranks departments by aggregating the full-time tenure-track faculty at each institution. CSmetrics is institution-focused, without regard to department structure or job designations for paper authors. It includes industrial labs and takes citations into account. It derives its rankings from the Microsoft Academic Graph,3 an open and frequently updated dataset.
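To make the aggregation idea concrete, here is a minimal sketch of a publication-based department score. It is only loosely in the spirit of CSrankings' adjusted counts, not its actual algorithm: the data, institution names, and the equal-split credit rule are all illustrative assumptions.

```python
# Hypothetical sketch of a publication-based department ranking.
# Each paper's unit of credit is split equally among its co-authors,
# then summed per institution; higher totals rank higher.
from collections import defaultdict

# Toy data: (venue, [(author, institution), ...]) -- not real publications.
papers = [
    ("PLDI", [("alice", "Univ A"), ("bob", "Univ B")]),
    ("SOSP", [("alice", "Univ A")]),
    ("PLDI", [("carol", "Univ B"), ("dan", "Univ B"), ("erin", "Univ A")]),
]

def rank_departments(papers):
    scores = defaultdict(float)
    for venue, authors in papers:
        credit = 1.0 / len(authors)  # split one unit of credit per paper
        for _, dept in authors:
            scores[dept] += credit
    # Sort departments by descending aggregate score.
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_departments(papers))
```

Even this toy version shows why weighting choices matter: switching from equal splits to, say, full credit per author would reorder institutions with many co-authored papers, which is exactly the kind of defensible-but-debatable design decision the text describes.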
These are not the only two reasonable ways to rank departments.5 One
may disagree with the rankings these
sites produce, or with their choices of
weighting schemes or venue inclusion.
Figure 1. U.S. News and World Report inaccurate research area description (https://www.usnews.com/best-graduate-schools/top-science-schools/computer-programming-rankings, May 2018).
Figure 2. U.S. News and World Report implausible ranking (https://www.usnews.com/education/best-global-universities/computer-science, May 2018).