Scaling the Academic Publication Process to Internet Scale
A proposal to remedy problems in the reviewing process.
The Reviewing Process
The paper review process used by computer science conferences originated in the pre-Internet era. In this process,
authors submit papers that
are anonymously reviewed by program committee (PC) members and
their delegates. Reviews are typically
single-blind: reviewers know the identity of the authors of a paper, but not
vice versa. At the end of the review process, authors are informed of paper
acceptance or rejection and are also
given reviewer feedback and (usually)
scores. Authors of accepted papers
use the reviews to improve the paper
for the final copy, and the authors of
rejected papers use them to revise the
paper and resubmit it elsewhere, or withdraw it.
Some conferences within the broader computer science community modify this process in one of three ways.
With double-blind reviewing, reviewers
do not know (or, at least, pretend not
to know) the authors. With shepherding, a PC member ensures that authors
of accepted papers with minor flaws
make the revisions required by the PC.
And, with rollover, papers that could
not be accepted in one conference are
automatically resubmitted to another.
Surprisingly, the advent of the Internet has scarcely changed this process.
Everything proceeds as before, except
that papers and reviews are submitted online or by email, and the paper
discussion and selection process is
conducted, in whole or in part, online.
A naive observer, seeing the essential
structure of the reviewing process preserved with such fidelity, might conclude that the process has achieved perfection, and that
is why the Internet has had so little impact on it. Such an observer would be,
sadly, rather mistaken.
Problems with the Current Process
We believe the paper review process
suffers from at least five problems:
˲ A steady increase in the total number of papers: The number of experienced reviewers does not appear to
be growing at the same rate, so the average reviewer workload has increased.
˲ Skimpy reviews: Some reviewers do
a particularly poor job, giving numeric
scores with no further justification.
˲ Declining paper quality: Although
the best current papers are on par with
the best papers of the past, we have
found a perceptible decline in the quality of the average submitted paper.
˲ Favoritism: There is a distinct perception that papers authored by researchers with close ties to the PC are
preferentially accepted, with an implicit or overt tit-for-tat relationship.
˲ Overly negative reviews: Some
people enjoy finding errors in other
people’s work, but this often results
in reviews that are overly negative, disheartening novice authors.
These problems are interrelated.
The increase in the number of papers
leads, at least partly, both to a decline
in paper quality and a decline in the