July, or September of 2011. The April
and July review processes yield accepts,
rejects, and revise-and-resubmits. Submissions in September are either accepted or rejected. The Asian Conference on Machine Learning (ACML 2012)
had two deadlines and the same model.
The ACM Computer Supported Cooperative Work 2012 (CSCW 2012) conference employed a single submission
date and a five-week revision period.
This is essentially a rapid version of a
journal special issue process: a submission deadline followed by reviewing and one round of revision that is
fully examined by the original reviewers. This process is discussed below. It
has now been used by ACM Interactive
Tabletops and Surfaces 2012, SIGMOD
2013, and CSCW 2013.
Adding a revision cycle resembles a
time-compressed variant of the usual
practice, where a rejected paper can be
resubmitted in a year, but the dynamic
is different. A reviewer does not have to
make an often stressful, binary, in-or-out recommendation. Reviewers have
more incentive to provide constructive,
complete reviews, rather than primarily identify flaws that justify rejection.
Authors who will get the same reviewers have more incentive to respond to
suggestions than when they resubmit
a rejected paper to a different conference with a new set of reviewers. Reviewers have to examine some submissions twice, but most find the second
review easier and are rewarded by seeing authors take their suggestions seriously. When a rejected paper is resubmitted elsewhere, the overall reviewing
burden for the community is greater
and less rewarding.
The CSCW 2012 experiment. As
program chairs for this large annual
conference, we were asked to move
the submission deadline forward two
months to reduce the reviewing overlap with the larger CHI conference. To
turn this lemon into lemonade, we inserted the revision cycle.
Computer Supported Cooperative
Work conferences averaged 256 submissions and 56 acceptances (22%)
from 2000–2011. In recent years, most
submissions were reviewed by two
program committee members (called
Associate Chairs or ACs) and three external reviewers, followed by a face-to-face meeting of the ACs.
Table 1. Approaches that merge conference and journal elements.
- Journal acceptance precedes conference presentation.
- Shepherded conference papers become journal articles.
- Conference reviewing incorporates a revision cycle.
Table 2. Adding a revision cycle.
Pros:
- Higher quantity and attendance.
- Zero-sum game eliminated.
- More constructive review focus.
- Reviewers see results of effort.
- Few decisions passed to committee.
Cons:
- More periods of review activity.
- Perception of acceptance rate.
- More conference tracks?
- Reviewer assignment still hard.
The 2012 conference received a
record 415 submissions. In the first
round, each submission was reviewed
by an AC and two external reviewers.
Forty-five percent were rejected and
authors notified. The authors of the
remaining 55% had five weeks to upload a revision and a document describing their responses to reviewer
comments. A few withdrew; the others
revised, often extensively. The revisions were reviewed by the original reviewers. After online discussion only
26 remained unresolved. The face-to-face program committee meeting
may have been unnecessary, but it had
been scheduled. Final decisions were made there.
In the second round, 27% of the revisions were rejected. Overall, 39.5% of
the original submissions were judged
to have cleared the quality bar. The traditional process would have accepted
approximately 22% ‘as is’; this year all
papers were revised. The consensus
was that a high-quality conference had
become larger and much stronger.
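As a back-of-the-envelope check, the two rejection rates reported above roughly reproduce the overall acceptance figure. The sketch below simply chains the percentages from the text; the small gap between the computed and reported rates is consistent with the handful of withdrawals mentioned earlier.

```python
# Sanity check of the CSCW 2012 figures cited in the text:
# 415 submissions, 45% rejected in round one, 27% of revisions
# rejected in round two, 39.5% overall acceptance reported.

submissions = 415
after_round_one = round(submissions * (1 - 0.45))      # invited to revise
after_round_two = round(after_round_one * (1 - 0.27))  # accepted

overall_rate = after_round_two / submissions
print(after_round_one, after_round_two, f"{overall_rate:.1%}")
# -> 228 166 40.0%  (slightly above the reported 39.5%, since a few
#    authors withdrew rather than revise)
```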
Analysis of the CSCW 2011 data enabled us to focus reviewing where it
was needed—we did not increase the
overall reviewing burden. In 2011, 60%
of submissions found no advocate in
the early reviews and not one of those
was ultimately accepted. Yet each received a summary review by one AC,
was read by a second AC, and was given a rebuttal opportunity. For CSCW
2012, 45% had no advocate in the first
round and were summarily but politely rejected. The average workload per
submission was reduced despite each
“revise and resubmit” paper receiving
eight reviews by four reviewers over
the two rounds.
The number of reviews or serious
considerations fell from an average of
5.5 per submission the previous year
to 4.6, a reduction of 400. And 35% of
all reviews were of revisions, aided by
the authors’ documents describing
the changes. The rebuttal process was
not needed; dropping it led to no objections. Finally, the 60–80 papers that
were accepted because of the revision
process would have been prime candidates for resubmission elsewhere, so
the community was spared hundreds
of additional reviews downstream.
(Many conferences already have more
streamlined review processes than we
started with, but could still realize a
net reduction in effort by adopting a revision cycle.)
Positive outcomes. First the good
news. Overall the reports were very
positive. The revise-and-resubmit option for borderline papers in the first
round reduced reviewer stress. Reviewers could focus on finding what might
be of interest and formulating constructive guidance for revision, rather
than identifying flaws that warrant rejection. Reviewers found it rewarding
to see authors who had responded well
to comments. An interesting benefit
was that acceptance was not a zero-sum game. We did not have a quota,
just the goal of keeping the same quality bar. Without the pressure to reject
75% to 80% of submissions, a four-person review team could iterate with an
author without disadvantaging other
authors. Some review teams engaged
in protracted online discussions. With
few decisions left to make, half of the
face-to-face meeting was spent discussing broader issues and planning