in a large envelope, and send them to the program chair of the conference, taking into account that international mail would take 3–5 days to arrive. On their end, the program chair would receive all those envelopes, allocate the papers to the various members of the program committee, and send them out for review by mail in another batch of big envelopes. Reviews would be completed by hand on paper and mailed back or brought to the program committee meeting. Finally, notifications and reviews would be sent back by the program chair to the authors by mail. Submissions to journals would follow a very similar process.

It is clear that we have moved on quite substantially from this paper-based process—indeed, many of the steps we describe here would seem arcane to our younger readers. These days, papers and reviews are submitted online in a conference management system (CMS), and all communication is done via email or via message boards on the CMS, with all metadata concerning people and papers stored in a database backend. One could argue this has made the process much more efficient, to the extent that we now specify the submission deadline up to the second in a particular time zone (rather than approximately as the last post round at the program chair's institution), and can send out hundreds if not thousands of notifications at the touch of a button.

Computer scientists have been studying automated computational support for conference paper assignment since pioneering work in the 1990s.14 A range of methods have been used to reduce the human effort involved in paper allocation, typically with the aim of producing assignments similar to those of the 'gold standard' manual process.9,13,16,18,30,34,37 Yet, despite many publications on this topic over the intervening years, research results in paper assignment have made relatively few inroads into mainstream CMS tools and everyday peer review practice. Hence, what we have achieved over the last 25 years or so appears to be a streamlined process rather than a fundamentally improved one: we believe it would be difficult to argue that the decisions taken by program committees today are significantly better than those of the paper-based process. But this does not mean that opportunities for improving the process do not exist—on the contrary, there is, as we demonstrate in this article, considerable scope for employing the very techniques that researchers in machine learning and artificial intelligence have been developing over the years.

The accompanying table recalls the main steps in the peer review process and highlights current and future opportunities for improving it through advanced computational support. In discussing these topics, it will be helpful to draw a distinction between closed-world and open-world settings. In a closed-world setting there is a fixed or predetermined pool of people or resources. For example, assigning papers for review in a closed-world setting assumes a program committee or editorial board has already been assembled, and hence the main task is one of matching papers to potential reviewers. In contrast, in an open-world setting the task becomes one of finding suitable experts. Similarly, in a closed-world setting an author has already decided which conference or journal to send their paper to, whereas in an open-world setting one could imagine a recommender system that suggests possible publication venues. The distinction between closed and open worlds is gradual rather than absolute: indeed, the availability of a global database of potential publication venues or reviewers with associated metadata would render the distinction one of scale rather than substance. Nevertheless, it is probably fair to say that, in the absence of such global resources, current opportunities tend to focus on closed-world settings. Here, we review research on steps II, III, and V, starting with the latter two, which are more of a closed-world nature.

A chronological summary of the main activities in peer review, with opportunities for improving the process through computational support.

Step | Actor         | Activity                     | What can be done now            | What might be done in future
I    | Author        | Paper submission             |                                 | Recommender systems for publication venue; papers carry full previous reviewing history
II   | Program chair | Assembling program committee | Expert finding                  | PCs for an area rather than a single conference; workload balancing
III  | Program chair | Assigning papers for review  | Bidding and assignment support  | Extending PCs based on submitted papers
IV   | Reviewer      | Reviewing papers             |                                 | Advanced reviewing tools that find related work and map the paper under review relative to it
V    | Program chair | Discussion and decisions     | Reviewer score calibration      | More outcome categories; recommender systems for outcomes; more decision time points

Assigning Papers for Review

In the currently established academic process, peer review of written works depends on the appropriate assignment of each work to several expert peers for review. Identifying the most appropriate set of reviewers for a given submitted paper is a time-consuming and non-trivial task for conference chairs and journal editors—not to mention funding program managers, who rely on peer review for funding decisions. Here, we break the review assignment problem down into its matching and constraint-satisfaction constituents, and discuss possibilities for computational support.

Formally, given a set P of papers with |P| = p and a set R of reviewers with |R| = r, the goal of paper assignment is to find a binary r×p matrix A such that Aij = 1 indicates the i-th reviewer has been assigned the j-th paper, and Aij = 0 otherwise. The assignment matrix should satisfy various constraints, the most typical of which are: each paper is reviewed by at least c reviewers (typically, c = 3); each reviewer is assigned no more than m papers, where m = O(pc/r); and reviewers should not be assigned
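As a minimal sketch of the paper-assignment formulation described in this section, the following greedy heuristic fills the binary matrix A by visiting reviewer–paper pairs in decreasing order of affinity while respecting the coverage, workload, and conflict constraints. The similarity scores, the `coi` conflict set, and the function name are illustrative assumptions, not part of the article; real conference management systems typically solve this matching problem exactly, for example with integer programming or network flow.

```python
import itertools

def assign_papers(similarity, c=3, m=None, coi=frozenset()):
    """Greedy sketch of paper assignment (illustrative, not the article's method).

    similarity[i][j] -- affinity of reviewer i for paper j (assumed given)
    c   -- number of reviewers required per paper
    m   -- cap on papers per reviewer; defaults to ceil(p*c/r), i.e. m = O(pc/r)
    coi -- set of (reviewer, paper) pairs barred by conflicts of interest
    Returns a binary matrix A with A[i][j] = 1 iff reviewer i reviews paper j.
    """
    r, p = len(similarity), len(similarity[0])
    if m is None:
        m = -(-p * c // r)  # ceiling division: ceil(p*c / r)
    A = [[0] * p for _ in range(r)]
    load = [0] * r                    # papers assigned to each reviewer so far
    # Visit reviewer-paper pairs from highest to lowest affinity.
    for i, j in sorted(itertools.product(range(r), range(p)),
                       key=lambda ij: -similarity[ij[0]][ij[1]]):
        if (i, j) in coi or load[i] >= m:
            continue                  # conflicted pair, or reviewer at capacity
        if sum(A[k][j] for k in range(r)) >= c:
            continue                  # paper j already has its c reviewers
        A[i][j] = 1
        load[i] += 1
    return A

# Toy instance: 3 reviewers, 2 papers, 2 reviews per paper.
sim = [[0.9, 0.1],
       [0.2, 0.8],
       [0.5, 0.6]]
assignment = assign_papers(sim, c=2)
```

On the toy instance, every paper receives c = 2 reviews and no reviewer exceeds the default load cap of two papers. Note that a single greedy pass can fail to reach full coverage on tightly constrained instances, which is one reason exact solvers are preferred in practice.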