they need to rely on the judgment of
experts to determine scientific truth
and how to interpret scientific results.
We want policymakers in the administration and Congress to base policy
decisions on facts, on evidence, and on
data. So it is important for policymakers that, to the best of our ability, we, as
scientists, publish results that are correct. That’s why peer review matters.
While I argue that peer review matters,
what the best process is for carrying it
out is another question entirely. In this day and age of collective
intelligence through social networks,
we should think creatively about how
to harness our own technology to supplement or supplant the traditional
means used by journals, conferences,
and funding agencies. Peer review matters, and now is the time to revisit our
processes—not just procedures and
mechanisms, but what it is we review
(papers, data, software, and tools), our
evaluation criteria, and our incentives
for active participation.
It is important for us, as scientists, not to
lose the public trust in science. That’s why
peer review matters.
I think we must continue to educate
our students and the public about truth.
Even if a research paper is published in the
most respectable venue possible, it could
still be wrong. Conventional peer review is
essentially an insider game: It does nothing
against systematic biases.
In physics, almost everyone posts
their papers on arXiv. It is not peer review
in the conventional sense. Yet, our trust
in physics has not gone down. In fact,
Perelman proved the Poincaré conjecture
and posted his solution on arXiv, bypassing
conventional peer review entirely. Yet, his
work was peer reviewed, and very carefully.
We must urgently acknowledge that our
traditional peer review is an honor-based
system. When people try to game the
system, they may get away with it. Thus, it is
not the gold standard we make it out to be.
Moreover, conventional peer review places
a high value on getting papers published.
It is the very source of the paper-counting
routine we go through. If it were as easy to
publish a research paper as it is to publish
a blog post, nobody would be counting
research papers. Thus, we must realize that
conventional peer review also has some downsides.
Yes, we need to filter research papers.
But the Web, open source software, and
Wikipedia have shown us that filtering after
publication, rather than before, can work
too. And filtering is not so hard.
Filtering after publication is clearly the
future. It is more demanding from an IT point
of view. It could not work in a paper-based
culture. But there is no reason why it can’t
work in the near future. And the Perelman
example shows that it already works.
Ed H. Chi
"How Should Peer Review Evolve?"
Peer-reviewed publications
have been part of scientific scholarship since 1665, when the
Royal Society's founding editor Henry
Oldenburg created the first scientific
journal. As Jeannette Wing nicely argued in her “Why Peer Review Matters”
post, it is the public, formal, and final
archival nature of the process of the
Oldenburg model that established the
importance of publications to scientific authors, as well as their academic
standings and careers.
Recently, as the communication
of research results reaches breakneck
speeds, some have argued that it is time
to fundamentally examine the peer review model, and perhaps to modify it
somewhat to suit modern times.
One such proposal recently posed to me
via email is open peer review, a model
in many ways similar to the Wikipedia
editing model. Astute readers
will note the irony, given how the Wikipedia editing model makes academics
squirm in their seats.
The proposal for open peer review
suggests that the incumbent peer review process has problems in bias,
suppression, and control by elites
against competing non-mainstream
theories, models, and methodologies.
By opening up the peer review system,
we might increase accountability and
transparency of the process, and mitigate other flaws. Unfortunately, while
we have anecdotal evidence of these
issues, there remain significant problems in quantifying these flaws with
hard numbers and data, since reviews
often remain confidential.
Perhaps more distressing is that several experiments in open peer review
(such as those done by Nature in 2006, the British
Medical Journal in 1999, and the Journal of
Interactive Media in Education in 1996)
have had mixed results in terms of the
quality and tone of the reviews. Interestingly, and perhaps unsurprisingly,
many of those who are invited to review
under the new model decline to do so,
potentially reducing the pool of reviewers. This is particularly worrisome for
academic conferences and journals, at
a time when we desperately need more
reviewers due to the growth in the number of submissions.
Jeannette M. Wing is a professor at Carnegie Mellon
University. Ed H. Chi is a research scientist at Google.
© 2011 ACM 0001-0782/11/07 $10.00