committee members are in high
demand. In addition, some of them
are not always familiar with modern software tools and systems. We
therefore think it best that AECs be populated by senior Ph.D. students and postdoctoral researchers. This choice
has several benefits. First, they are
familiar with the technology needed
to build and run artifacts. Second,
in our experience, they respond with
alacrity and write detailed reviews
in a timely manner. Finally, and
more subtly, we feel that getting junior researchers involved in the process sends a message of its importance
to those who will be future research
leaders. One caution is that junior researchers can sometimes be overly eager at fault-finding, and their reviews
may need moderation. This is why the
AEC is chaired by senior researchers.
What are the benefits of artifact evaluation? The first benefit of the process is that it sends a message that artifacts are valued and are an important part of the contribution of papers published in programming language conferences. Papers found to be at or above threshold get a little extra recognition, both in the proceedings and at the conference. They are marked with a special logo and distinguished in the conference proceedings, and a handful of papers are selected for Distinguished Artifact Awards. Another benefit comes from the reviews themselves: several authors have confirmed that the evaluators provided valuable feedback, and even small bug fixes, on the artifacts and on their packaging. At ECOOP 2013, for instance, some authors even claimed the artifact reviews were more useful than the reviews of the paper. For the scientific community at large, artifact evaluation encourages authors to produce reusable artifacts, which are the cornerstone of future research.
Should artifacts be published?
While there are many good reasons for making artifacts available, there are also arguments against making them public:
˲ The artifact may have been produced in a company and may therefore
be regarded as proprietary.
˲ The data used in the paper’s experiments may be proprietary or have high
privacy needs.
˲ The artifact may depend on expensive or proprietary platforms that are
difficult or impossible for anyone but
the authors to access.
˲ By making the tools public, it becomes easy for others to continue that
line of research, which reduces the payoff for the original researchers.
Reasonable people have come to opposite conclusions on each of these issues. In some cases, a different incentive structure might help. At any rate, it
is clear that in some situations repeatability may be off limits; but these cases seem rare enough that they should
not dominate the discussion.
In the long term, we would like to see evaluated artifacts made public by mandate, as SAS 2013 did. Even as it remains optional, for authors
who do wish to publish them, there remains the problem of how and where. ACM's Digital Library would be a natural host, and recent changes have made it possible for authors to deposit artifacts there without surrendering their copyright. Yet the interface to the Digital Library is less than optimal, and there are also problems with the current terms. We would prefer to use technologies that better support accessing artifacts. Furthermore, the Digital Library hosts only static artifacts; it would be worthwhile for it to consider combining forces with resources such as runmycode.org and researchcompendia.org.

We have come a long way. In our efforts to become more “scientific,” we have moved away from papers that simply report on software projects to demanding that papers distill the novel contributions of these projects. In the process, however, we may have shifted too far, even as natural science itself has taken the lead in demanding repeatability, data sets, and public access to software, demands we recognize as necessary and hence should have spearheaded. We should let the pendulum swing back to a happy medium between scientific contributions and software contributions, recognizing that, ultimately, software is indeed the single most distinctive contribution our discipline has to make. So we should embrace it rather than act as if we are ashamed of it. While we report on one particular experiment in the area of programming language research, many other areas in computer science are looking at some of the same issues. References to other initiatives are included in the sidebar; also see http://www.artifact-eval.org.
The ECML/PKDD 2013 conference
started an open science award process
similar to the artifact evaluation
process described here.e The SIGMOD
conference evaluated repeatability from
2008 to 2011.f, g The ICERM workshop
on reproducibility in computational
and experimental mathematics
produced a report that argues for
a culture shift.h Journals such as
Biostatistics are recognizing papers
that are accompanied by artifacts.i
e http://www.ecmlpkdd2013.org/open-science-award/.
f Manegold, S. et al. Repeatability and Workability Evaluation of SIGMOD 2009. SIGMOD Record, September 2009.
g http://www.sigmod2011.org/calls_papers_sigmod_research_repeatability.shtml.
h http://icerm.brown.edu/html/programs/topical/tw12_5_rcem/icerm_report.pdf.
i http://www.oxfordjournals.org/our_journals/biosts/for_authors/msprep_submission.html.

Shriram Krishnamurthi (sk@cs.brown.edu) is a professor of computer science at Brown University in Providence, RI.

Jan Vitek (j.vitek@neu.edu) is a professor of computer science at Northeastern University in Boston, MA.

The authors thank Andreas Zeller for taking the personal risk involved in initiating this Viewpoint. We thank Matthias Hauswirth for his enthusiasm and artwork, and Matthias, Steve Blackburn, and Camil Demetrescu for several good conversations. We thank our AEC co-chairs Eric Eide, Erik Ernst, and Carlo Ghezzi for their hard work. Most of all, we thank the AEC members for their diligent efforts, often above and beyond the call of duty, and the authors for giving the AEC members something to evaluate.

Copyright held by authors.