Viewpoint
constantly changing databases we all access (think Google) represent a new mode of computing and deserve to be explored more systematically. Various ideas are being developed on how to do this without becoming entangled in critical production processes.
They present great opportunities for advancing computer science and the technologies it makes possible. They also potentially involve extensive research activity, as well as significant investment in the infrastructure needed to enable that research. NSF spends 25%–30% of its annual budget on instruments to advance science in other fields, but computer science has only recently envisioned such large projects.4

4 EarthScope (www.earthscope.org/) is an excellent example of how science and technology advance in other fields.
These observations led the CISE Directorate to
issue a call for proposals to create a “community proxy
responsible for facilitating the conceptualization and
design of promising infrastructure-intensive projects
identified by the computing research community to
address compelling scientific ‘grand challenges’ in
computing.” In September 2006, NSF chose the
Computing Research Association to create the Computing Community Consortium (www.cra.org/ccc/);
now in operation, it looks to engage as many people
and institutions in the research, education, and industrial communities as possible to fulfill its charter. At
the heart of the effort is the understanding that, in many cases, major experimentation can and should be done before more expensive development and deployment are undertaken, something industry alone can’t afford to do.
All three efforts described here involve research characterized by observation, measurement, and analysis of results. While the same can be said of many industrial prototyping efforts and should also be true of small-scale academic research (such as thesis work), such studies are either impossible to conduct under large-scale, real-world conditions (in the case of academic research) or aren’t done at all because of the pressure to produce near-term, profitable results (in the case of industry). It’s rare for experimentation to advance the boundaries of what we know how to do in computer science on a scale
that is large relative to the state of the art.
The “relativity” factor has all but eliminated the
kind of experimentation we did in the 1950s and
1960s. For example, in the mid-1960s, I was able and encouraged to build a small (four-user) time-sharing system on a minicomputer as a master’s thesis, one that others could use in a production environment to see how well it worked and how it might change operations [1]. Even though it was tiny by today’s standards, it was large relative to what existed then. I was able to do it because there were no such commercial systems at the time, and users were hungry for any improvement, even one that crashed some of the time. Today, given what is required technically and what users expect, it would be impossible to mount a similar operating systems project.
This brings me back to the title of this column.
The projects I’ve described here and the efforts to
develop others portend the return of experimentation,
somewhat in the style of the early days of computer
science but with some important differences. First,
while we should and indeed will see much more serious experimentation in the future, it will certainly be
more costly than its counterparts years ago. Second, in
some projects—perhaps most, given the practical
nature of computing—experimenters must find ways
to involve significant numbers of users in the “experiment”; this is a key feature of the GENI project.
Third, and most important, they must employ much
more careful observation, measurement, and analysis
than was necessary or possible 50 years ago. So, I hope history really is repeating itself, but this time improving what we do, how we do it, and the results, all at the same time.
REFERENCE
1. Freeman, P. Design Considerations for Time-Sharing Systems on Small Computers. Master’s Thesis, University of Texas at Austin, 1965.
PETER A. FREEMAN (freeman@cc.gatech.edu) is Emeritus Dean and
Professor at Georgia Tech, Atlanta, GA. As Assistant Director of the
National Science Foundation Directorate for Computer & Information
Science & Engineering, 2002–2007, he was involved in starting the
GENI project and the Computing Community Consortium.