The Ultimate Cloud
By David P. Anderson
Computers continue to get faster exponentially, but the computational demands of science are growing even faster. Extreme requirements arise in at least three areas.
1) Physical simulation: Scientists use computers to simulate physical reality at many levels of scale: molecule, organism, ecosystem, planet, galaxy, universe. The models are typically chaotic, and studying the distribution of outcomes requires many simulation runs with perturbed initial conditions (see the sketch after this list).
2) Compute-intensive analysis of large data: Modern instruments (
optical and radio telescopes, gene sequencers, gravitational wave detectors, particle colliders) produce huge amounts of data, which in
many cases requires compute-intensive analysis.
3) Biology-inspired algorithms: Techniques such as genetic algorithms and flocking algorithms, used for function optimization.
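The first area, in particular, follows a simple computational pattern: run the same model many times from slightly different starting points and study the spread of outcomes. Below is a minimal Python sketch of that pattern; the chaotic model (the Lorenz equations), the step size, the perturbation scale, and the run count are illustrative assumptions, not values from any particular project.

    # Toy ensemble study of a chaotic system (the Lorenz equations).
    # Each run perturbs the initial condition slightly; the spread of
    # final states is what a real project would study at far larger scale.
    import random
    import statistics

    def lorenz_run(x, y, z, steps=10000, dt=0.002,
                   sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Integrate the Lorenz system with simple Euler steps."""
        for _ in range(steps):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        return x

    def ensemble(n_runs=100, perturbation=1e-6):
        """Run many simulations from slightly perturbed initial conditions."""
        finals = []
        for _ in range(n_runs):
            x0 = 1.0 + random.uniform(-perturbation, perturbation)
            finals.append(lorenz_run(x0, 1.0, 1.0))
        return finals

    if __name__ == "__main__":
        results = ensemble()
        print("mean of final x: ", statistics.mean(results))
        print("stdev of final x:", statistics.stdev(results))

Each run is independent of the others, which is exactly why this kind of workload maps naturally onto a large pool of loosely connected PCs.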
These areas engender computational tasks that would take hundreds or thousands of years to complete on a single PC. Reducing this
to a feasible interval—days or weeks—requires high-performance computing (HPC). One approach is to build an extremely fast computer—
a supercomputer. However, in the areas listed above, the rate of job
completion, rather than the turnaround time of individual jobs, is the
important performance metric. This subset of HPC is called high-throughput computing.
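To make the throughput metric concrete, here is a back-of-the-envelope calculation in Python. Every number in it (batch size, job length, pool size, host availability) is an assumption chosen for illustration, not a figure from this article.

    # Illustrative throughput arithmetic with assumed numbers:
    # how long does a batch of independent jobs take on one PC
    # versus on a large pool of part-time hosts?
    jobs = 100_000               # assumed number of independent jobs
    hours_per_job = 10           # assumed CPU-hours per job
    total_cpu_hours = jobs * hours_per_job

    single_pc_years = total_cpu_hours / (24 * 365)

    hosts = 10_000               # assumed pool size
    availability = 0.5           # assumed fraction of time each host computes
    pool_days = total_cpu_hours / (hosts * availability * 24)

    print(f"single PC: ~{single_pc_years:.0f} years")
    print(f"{hosts} hosts at {availability:.0%} availability: ~{pool_days:.1f} days")

With these assumed numbers the batch drops from roughly a century on one PC to about a week on the pool, even though no individual job finishes any faster; that is the point of optimizing for throughput rather than turnaround.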
To achieve high throughput, the use of distributed computing,
in which jobs are run on networked computers, is often more cost-effective than supercomputing. There are many approaches to distributed computing:
• cluster computing, which uses dedicated computers in a single location.
• desktop grid computing, in which desktop PCs within an organization (such as a department or university) are used as a computing resource. Jobs are run at low priority, or while the PCs are not otherwise in use (see the sketch after this list).
• grid computing, in which separate organizations agree to share
their computing resources (supercomputers, clusters, and/or desktop grids).
• cloud computing, in which a company sells access to computers on
a pay-as-you-go basis.
• volunteer computing, which is similar to desktop grid computing except that the computing resources are volunteered by the public.
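The "low priority" mechanism mentioned in the desktop grid item above usually amounts to lowering the operating-system priority of the job process so that it yields to interactive use. Here is a minimal Python sketch of that single mechanism; the command being launched is a placeholder, and real desktop grid and volunteer clients also handle idle detection, checkpointing, and sandboxing.

    # Minimal sketch: launch a job process at low OS priority.
    import os
    import subprocess
    import sys

    def run_low_priority(cmd):
        if sys.platform == "win32":
            # Windows: run the child in a below-normal priority class.
            return subprocess.run(
                cmd, creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS)
        # Unix-like systems: raise the niceness of the child process.
        return subprocess.run(cmd, preexec_fn=lambda: os.nice(19))

    if __name__ == "__main__":
        # Placeholder job: a trivial Python one-liner standing in for a
        # science application.
        run_low_priority([sys.executable, "-c", "print('low-priority job')"])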
Each of these paradigms has an associated resource pool: the computers in a machine room, the computers owned by a university, the
computers owned by a cloud provider. In the case of volunteer computing, the resource pool is the set of all privately-owned PCs in the world. This pool is interesting for several reasons. For starters, it dwarfs the other pools. The number of privately-owned PCs is currently 1 billion and is projected to grow to 2 billion by 2015. Second, the pool is self-financing, self-updating and self-maintaining. People buy new PCs, upgrade system software, maintain their computers, and pay their electric bills. Third, consumer PCs, not special-purpose computers, are state of the art. Consumer markets drive research and development. For example, the fastest processors today are GPUs developed for computer games. Traditional HPC is scrambling to use GPUs, but there are already 100 million GPUs in the public pool, and tens of thousands are already being used for volunteer computing.
History of Volunteer Computing
In the mid-1990s, as consumer PCs became powerful and millions of
them were connected to the Internet, the idea of using them for distributed computing arose. The first two projects, GIMPS and distributed.net, were launched in 1996 and 1997. GIMPS finds prime numbers of a particular type, and distributed.net breaks cryptosystems via brute-force search of the key space. Both projects attracted tens of thousands of volunteers and demonstrated the feasibility of volunteer computing.
In 1999 two new projects were launched, SETI@home and
Folding@home. SETI@home, from the University of California, Berkeley,
analyzes data from the Arecibo radio telescope, looking for synthetic
signals from space. Folding@home, from Stanford, studies how proteins are formed from gene sequences. These projects received significant media coverage and moved volunteer computing into the
awareness of the global public.
These projects all developed their own middleware, the application-independent machinery for distributing jobs to volunteer computers and for running jobs unobtrusively on these computers, as well
as web interfaces by which volunteers could register, communicate
with other volunteers, and track their progress. Few scientists had the
resources or skills to develop such software, and so for several years
there were no new projects.
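To give a feel for what such middleware does on the volunteer's machine, here is a conceptual Python sketch of a client-side work loop: fetch a job from a project server, run it, and report the result. This is not BOINC's actual protocol or API; the server URL, endpoints, and job format are hypothetical placeholders, and a real client also handles scheduling, checkpointing, result validation, and unobtrusive execution.

    # Conceptual sketch of a volunteer-computing client loop.
    # The server, endpoints, and job format are hypothetical.
    import json
    import time
    import urllib.request

    SERVER = "https://example.org/project"   # hypothetical project server

    def fetch_job():
        """Ask the server for a job description such as {"id": 42, "input": [...]}."""
        with urllib.request.urlopen(f"{SERVER}/get_work") as resp:
            return json.load(resp)

    def run_job(job):
        """Stand-in for launching the project's science application."""
        return {"id": job["id"], "output": sum(job["input"])}

    def report_result(result):
        """Upload the result to the server as JSON."""
        data = json.dumps(result).encode()
        req = urllib.request.Request(
            f"{SERVER}/report", data=data,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    def main():
        while True:
            report_result(run_job(fetch_job()))
            time.sleep(60)                   # then ask for more work

    if __name__ == "__main__":
        main()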
In 2002, with funding from the National Science Foundation, the
BOINC project was established to develop general-purpose middleware for volunteer computing, making it easier and cheaper for scientists to use.
The first BOINC-based projects launched in 2004, and today there are
about 60 such projects, in a wide range of scientific areas. Some of the
larger projects include Milkyway@home (from Rensselaer Polytechnic
Institute; studies galactic structure), Einstein@home (from University
of Wisconsin and Max Planck Institute; searches for gravitational