the rate of client requests remains bounded even when a server comes
up after a long outage.
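One common way to achieve such a bound is randomized exponential backoff between retries. The sketch below is illustrative only; the `base` and `cap` parameters are assumptions for the example, not BOINC's actual values.

```python
import random

def backoff_delay(failures, base=60.0, cap=86400.0):
    """Delay (in seconds) before the next request after `failures`
    consecutive failed attempts. The delay doubles with each failure,
    up to `cap`, so a recovering server sees a bounded request rate."""
    delay = min(cap, base * (2 ** failures))
    # Full jitter: pick uniformly in [0, delay] so that clients that
    # failed at the same moment do not all retry at the same moment.
    return random.uniform(0.0, delay)
```

Because each client's delay is capped and randomized, the aggregate request rate stays bounded no matter how long the outage lasted.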
Security. Volunteer computing poses a variety of security challenges. What if hackers break into a project server and use it to distribute malware to the attached computers? BOINC prevents this by
requiring that executables be digitally signed using a secure, offline
signing computer. What if hackers create a fraudulent project that
poses as academic research while in fact stealing volunteers’ private
data? This is partly addressed by account-based sandboxing: applications are run under an unprivileged user account and typically have no
access to files other than their own inputs and outputs. In the future,
stronger sandboxing may be possible using virtual machine technology.
Future of Volunteer Computing
Volunteer computing has demonstrated its potential for high-throughput scientific computing. However, only a small fraction of
this potential has been realized. Moving forward will require progress
in three areas.
1. Increased participation: The volunteer population has remained
around 500,000 for several years. Can it be grown by an order of magnitude or two? A dramatic scientific breakthrough, such as the discovery of a cancer treatment or a new astronomical phenomenon,
would certainly boost its popularity. Effective use of social networks like Facebook could also spur more people to volunteer. Another
way to increase participation might be to have computer manufacturers or software vendors bundle BOINC with other products.
Currently, Folding@Home is bundled with the Sony PlayStation 3 and
with ATI GPU drivers.
2. Increased scientific adoption: The set of volunteer projects is
small and fairly stagnant. It would help if more universities and institutions created umbrella projects, or if there were more support for
higher-level computing models, such as workflow management systems and MapReduce. Two other factors that would increase scientific
adoption are the promotion of volunteer computing by scientific funding agencies and increased acceptance of volunteer computing by
the HPC and computer science communities.
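To illustrate what such a higher-level model offers scientists, here is a toy, single-process sketch of the MapReduce pattern; real systems distribute the map and reduce phases across many machines, and the word-count example below is the standard illustration, not a volunteer-computing workload.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Toy single-process MapReduce: apply `mapper` to each record,
    group the intermediate (key, value) pairs by key, then apply
    `reducer` to each group."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count expressed in the model.
lines = ["volunteer computing", "cloud computing"]
counts = map_reduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: sum(ones),
)
# counts == {"volunteer": 1, "computing": 2, "cloud": 1}
```

The scientist writes only the `mapper` and `reducer`; the platform handles distribution, grouping, and fault tolerance.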
3. Tracking technology: Today, the bulk of the world’s computing
power is in desktop and laptop PCs, but in a decade or two it may shift
to energy-efficient mobile devices. Such devices, while docked, could
be used for volunteer computing.
If these challenges are addressed, and volunteer computing experiences explosive growth, there will be thousands of projects. At that point volunteers will no longer be able to evaluate every project, and new allocation mechanisms will be needed. Candidates include the
“mutual fund” idea mentioned above, or something analogous to decision markets, in which individuals are rewarded for participating in
new projects that later produce significant results; such “expert
investors” would steer the market as a whole.
David P. Anderson is a research scientist at the Space Sciences Laboratory at the University of California-Berkeley.
CLOUDS AT THE CROSSROADS
By Ymir Vigfusson and Gregory Chockler
Despite the promise of cloud computing, innovation in the field has been driven almost exclusively by a few industry leaders, such as Google, Amazon, Yahoo!, Microsoft, and IBM. The involvement of the wider research community, in both academia and industrial labs, has so far been patchy
and lacking a clear agenda. In our opinion, the limited participation stems from the prevalent view that
clouds are mostly an engineering and business-oriented phenomenon based on stitching together
existing technologies and tools.
Here, we take a different stance and claim that clouds are now
mature enough to become first-class research subjects, posing a range
of unique and exciting challenges deserving collective attention from
the research community. For example, the realization of privacy in
clouds is a cross-cutting interdisciplinary challenge, permeating the
entire stack of any imaginable cloud architecture.
The goal of this article is to present some of the research directions
that are fundamental for cloud computing. We pose various challenges that span multiple domains and disciplines. We hope these
questions will provoke interest from a larger group of researchers and
academics who wish to help shape the course of the new technology.
An Architectural View
The physical resources of a typical cloud are simply a collection of
machines, storage, and networking resources collectively representing
the physical infrastructure of the data center(s) hosting the cloud computing system. Large clouds may contain some hundreds of thousands of machines.
The distributed computing infrastructure offers a collection of core
services that simplify the development of robust and scalable services
on top of a widely distributed, failure-prone, physical platform. The
services supported by this layer typically include communication (for
example, multicast and publish-subscribe), failure detection, resource