System engineers, as a result, turn to statistical multiplexing to maximize the utilization of today’s CPUs. Informally, statistical multiplexing allows a single resource to be shared by splitting it into variable-sized chunks and allocating each chunk to a consumer. In the meantime, virtualization has become the standard mechanism for realizing this sharing: several virtual machines, each appearing to its owner as a dedicated computer, can be multiplexed onto one physical host.
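To make the idea concrete, here is a minimal sketch of a multiplexer, ours rather than the article’s; the 8-core figure and all names are illustrative:

    # A toy statistical multiplexer: a hypothetical 8-core host is carved
    # into variable-sized grants, each handed to a consumer on demand.
    TOTAL_CORES = 8.0

    allocations = {}        # consumer -> cores currently held
    free = TOTAL_CORES

    def allocate(consumer, demand):
        """Grant up to `demand` cores, shrinking the grant if the host is tight."""
        global free
        grant = min(demand, free)
        if grant == 0:
            return False    # host exhausted: the provider must refuse
        allocations[consumer] = allocations.get(consumer, 0) + grant
        free -= grant
        return True

    def release(consumer):
        """Return a consumer's cores to the shared pool."""
        global free
        free += allocations.pop(consumer, 0)

    # Because demands rarely peak at the same time, the host can serve more
    # aggregate demand than a static, equal partition would allow.
    for vm, demand in [("vm-a", 3.0), ("vm-b", 2.5), ("vm-c", 4.0)]:
        print(vm, "granted" if allocate(vm, demand) else "refused")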
But even with virtualization, the question persists: What if the
physical resources run out? If that ever occurred, the provider would
simply have to refuse service, which is not what users want to hear.
Currently, for most users, EC2 only allows 20 machine instances to be allocated at any one time. Another option might be to preempt currently running processes. Although both are unpopular choices, they certainly leave room for the provider to offer flexible pricing options. For instance, a provider can charge a normal price to low-grade users, who might be fine with having their service interrupted on rare occasions. High-grade users, on the other hand, can pay a premium for the privilege of preempting other services and for protection against being preempted themselves.
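As a hedged illustration of such a two-tier scheme (a sketch of ours, not a description of any provider’s actual policy; the 20-instance cap simply mirrors the EC2 limit mentioned above), a provider’s admission logic might look like this:

    # Tiered admission with preemption: high-grade requests may evict a
    # low-grade instance when capacity runs out, and are themselves
    # shielded from eviction. All names here are hypothetical.
    from dataclasses import dataclass

    CAPACITY = 20

    @dataclass
    class Instance:
        owner: str
        high_grade: bool    # paid a premium: may preempt, cannot be preempted

    running: list[Instance] = []

    def request(owner: str, high_grade: bool) -> str:
        if len(running) < CAPACITY:
            running.append(Instance(owner, high_grade))
            return "started"
        if high_grade:
            # Look for a low-grade victim; high-grade instances are protected.
            for i, inst in enumerate(running):
                if not inst.high_grade:
                    victim = running.pop(i)
                    running.append(Instance(owner, high_grade))
                    return f"started (preempted {victim.owner})"
        return "refused"    # at capacity and nothing preemptable

    # Usage: fill capacity with low-grade work, then submit new requests.
    for n in range(CAPACITY):
        request(f"low-{n}", high_grade=False)
    print(request("premium-user", high_grade=True))   # started (preempted low-0)
    print(request("late-user", high_grade=False))     # refused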
With the realization of cloud computing, many stakeholders are afforded on-demand access to as much computing power as their needs dictate. This elastic paradigm brings exciting new developments to the computing community. Certainly, scaling applications up to handle peak loads is a long-studied problem. Downscaling, however, has received far less attention in the past; the cloud creates a novel incentive for applications to contract, which opens a new dimension for cost-optimization problems. As clouds gain traction in industry and academia, they reveal new opportunities and may transform computing as we know it.
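As a hedged sketch of this two-way elasticity (the thresholds, fleet-size bounds, and synthetic load trace below are all assumptions of ours), a minimal control loop might grow a fleet under load and shrink it again to save cost:

    # A toy autoscaler: grow the fleet under heavy load, and, just as
    # importantly, shrink it when load falls, since idle instances cost money.
    def autoscale(current_size: int, load: float,
                  scale_up_at: float = 0.8, scale_down_at: float = 0.3,
                  min_size: int = 1, max_size: int = 20) -> int:
        """Return the new fleet size given average per-instance load in [0, 1]."""
        if load > scale_up_at and current_size < max_size:
            return current_size + 1      # upscale to absorb peak load
        if load < scale_down_at and current_size > min_size:
            return current_size - 1      # downscale to cut cost
        return current_size              # within the comfort band: hold

    size = 4
    for load in [0.9, 0.95, 0.6, 0.2, 0.1]:   # synthetic load trace
        size = autoscale(size, load)
        print(f"load={load:.2f} -> fleet size {size}")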
❝What elasticity means to cloud users is
that they should design their applications
to scale their resource requirements up
and down whenever possible.❞
David Chiu is a student at The Ohio State University and an editor for Crossroads.