DOI: 10.1145/1646353.1646370
Power-Efficient Software
By Eric Saxe

Power-manageable hardware can help save energy, but what can software developers do to address the problem?
The rate at which power-management features have evolved is nothing short of amazing. Today almost every size and class of computer system, from the smallest sensors and handheld devices to the “big iron” servers in data centers, offers a myriad of features for reducing, metering, and capping power consumption. Without these features, fan noise would dominate the office ambience and untethered laptops would remain usable for only a few short hours (and then only if one could handle the heat), while data-center power and cooling costs and capacity would become unmanageable.
As much as we might think of power-management features as being synonymous with hardware, software’s role in the efficiency of the overall system has become undeniable. Although the notion of “software power efficiency” may seem justifiably strange (software doesn’t directly consume power, after all), the salient point is really the way in which software interacts with power-consuming system resources.
Let’s begin by classifying software
into two familiar ecosystem roles:
resource managers (producers) and
resource requesters (consumers). We
will then examine how each can contribute to (or undermine) overall system efficiency.
The history of power management is
rooted in the small systems and mobile
space. By today’s standards, these systems were relatively simple, possessing
a small number of components, such
as a single-core CPU and perhaps a disk
that could be spun down. Because these
systems had few resources, utilization
in practice was fairly binary in nature,
with the system’s resources either being in use—or not. As such, the strategy
for power managing resources could
also be fairly simple, yet effective.
For example, a daemon might periodically monitor system utilization and, once the system had appeared sufficiently idle for some time threshold, clock down the CPU’s frequency and spin down the disk. This could all be done with little or no integration with the subsystems otherwise responsible for resource management (the scheduler and file system, for example), because at zero utilization, not much resource management needed to be done.
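To make the idea concrete, here is a minimal sketch of such an idle-watching daemon in C. The helper names (read_utilization, set_cpu_slow, spin_down_disk), the thresholds, and the polling interval are illustrative assumptions rather than any real platform interface; on an actual system they would map to something platform specific, such as a CPU frequency governor or a disk standby timer.

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define POLL_INTERVAL_SEC 5     /* how often utilization is sampled          */
#define IDLE_THRESHOLD    0.05  /* below 5% busy counts as "idle"            */
#define IDLE_PERIODS      12    /* ~1 minute of sustained idleness to act on */

/* Hypothetical stubs: a real daemon would read kernel statistics and
 * invoke platform-specific power controls here. */
static double read_utilization(void) { return 0.0; /* 0.0 idle .. 1.0 busy */ }
static void   set_cpu_slow(bool slow) { printf("cpu clocked %s\n", slow ? "down" : "up"); }
static void   spin_down_disk(void)    { printf("disk spun down\n"); }

int main(void)
{
    int  idle_count = 0;
    bool powered_down = false;

    for (;;) {
        if (read_utilization() < IDLE_THRESHOLD) {
            /* System looks idle; act only once idleness has been sustained. */
            if (++idle_count >= IDLE_PERIODS && !powered_down) {
                set_cpu_slow(true);
                spin_down_disk();
                powered_down = true;
            }
        } else {
            /* Activity resumed: reset the idle timer and restore full speed. */
            idle_count = 0;
            if (powered_down) {
                set_cpu_slow(false);
                powered_down = false;
            }
        }
        sleep(POLL_INTERVAL_SEC);
    }
}

Note that nothing in this sketch coordinates with the scheduler or the file system; when the whole system is at zero utilization, that lack of integration costs little.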
By comparison, the topology of modern systems is far more complex. As
the “free performance lunch” of ever-increasing CPU clock speeds has come
to an end, the multicore revolution is
upon us, and as a consequence, even
the smallest portable devices present
multiple logical CPUs that need to be
managed. As these systems scale larger (presenting more power-manageable resources), partial utilization becomes more common, with only part of the system busy while the rest sits idle.
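As a small illustration of what partial utilization looks like to software, the following sketch walks a set of per-CPU utilization samples (the eight-CPU count and the sample values are made up for the example) and flags individual CPUs as candidates for power management even though the system as a whole is far from idle.

#include <stdio.h>

#define NCPUS          8
#define IDLE_THRESHOLD 0.05

int main(void)
{
    /* Hypothetical per-CPU utilization samples, 0.0 (idle) .. 1.0 (busy). */
    double util[NCPUS] = { 0.90, 0.85, 0.02, 0.01, 0.00, 0.03, 0.88, 0.01 };

    for (int cpu = 0; cpu < NCPUS; cpu++) {
        if (util[cpu] < IDLE_THRESHOLD)
            printf("cpu%d: idle, candidate for a deeper power state\n", cpu);
        else
            printf("cpu%d: busy (%.0f%%), keep at full performance\n",
                   cpu, util[cpu] * 100.0);
    }
    return 0;
}

A single system-wide idle check, like the daemon sketched earlier, would miss these per-CPU opportunities entirely.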
Of course, CPUs present just one example of a power-manageable system
resource: portions of physical memory
may (soon) be power manageable, with
the same being true for storage and