How It Works
The main stages of design for approximate computing are (1) identifying those elements of an application that can tolerate error, (2) calculating the extent of error that can be tolerated, (3) discovering performance or energy savings, and (4) executing the instruction.
1. Where can errors be tolerated?
First, the kernels where error can be tolerated need to be identified. Identifying all the combinations, along with their computational accuracy and potential energy savings, is hugely time-consuming.
The early research of Sasa Misailovic and his colleagues at the Massachusetts Institute of Technology
(MIT) Computer Science and Artificial
Intelligence Laboratory focused on
enabling programs to perform less
work and therefore trade accuracy
for faster, or more energy-efficient,
execution. The team delivered compiler transformations that, for example, skipped regions of code that “are
time-consuming, but do not substantially affect the accuracy of the program’s result,” says Misailovic.
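One such transformation is loop perforation, a technique from this line of work that simply skips a fraction of a loop's iterations. A minimal sketch of the idea in Python (the function names and the averaging example are illustrative, not taken from the MIT tools):

```python
def mean_exact(samples):
    """Baseline: average every sample."""
    return sum(samples) / len(samples)

def mean_perforated(samples, skip=4):
    """Loop perforation: visit only every `skip`-th sample,
    doing roughly 1/skip of the work at some cost in accuracy."""
    kept = samples[::skip]
    return sum(kept) / len(kept)

samples = [float(i % 100) for i in range(10_000)]
exact = mean_exact(samples)                 # 49.5
approx = mean_perforated(samples, skip=4)   # 48.0, close to exact
```

In practice the perforation factor is chosen empirically, by measuring how much the output degrades as more iterations are skipped.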
At the 2013 Object-Oriented Programming, Systems, Languages and
Applications (OOPSLA) conference,
the MIT team unveiled Rely (http://mcarbin.github.io/rely/), a language developed to indicate which instructions can be processed by less-reliable hardware, to a specified probability.
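Rely's actual syntax and analysis are more sophisticated, but the core idea of marking which operations may run on hardware that is only probabilistically correct can be caricatured in Python. The `unreliable` wrapper below is hypothetical and simulates such hardware at runtime, whereas Rely itself reasons about reliability statically:

```python
import random

def unreliable(value, p=0.99, fallback=0):
    """Simulate an operation on less-reliable hardware: return the
    correct value with probability p and a corrupted one otherwise.
    (Hypothetical helper; Rely checks reliability statically instead.)"""
    return value if random.random() < p else fallback

def scale_pixel(pixel, gain):
    # Error-tolerant: an occasional wrong pixel is acceptable.
    return unreliable(pixel * gain, p=0.99)

def update_balance(balance, amount):
    # Critical: must always run on fully reliable hardware.
    return balance + amount
```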
2. What is the tolerable error?
Introducing deliberate errors goes
against the grain, but a certain degree
of inaccuracy can be tolerated by the
user in certain aspects of programming. One example is video rendering,
where the eye and brain fill in any missing pixels. Other applications where a
certain percentage of error can be tolerated without affecting the quality of
the result as far as the end user is concerned include:
˲ wearable electronics
˲ voice recognition
˲ scene reconstruction
˲ Web search
˲ fraud detection
˲ financial and data analysis
˲ process monitoring
˲ tracking tags and GPS
˲ audio, image, and video processing and compression (as in Xbox
and PS3 videogaming).
The common factor here is that
100% accuracy is not needed, so there
is no need to waste energy computing
it. But how much error is too much?
At last year’s OOPSLA conference,
the same MIT team presented a system called Chisel (http://groups.csail.mit.edu/pac/chisel/), a simulation
program that identifies elements of
programming that can tolerate error,
extending Rely’s analysis approach.
Chisel can calculate how much error
can be tolerated, evaluating the percentage of improperly rendered pixels
at which the user will notice an error.
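Chisel's analysis is static and far more involved, but the acceptability check it targets can be expressed simply: compare an approximate render against an exact one and bound the fraction of improperly rendered pixels. A sketch under that assumption (the function names and the 2% budget are illustrative):

```python
def pixel_error_rate(exact, approx, tol=0):
    """Fraction of pixels whose approximate value differs from
    the exact render by more than `tol`."""
    assert len(exact) == len(approx)
    bad = sum(1 for e, a in zip(exact, approx) if abs(e - a) > tol)
    return bad / len(exact)

def within_budget(exact, approx, max_error_rate=0.02, tol=0):
    """Accept the approximation if no more than `max_error_rate`
    of the pixels are improperly rendered."""
    return pixel_error_rate(exact, approx, tol) <= max_error_rate

exact_render = [128] * 1000
approx_render = [128] * 990 + [0] * 10   # 1% of pixels wrong
```

With a 2% budget this 1%-wrong frame would be accepted; tightening the budget to 0.5% would reject it.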
3. How can energy be saved?
One way to save energy is to scale down the supply voltage: reducing the voltage lowers the energy needed to switch a transistor from one state to another, but it increases the probability of a spurious switch.
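A toy calculation shows why voltage scaling is attractive for error-tolerant code. It assumes the usual first-order model in which dynamic switching energy grows with the square of the supply voltage; real devices are more complicated:

```python
def switching_energy(v, c=1.0):
    """First-order model: dynamic energy per switch is about C * V^2."""
    return c * v * v

nominal, scaled = 1.0, 0.8   # supply voltages in volts (illustrative)
saving = 1 - switching_energy(scaled) / switching_energy(nominal)
# saving is 0.36: a 20% voltage drop cuts switching energy by 36%,
# at the price of shrinking noise margins and more spurious switches.
```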
What Is Approximate Computing?
Historically, computer platform design has been a quest for ever-increasing accuracy, following the principle
that every digital computation must be
executed correctly. As Hadi Esmaeilzadeh and colleagues put it in their
paper “General-purpose code acceleration with limited-precision analog
computation,” “[c]onventional techniques in energy-efficient computing
navigate a design space defined by
the two dimensions of performance
and energy, and traditionally trade
one for the other. General-purpose approximate computing explores a third
dimension—error—and trades the
accuracy of computation for gains in
both energy and performance.” They
use machine learning-based transformations to accelerate approximation-tolerant programs.
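The pattern behind such learned accelerators is to train a cheap surrogate model offline, then answer queries with it instead of the exact function. The sketch below stands in an ordinary least-squares line for the paper's limited-precision neural hardware, purely to show the shape of the idea; everything here is illustrative:

```python
def train_surrogate(f, xs):
    """Fit a cheap linear model a*x + b to an expensive function f
    by ordinary least squares over training inputs xs.
    (A stand-in for the trained neural model used in that work.)"""
    n = len(xs)
    ys = [f(x) for x in xs]
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

expensive = lambda x: x * x            # pretend this is costly
xs = [i / 10 for i in range(11)]
cheap = train_surrogate(expensive, xs)
max_err = max(abs(cheap(x) - expensive(x)) for x in xs)
# max_err stays small on [0, 1]; an error-tolerant caller uses `cheap`.
```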
V.K. Chippa and colleagues in Purdue’s Integrated Systems Laboratory
are exploring scalable effort design to
achieve improved efficiency (power or
performance) at the algorithm, architecture, and circuit levels while maintaining an acceptable (and frequently,
nearly identical) quality of the overall
result. Chippa et al. (2013) acknowledged that “applications are often intrinsically resilient to a large fraction of
their computations being executed in
an imprecise or approximate manner,”
described as approximate computing,
or “good-enough” computing, with the
aim of increasing efficiency/reducing
energy consumption. The idea is that
error-tolerant processes can be run on
less-reliable hardware that operates
faster, uses less energy, and/or is less
likely to burn up.
Approximation is not a new idea; it has been used in areas such as lossy compression and numeric computation. In fact, John von Neumann wrote a paper on it in 1956 ("Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," in Automata Studies, C.E. Shannon and J. McCarthy, Eds., Princeton University Press).
According to a Computing Community Consortium blog post on the U.S. Defense Advanced Research Projects Agency (DARPA) 2014 Information Science and Technology (ISAT) Targeted Approximate Computing workshop, a number of researchers are working in this area.
[Figure: The cost–accuracy trade-off. Relaxed programs can dynamically and automatically adapt, admitting executions at multiple points in the trade-off space.]