Chisel computes how much energy
can be saved. By the simple expediency
of allowing errors in processing, a computer’s power consumption may be reduced by 9%–19%, according to the MIT
research simulations. The amounts given by other researchers vary, but there
are significant savings to be had.
The mechanism in Rely is an
operator indicating an instruction
can be executed on unreliable hardware in order to save energy. Previously, that operator (written as a period) had to be inserted manually; Chisel inserts Rely’s operators
automatically, while also guaranteeing maximum energy savings.
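The idea behind Rely's operator can be illustrated with a small simulation. This is a sketch only: Python stands in for Rely's own syntax (which marks unreliable operations with a period, e.g. `+.`), and the function names and failure probabilities below are invented for demonstration, not drawn from the MIT tools.

```python
import random

def unreliable_add(a, b, p_correct=0.999, rng=random):
    """Model of an add executed on unreliable hardware: returns the
    exact sum with probability p_correct, otherwise a corrupted value.
    (Illustrative stand-in for an operation Rely would mark with '.')"""
    if rng.random() < p_correct:
        return a + b
    return (a + b) ^ 1  # flip the low bit to model a soft error

def meets_spec(p_correct_per_op, n_ops, spec):
    """Chisel-style reasoning, simplified: the reliability of a chain of
    unreliable operations is the product of per-op reliabilities; check
    it against a specification such as 0.99 (correct >= 99% of runs)."""
    return p_correct_per_op ** n_ops >= spec

random.seed(0)
total = 0
for i in range(100):
    total = unreliable_add(total, i)  # mostly exact, occasionally off

print(total, meets_spec(0.999, 100, 0.90))
```

The `meets_spec` check captures why automatic placement matters: a per-operation reliability of 0.999 still compounds to roughly 0.905 over 100 chained operations, so a tool must account for how many unreliable operations a result flows through.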
The Developer View
The counterintuitiveness of tolerating
error is a common concern among developers, according to Ceze and his colleagues at UW.
The MIT team is developing rigorous
approaches to help developers understand and control the approximation
technique, in the process changing
the perspective of many who were
initially reluctant. “Some of these developers are excited by the promise of
potential performance improvements
and energy savings,” says Misailovic.
“Others look to our techniques for a
way to cope with future trends in the
design of hardware circuits, which may
require hardware to be less reliable.
And still others see our techniques as
providing novel ways to deal more effectively with software errors, which remain ubiquitous throughout our entire
computing infrastructure.”
The Rely and Chisel systems and others (such as Accept, which applies a variety of approximation techniques, including hardware acceleration; Flikker, which uses critical data partitioning to save refresh power; and Precimonious, which assists developers in tuning the precision of floating-point programs) have created the possibility of off-the-shelf programming that can identify where errors can be tolerated, indicate the degree of inaccuracy of computation that can be tolerated, calculate the energy that can be saved, and insert the operators that control the computations. The savings in energy can be significant, with little noticeable loss in quality.
Research into approximate computing is still in its infancy. Error-tolerant applications combined with
energy-efficient programming would
seem to be the way forward, according to Baek and Chilimbi in an article about Green, their framework for energy-conscious programming using controlled approximation.
The ongoing development of tools
and frameworks continues to simplify
the practical implementation of approximate computing in a number
of ways; it would seem
“good-enough” computing may be
here for good.
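Baek and Chilimbi's Green framework pairs approximate versions of a computation with a calibration step that bounds the resulting quality loss. A minimal sketch of that idea follows; the function, the stride values, and the error metric are invented for illustration and do not reflect Green's actual API.

```python
# Green-style "controlled approximation": run a cheap approximate
# version of a function, choosing its aggressiveness by calibrating
# measured quality loss against a target bound.

def exact_mean(xs):
    return sum(xs) / len(xs)

def approx_mean(xs, stride):
    """Loop perforation: look at every stride-th element only."""
    sample = xs[::stride]
    return sum(sample) / len(sample)

def calibrate(xs, max_rel_err):
    """Pick the most aggressive stride whose relative error on the
    calibration input stays within the quality-of-service bound."""
    truth = exact_mean(xs)
    best = 1
    for stride in (2, 4, 8, 16):
        err = abs(approx_mean(xs, stride) - truth) / abs(truth)
        if err <= max_rel_err:
            best = stride
    return best

data = [float(i) for i in range(10_000)]
stride = calibrate(data, max_rel_err=0.01)
print(stride, approx_mean(data, stride))
```

A tighter quality target yields a more conservative setting: `calibrate(data, 0.0002)` selects stride 2, while the 1% bound above allows stride 16, doing one-sixteenth of the work.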
Baek, W., and Chilimbi, T.M. (2010).
Green: A framework for supporting energy-conscious programming using controlled
approximation. Proceedings of the PLDI
(pp. 198–209). http://bit.ly/1vG9NOB
Chippa, V.K., Venkataramani, S.,
Chakradhar, S.T., Roy, K., and Raghunathan, A.
Approximate computing: An integrated
hardware approach. Asilomar Conference
on Signals, Systems and Computers,
Pacific Grove, CA (pp. 111–117).
Washington, DC: IEEE Computer Society.
Drobnis, A. (June 23, 2014).
ISAT/DARPA Workshop Targeted
Approximate Computing. http://bit.ly/1xZct9a
Khan, A.I., Chatterjee, K., Wang, B.,
Drapcho, S., You, L., Serrao, C., Bakaul, S.R.,
Ramesh, R., and Salahuddin, S. (2014).
Negative capacitance in a ferroelectric
capacitor. Nature Materials,
December 15, 2014.
Moreau, T., Wyse, M., Nelson, J., Sampson, A.,
Esmaeilzadeh, H., Ceze, L., and Oskin, M. (2015).
SNNAP: Approximate computing
on programmable SoCs via neural
acceleration. 2015 International
Symposium on High-Performance Computer
Architecture (HPCA).
St. Amant, R., Yazdanbakhsh, A., Park, J.,
Thwaites, B., Esmaeilzadeh, H., Hassibi, A.,
Ceze, L., and Burger, D. (2014).
General-purpose code acceleration with
limited-precision analog computation.
Proceedings of the 41st International
Symposium on Computer Architecture.
Logan Kugler is a freelance technology writer based in
Tampa, FL. He has written for over 60 major publications.
© 2015 ACM 0001-0782/15/05 $15.00
Members of the Purdue group
proposed the concept of Dynamic Effort
Scaling, leveraging error resilience to increase efficiency. Recognition and mining (RM) are emerging computer processing capabilities anticipated on future
multi-core and many-core computing
platforms. To close the gap between the
high computational needs of RM applications and the capabilities of the platforms, the Purdue group revealed at the
International Symposium on Low Power
Electronics and Design (ISLPED ’14) its
proposed energy-efficient Stochastic Recognition and Mining (StoRM) processor,
which the group said will lead to energy
savings with minimal quality loss.
Luis Ceze and his colleagues at the
University of Washington (UW) have
been working on approximate computing for more than five years, using
a more coarsely grained approach to
approximation than other researchers. One unique aspect of their work
is hardware-software co-design for approximate computing. Control in modern processors accounts for a significant fraction of hardware resources (at
least 50%), which fundamentally limits
approximation savings. The UW team
found that using limited-precision
analog circuits for code acceleration,
through a neural approach, is both feasible and beneficial for approximation-tolerant applications. The UW group’s
hardware model, SNNAP (systolic
neural network accelerator in programmable logic), assesses the effect of approximation on output quality. It works with the
neural network to accelerate approximate code, removing the need to fetch
and decode individual instructions.
Says Ceze, “Applications that we do well
in the digital neural processing unit on
FPGAs [field-programmable gate arrays]
(that is, the SNNAP work) are financial
analysis apps, robotics control systems,
and computer vision. The analog version (http://bit.ly/1zLkric) also shows
great promise in game physics engines
and machine learning applications.”
In January, the UW group published
in Communications on the technique of
using neural networks as general-purpose approximate accelerators. Such
a system chooses a block of approximate code and learns how it behaves
using a neural net; then it invokes the
neural net instead of executing
the original code.
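The shape of that workflow can be sketched in a few lines: sample a hot, error-tolerant block of code, learn its input/output behavior offline, then call the learned model in place of the block. To keep the sketch dependency-free, a least-squares polynomial fit stands in for the neural network; the "hot block" and every name below are invented for illustration, not the UW group's actual code.

```python
import math

def hot_block(x):
    """The original (expensive, error-tolerant) computation."""
    return math.sin(x)

def fit_model(f, lo, hi, degree=5, samples=200):
    """'Training': least-squares polynomial fit to f on [lo, hi],
    solved via the normal equations with Gaussian elimination."""
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    n = degree + 1
    # Normal equations A c = b for the monomial basis 1, x, ..., x^degree.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return lambda x: sum(c * x ** i for i, c in enumerate(coeffs))

# "Compile": learn the block's behavior once, offline.
model = fit_model(hot_block, 0.0, math.pi)

# "Deploy": invoke the learned stand-in instead of the original code.
worst = max(abs(model(x) - hot_block(x))
            for x in (math.pi * i / 1000 for i in range(1001)))
print(f"worst-case error of learned stand-in: {worst:.5f}")
```

The tradeoff the article describes shows up directly: the stand-in evaluates a fixed polynomial (no per-instruction fetch and decode of the original block), at the cost of a small, bounded approximation error over the trained input range.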