However, there is limited evidence
that neural architectures can be more
powerful on generic applications
than the general-purpose architectures used today.33
The identification of neuromorphic technologies as a potential driver
beyond Moore’s Law39 forces the question of whether neural computing is
truly a paradigm that will permit exponential scaling going forward, or
whether it instead represents a “one-off” gain in some dimension, such
as power efficiency. While potentially
impactful, such a value proposition
would not represent a long-lasting
scaling capability. While to some the
distinction between these two futures may appear semantic, there is
a considerable difference. If neural
architectures indeed represent only a
one-time gain to accelerate a handful
of algorithms, then it perhaps merits
some consideration by specialized
communities. However, if neural computation were to actually represent a
scalable technology, it would justify a
significant research investment from
the trillion-dollar computing industry.
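The difference between these two futures can be made concrete with a toy compounding model. The numbers below (a one-time 10x efficiency gain, a doubling every two years) are purely illustrative assumptions, not figures from this article:

```python
# Toy model: a one-off efficiency gain versus a sustained, compounding
# scaling trend. All numbers are illustrative assumptions.

def one_off(years, gain=10.0):
    """A one-time 10x improvement: a fixed multiplier, no further scaling."""
    return gain

def compounding(years, doubling_period=2.0):
    """A Moore's-Law-like trend: capability doubles every two years."""
    return 2.0 ** (years / doubling_period)

# First whole year at which the compounding path overtakes the one-off gain.
crossover = next(y for y in range(1, 50) if compounding(y) > one_off(y))
print(crossover)  # prints 7: 2^(7/2) ~ 11.3 exceeds the fixed 10x gain
```

Even a large one-time gain is overtaken within a few doubling periods, which is why the distinction between a one-off efficiency win and a genuinely scalable paradigm matters so much.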
This article posits that new com-
putational paradigms that leverage
emerging neuroscience knowledge
represent a distinctly new foundation
for scaling computing technology go-
ing forward. Instead of relying on con-
tinual advances in miniaturization
of devices, neural computing is posi-
tioned to benefit from long-lasting in-
tellectual advances due to our parallel
gain of knowledge of the brain’s func-
tion (Figure 1). In effect, because the
materials science and chemistry of de-
vices have been extensively optimized,
we may achieve greater impact by
looking to the brain for neural inspira-
tion and hopefully achieve a continual
advancement of our neural comput-
ing capabilities through algorithmic
and architectural advances. Arguably,
the recent successes of deep artificial
neural networks (ANNs) on artificial
intelligence applications are a compel-
ling first step of this process, but the
perspective offered here will contend
that more extensive incorporation
of insights from the brain will only con-
tinue to improve our computational
capabilities.
It is unlikely, however, that silicon CMOS alone will be capable of providing a long-
term scaling future. While non-silicon
approaches such as carbon nanotubes
or superconductivity may yield some
benefits, these approaches also face
theoretical limits that are only slightly
better than the limits CMOS is facing.31
Somewhat more controversial, how-
ever, is the observation that require-
ments for computing are changing.33,39
In some respects, the current limits
facing computing lie beyond what the
typical consumer outside of the high-
performance computing community
will ever require for floating point
math. Data-centric computations such
as graph analytics, machine learning,
and searching large databases are in-
creasingly pushing the bounds of our
systems and are more relevant for a
computing industry built around mo-
bile devices and the Internet. As a re-
sult, it is reasonable to consider that the
ideal computer is not one that simply
delivers more FLOPS, but rather one that is
capable of providing low-power com-
putation more appropriate for a world
flush with “big data.” While speed re-
mains an important driver, other con-
siderations—such as algorithmic capa-
bilities—are increasingly critical.
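The shift from raw FLOPS toward data-centric workloads can be sketched with a back-of-the-envelope arithmetic-intensity comparison. The energy figures below are assumed, illustrative values (not measurements), chosen only to show that data movement, not arithmetic, dominates memory-bound workloads like graph analytics:

```python
# Rough arithmetic-intensity comparison: dense matrix multiply
# (compute-bound) versus summing over a sparse graph's edges
# (memory-bound). Energy figures are illustrative assumptions.

FLOP_ENERGY_PJ = 1.0    # assumed energy per floating-point op (picojoules)
DRAM_ENERGY_PJ = 100.0  # assumed energy per word fetched from DRAM

def matmul_profile(n):
    """n x n dense matmul: ~2n^3 FLOPs over ~3n^2 words of data."""
    return 2 * n**3, 3 * n**2

def graph_sum_profile(edges):
    """One addition per edge, but each edge forces a neighbor fetch."""
    return edges, 2 * edges  # read edge endpoint + neighbor value

def energy_breakdown(flops, words):
    """Split total energy into compute and memory-movement components."""
    return flops * FLOP_ENERGY_PJ, words * DRAM_ENERGY_PJ

for name, (flops, words) in [("matmul(1024)", matmul_profile(1024)),
                             ("graph_sum(2M edges)", graph_sum_profile(2**21))]:
    compute, memory = energy_breakdown(flops, words)
    print(f"{name}: intensity={flops/words:.1f} FLOPs/word, "
          f"memory energy share={memory/(compute+memory):.1%}")
```

Under these assumptions the dense kernel performs hundreds of FLOPs per word moved, while the graph workload performs less than one, so its energy budget is almost entirely data movement; a faster floating-point unit barely helps it.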
For these reasons, neural computing has begun to gain increased attention as a post-Moore’s Law technology. In many respects, neural
computing is an unusual candidate
to help extend Moore’s Law. Neural
computing is effectively an algorithmic and architectural change from
classic numerical algorithms on von
Neumann architectures, as opposed
to exploiting a novel material to supplant silicon. Further, unlike quantum computation, which leverages
different physics to perform computation, neural computing likely falls
within the theoretical bounds of classical computing. Whereas
quantum computation can point
to exponential benefits on certain
tasks such as Shor’s quantum algorithm for factoring numbers,34 neural
computing’s most likely
path to impact is through polynomial
trade-offs between energy, space, and
time. Such benefits can be explicitly formalized and are potentially quite impactful for certain applications.1
Figure 1. Moore’s Law has helped initiate a potential positive feedback loop between neural
data collection and improved computation.
Moore’s Law has enabled the miniaturization of sensors and improved
the analytics necessary to improve neural data collection. This increased
neural data has the potential to dramatically improve our ability to
extract knowledge from the brain and incorporate deeper brain-derived
capabilities into new algorithms and architectures, in turn furthering the
advances of computing technology.
[Figure 1 diagram: two panels plotted against time—“Neurons” and “Computing Capabilities”—linked by Improved Sensors and Data Analytics, New Neural Theoretical Frameworks, Novel Neural Algorithms, and Computing.]