driven efforts are focused on using technologies such as memristors to emulate synapses. To some extent, these approaches seek to create general-purpose neural systems in anticipation of eventual algorithm use, but they have had a mixed reception due to their lack of clear applications and the current success of GPUs and analytics-specific accelerators like the TPU.
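To make the synapse-emulation idea concrete, the sketch below illustrates the operation a memristive crossbar performs physically: voltages applied to the rows and currents summed along the columns yield a vector-matrix product, with each device's conductance acting as a synaptic weight, so the multiply-accumulate at the heart of neural network inference is carried out in the analog domain. This is a minimal NumPy illustration only; the conductance range and array size are hypothetical, and it models no particular device.

```python
import numpy as np

# Hypothetical crossbar: 4 input rows (presynaptic lines) by
# 3 output columns (postsynaptic lines).
rng = np.random.default_rng(0)

# Each memristor's conductance G[i, j] (siemens) plays the role of the
# synaptic weight between input i and output j.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Input activations encoded as row voltages (volts).
v_in = np.array([0.1, 0.0, 0.2, 0.05])

# Ohm's law per device plus Kirchhoff's current law per column: the
# column currents form a vector-matrix product that the physics of the
# array computes in one step, rather than by explicit multiply-adds.
i_out = v_in @ G

print(i_out)  # one analog output current per column
```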
It is reasonable to expect that new generations of neural algorithms can drive neuromorphic architectures going forward, but developing new neural algorithms in parallel with new architecture paradigms is a continual challenge. Similarly, the broader machine learning community will likely accept more modern neuroscience concepts only when brain-derived approaches demonstrate an advantage on problems that appear insurmountable with conventional approaches (perhaps once the implications of the end of Moore’s Law reach that community); however, once such an opportunity is realized, the deep learning community is well positioned to take advantage of it.
One implication of the general disconnect between these very different fields is that few researchers are sufficiently well versed across all of these critical disciplines to avoid the sometimes-detrimental misinterpretation of knowledge and uncertainty from one field to another. Questions such as “Are spikes necessary?” have quite different meanings to a theoretical neuroscientist and to a deep learning developer. Similarly, few neuroscientists consider the energy implications of the complex ionic Hodgkin-Huxley dynamics underlying action potentials; yet many neuromorphic computing studies have leveraged those dynamics in their pursuit of energy-efficient computing.
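To make concrete what these dynamics entail computationally, the following is a minimal, purely illustrative sketch of the standard Hodgkin-Huxley model (textbook squid-axon parameters): even a single neuron involves four coupled nonlinear differential equations whose rate functions require several exponentials at every small time step. The forward-Euler integrator, injected current, and simulation length used here are arbitrary choices made for brevity.

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon parameters (standard textbook values,
# in mV, ms, uF/cm^2, and mS/cm^2).
C_m = 1.0                            # membrane capacitance
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials

# Voltage-dependent rate functions for the gating variables m, h, n.
a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

# Initial state near rest, a constant injected current (uA/cm^2, chosen
# arbitrarily to elicit spiking), and the integration time step (ms).
V, m, h, n = -65.0, 0.05, 0.6, 0.32
I_inj, dt = 10.0, 0.01

spikes, above = 0, False
for _ in range(int(50.0 / dt)):      # simulate 50 ms
    # Ionic currents through the sodium, potassium, and leak channels.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)

    # Forward-Euler updates of the four coupled ODEs; the gating rates
    # alone require six exponential evaluations per step.
    V += dt * (I_inj - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)

    # Crude spike detection: upward crossings of 0 mV.
    if V > 0.0 and not above:
        spikes += 1
    above = V > 0.0

print(f"spikes in 50 ms: {spikes}")
```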
Ultimately, these mismatches demand that new strategies for bringing 21st-century neuroscience expertise into computing be explored. New generations of scientists trained in interdisciplinary programs, such as those spanning machine learning and computational neuroscience, may offer a long-term solution; but in the interim, it is critical that researchers on all sides be open to the considerable progress made in these complex, well-established domains in which they are not trained.
Acknowledgments
The author thanks Kris Carlson, Erik
Debenedictis, Felix Wang, and Cookie Santamaria for critical comments
and discussions regarding the manuscript. The author acknowledges financial support from the DOE Advanced
Simulation and Computing program
and Sandia National Laboratories’
Laboratory Directed Research and Development Program. Sandia National
Laboratories is a multiprogram laboratory managed and operated by National
Technology and Engineering Solutions
of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc.,
for the U.S. Department of Energy’s National Nuclear Security Administration
under contract DE-NA0003525.
This article describes objective
technical results and analysis. Any
subjective views or opinions that
might be expressed do not necessarily
represent the views of the U.S. Department of Energy or the U.S. Government.
James B. Aimone (jbaimon@sandia.gov) is a Principal Member of Technical Staff at the Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA.
Copyright held by author/owner.