Place cell behavior has long been known to be temporally modulated, and the increasing characterization of “time cells” suggests that a more dynamical view of hippocampal memory is likely a better description of hippocampal function, and one potentially more amenable to inspiring new algorithms. Of course, developing neural-inspired dynamical memory and control algorithms has the potential to greatly advance existing techniques, but the lasting benefit of neural computing will likely arise when neuroscience provides the capability to achieve higher-level cognition in algorithms.
5. The unknown future: Cognitive inference algorithms, self-organizing algorithms, and beyond. Not coincidentally, the description of these algorithms has progressed from the back of the brain toward the front, with an initial emphasis on early sensory cortices and an eventual move to higher-level regions like motor cortex and the hippocampus.
While neural machine learning is taking this back-to-front trajectory, these areas have all historically received reasonably strong neuroscience attention; the hippocampus is arguably as well studied as any cortical region. The “front” of the brain, in contrast, has continually posed a significant challenge to neuroscientists. Areas such as the prefrontal cortex and its affiliated subcortical structures, like the striatum, remain difficult to study at a systems neuroscience level, in large part due to their distance from the sensory periphery. As a result, behavioral studies of cognitive functions such as decision making are typically highly controlled to eliminate any early cortical considerations. Much of what we know about these regions originates from clinical neuroscience studies, particularly insights from patients with localized lesions and neurological disorders, such as Huntington’s and Parkinson’s diseases.
Consequently, it is difficult to envision what algorithms inspired by the prefrontal cortex will look like. One potential direction lies in recent algorithms based on deep reinforcement learning, such as the deep Q-learning behind DeepMind’s game-playing agents.
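To make that direction concrete, here is a minimal tabular Q-learning sketch on an invented toy “chain” world; deep Q-learning replaces this table with a neural network trained on replayed experience, and all parameters here are illustrative.

```python
import numpy as np

# Tabular Q-learning sketch (illustrative only; deep Q-learning swaps the
# table for a neural network trained on replayed experience).
rng = np.random.default_rng(0)

n_states, n_actions = 5, 2           # toy chain world: actions move left/right
Q = np.zeros((n_states, n_actions))  # Q[s, a]: estimated return of action a in state s
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(s, a):
    """Hypothetical environment: reward only at the right end of the chain."""
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s_next, (1.0 if s_next == n_states - 1 else 0.0)

for episode in range(500):
    s = 0
    for t in range(20):
        # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Temporal-difference update toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))  # right-moving actions should come to dominate
```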
While the extent to which the brain is perfectly explained by this Bayesian perspective is continually debated, it is quite clear that the brain uses higher-level information, whether from memory, context, or other sensory modalities, to guide perception. If you expect to see a cloud shaped like a dog, you are more likely to see one. The application of these concepts to machine learning has been more limited, however. There are cases of non-neural computer vision algorithms based on Bayesian inference principles,22 though it has been challenging to develop such models that can be trained as easily as deep learning networks. Alternatively, other algorithms, such as Recursive Cortical Networks (RCNs),13 Hierarchical Temporal Memory (HTM),2 and predictive networks (PredNet),24 have been developed that also leverage these top-down inputs to drive network function. These approaches are not necessarily explicitly Bayesian in all aspects, but they do indicate that advances in this area are occurring.
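As a minimal illustration of how a top-down prior can guide perception, the following sketch applies Bayes’ rule to an ambiguous percept; the hypotheses and probabilities are invented for illustration.

```python
import numpy as np

# Toy Bayesian perception sketch: a prior ("I expect a dog-shaped cloud")
# combines with ambiguous bottom-up evidence to form the posterior percept.
hypotheses = ["dog", "cat", "blob"]
prior = np.array([0.6, 0.2, 0.2])       # top-down expectation favors "dog"
likelihood = np.array([0.3, 0.3, 0.4])  # nearly uninformative sensory evidence

posterior = prior * likelihood
posterior /= posterior.sum()            # Bayes' rule, normalized

for h, p in zip(hypotheses, posterior):
    print(f"P({h} | evidence) = {p:.2f}")  # the prior tips ambiguous input toward "dog"
```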
Ultimately, however, this area will be enabled by increased knowledge about how different brain areas interact with one another. This has long been a challenge for neuroscientists, as most experimental physiology work has been relatively local and anatomical tracing of connectivity has historically been sparse. This is changing as more sophisticated physiology and connectomics techniques are developed. For example, the recently proposed technique to “bar-code” neurons uniquely could enable the acquisition of more complete, global graphs of the brain.20
Of course, the concept of Bayesian information processing of sensory inputs, like the two algorithmic frameworks described previously, is skewed heavily toward conventional machine learning tasks like classification. However, as our knowledge of the brain becomes more extensive, we can begin to take algorithmic inspiration from beyond just sensory systems. Most notable will be dynamics and memory.
4. Dynamical memory and control algorithms. Biological neural circuits have both greater temporal and architectural complexity than classic ANNs. Beyond being based on spikes and having feedback, it is important to consider that biological neurons are not easily modeled as discrete objects like transistors; rather, they are fully dynamical systems exhibiting complex behavior over many state variables. While considering biological neural circuits as complex assemblies of many dynamical neurons, whose interactions themselves exhibit complex dynamics, may seem intractable as an inspiration for computing, there is increasing evidence that computational primitives can be extracted from such neural frameworks, particularly when anatomical constraints are considered. Increasingly, algorithms like liquid state machines (LSMs)25 have been introduced that loosely emulate cortical dynamics by balancing activity in neural circuits that exhibit chaotic (or near-chaotic) activity. Alternatively, by treating neural circuits as programmable dynamical systems, approaches like the neural engineering framework (NEF) have shown that such circuits can be programmed to perform complex dynamical functions.10
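As a rough sketch of the reservoir-computing idea behind LSMs, the following builds an echo-state-style network, a rate-based cousin of the spiking LSM; the sizes and scalings are invented for illustration, and only a linear readout is trained on the fixed random dynamics.

```python
import numpy as np

# Echo-state-style reservoir sketch (rate-based cousin of the spiking LSM).
rng = np.random.default_rng(1)

n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1 keeps dynamics stable

def run_reservoir(u_seq):
    """Drive the fixed random recurrent network; collect its state trajectory."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Train only a linear readout (ridge regression) on the reservoir's rich
# dynamics, here to reproduce the input delayed by five steps.
u = rng.uniform(-1, 1, 1000)
X = run_reservoir(u)
y = np.roll(u, 5)
X_t, y_t = X[100:], y[100:]  # discard the initial transient
w_out = np.linalg.solve(X_t.T @ X_t + 1e-6 * np.eye(n_res), X_t.T @ y_t)
print("training MSE:", np.mean((X_t @ w_out - y_t) ** 2))
```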
While these algorithms have shown that dynamics can have a place in neural computation, the brain’s real impact here has yet to be realized. Neuroscientists increasingly see regions like the motor cortex, cerebellum, and hippocampus as fundamentally dynamical in nature: what matters is less any particular neuron’s average firing rate than the trajectory of the population’s activity as a whole.
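A small sketch of this population-trajectory view: below, many neurons’ rates are noisy mixtures of a low-dimensional latent trajectory, which PCA can approximately recover from the population alone; all numbers are invented for illustration.

```python
import numpy as np

# Population activity as noisy projections of a 2-D latent trajectory.
rng = np.random.default_rng(2)

T, n_neurons = 300, 50
t = np.linspace(0, 4 * np.pi, T)
latent = np.stack([np.cos(t), np.sin(t)], axis=1)  # circular latent trajectory
mixing = rng.normal(size=(2, n_neurons))           # each neuron mixes the latents
rates = latent @ mixing + 0.3 * rng.normal(size=(T, n_neurons))

# PCA via SVD of the mean-centered population activity.
centered = rates - rates.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ vt[:2].T  # the population state over time, in 2-D

explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"top-2 PCs explain {explained:.0%} of variance; trajectory shape {trajectory.shape}")
```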
The hippocampus makes a particularly interesting case to consider here. Early models of the hippocampus were similar to Hopfield networks: memories were represented as auto-associative attractors that could reconstruct a memory from a partial input. These ideas were consistent with early place cell studies, wherein hippocampal neurons would fire in specific locations and nowhere else. While a simple idea to describe, it is notable that for roughly forty years this idea has failed to inspire any new computational capabilities. However, it is increasingly appreciated that the hippocampus is best considered from a dynamical view.
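For reference, the classical auto-associative account described above can be sketched as a small Hopfield-style network that cleans up a corrupted cue; this is a textbook simplification, not one of the hippocampal models themselves.

```python
import numpy as np

# Tiny Hopfield-style auto-associative memory: stored patterns become
# attractors that reconstruct a full memory from a partial or noisy cue.
rng = np.random.default_rng(3)

patterns = rng.choice([-1, 1], size=(3, 100))      # three stored "memories"
W = sum(np.outer(p, p) for p in patterns) / 100.0  # Hebbian outer-product rule
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iterate the network until it settles into a stored attractor."""
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1  # break ties deterministically
    return x

# Corrupt a quarter of one memory's bits, then let the attractor clean it up.
cue = patterns[0].copy()
flipped = rng.choice(100, size=25, replace=False)
cue[flipped] *= -1
print("overlap with stored memory:", recall(cue) @ patterns[0] / 100.0)
```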