Science | DOI: 10.1145/3323685  Gary Anthes

Lifelong Learning in Artificial Neural Networks
New methods enable systems to rapidly, continuously adapt.
These applications employ large artificial neural networks, in which nodes are linked by millions of weighted interconnections. They mimic the structure and workings of living brains, except in one key respect: they don’t learn over time, as animals do. Once designed, programmed, and trained by developers, they do not adapt to new data or new tasks without being retrained, often a very time-consuming task.
Real-time adaptability by AI systems has become a hot topic in research. For example, computer scientists at Uber Technologies last year published a paper that describes a method for introducing “plasticity” in neural networks. In several test applications, including image recognition and maze exploration, the researchers showed that previously trained neural networks could adapt to new situations quickly and efficiently without undergoing additional training.
“The usual method with neural networks is to train them slowly, with many examples; in the millions or hundreds of millions,” says Thomas Miconi, the lead author of the Uber paper and a computational neuroscientist at Uber. “But that’s not the way we work. We learn fast, often from a single exposure to a new situation or stimulus. With synaptic plasticity, the connections in our brains change automatically, allowing us to form memories very quickly.”
For more than 60 years, neural networks have been built from interconnected nodes whose pair-wise strength of connection is determined by weights, generally fixed by training with labeled examples. This training is most often done via a method called backpropagation, in which the system calculates an error at the synaptic output and distributes it backward throughout the network’s layers. Most deep learning systems today, including Miconi’s test systems, use backpropagation via gradient descent, an optimization technique.
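To make that mechanism concrete, the following is a minimal sketch of backpropagation with gradient descent for a tiny two-layer network. The layer sizes, learning rate, and toy data are illustrative assumptions, not details of Miconi’s test systems.

# Minimal sketch of backpropagation via gradient descent.
# Network size, learning rate, and data are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))                     # 100 toy examples, 4 features each
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy labels

W1 = rng.standard_normal((4, 8)) * 0.1                # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.1                # hidden -> output weights
lr = 0.1                                              # gradient descent step size

for step in range(500):
    # Forward pass through the layers.
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))               # predicted probability
    # Error calculated at the output ...
    err = p - y
    # ... and distributed backward through the layers (chain rule).
    grad_W2 = h.T @ err / len(X)
    d_h = (err @ W2.T) * (1.0 - h**2)                 # backpropagate through tanh
    grad_W1 = X.T @ d_h / len(X)
    # Gradient descent: nudge each weight down its error gradient.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1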
Using backpropagation as a starting point, Miconi employs an idea called Hebbian learning, introduced in 1949 by neuropsychologist Donald Hebb, who observed that two neurons that fire repeatedly across a synapse strengthen their connection over time. It is often summarized as, “Neurons that fire together, wire together.”

With this “Hebbian plasticity,”
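The article does not give the formulation itself, so the sketch below shows only one plausible way to layer Hebbian plasticity on top of conventionally trained weights: each connection keeps a fixed weight plus a plastic Hebbian trace that changes with every new input, with no further gradient descent. The per-connection plasticity coefficients, update rate, and decay are assumptions for illustration, not necessarily the scheme in the Uber paper.

# Illustrative sketch: Hebbian plasticity layered on fixed, already-trained weights.
# The plasticity coefficients, update rate, and decay below are assumptions,
# not necessarily the exact formulation used in the Uber paper.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 4

W = rng.standard_normal((n_in, n_out)) * 0.1        # fixed weights (set by prior training)
alpha = rng.standard_normal((n_in, n_out)) * 0.01   # how plastic each connection is
hebb = np.zeros((n_in, n_out))                      # Hebbian trace, initially empty
eta = 0.1                                           # plasticity (trace update) rate

def forward(x, hebb):
    # Effective weight of each connection = fixed part + plastic part.
    y = np.tanh(x @ (W + alpha * hebb))
    # "Neurons that fire together, wire together": co-active input/output pairs
    # strengthen the trace, while older traces gradually decay.
    hebb = (1.0 - eta) * hebb + eta * np.outer(x, y)
    return y, hebb

# After deployment, every new stimulus immediately reshapes the plastic part,
# so the network can form a memory of it from a single exposure.
for _ in range(5):
    stimulus = rng.standard_normal(n_in)
    out, hebb = forward(stimulus, hebb)

A fuller treatment would presumably tune the plasticity coefficients during the initial gradient-descent training, the “starting point” described above; the loop here only illustrates the adaptation that happens afterward.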
[Figure: Summary of General L2M Framework. The DARPA Lifelong Learning Machines (L2M) Program seeks to develop learning systems that continuously improve with additional experience, and rapidly adapt to new conditions and dynamic environments. Infographic courtesy of DARPA’s Electronics Resurgence Initiative.]
“In a few years, much of what we consider AI today won’t be considered AI without lifelong learning.”