logic.” It was logical that a machine
organized like the brain would be
good with logic! McCulloch and Pitts
established the foundation for future
artificial neural networks.
In 1957, Frank Rosenblatt demonstrated the first artificial neural network machine for the U.S. Navy. He
called it the Perceptron. It was a single-layer machine as illustrated schematically in the accompanying figure using
photocells as input receptors organized
as two-dimensional array. The Perceptron was able to recognize handwritten
digits 0 through 9. The figure also outlines a genealogy of the neural network
descendants of the Perceptron; we provide it for information, but we will not
discuss all its branches here.
Q: Neural networks are good for map-
ping input patterns into output pat-
terns. What does this mean?
A pattern is a very long sequence of
bits, for example, the megabits making up an image or gigabits representing a person’s viewing history of films.
A recognizer or classifier network maps
a pattern into another that has meaning
to humans. A recognizer network can,
MACHINE LEARNING HAS evolved from an out-of- favor subdiscipline of computer science and artificial intelligence (AI)
to a leading-edge frontier of research
in both AI and computer systems architecture. Over the past decade investments in both hardware and software
for machine learning have risen at an
exponential rate matched only by similar investments in blockchain technology. This column is a technology check
for professionals in a Q&A format on
how this field has evolved and what big
questions it faces.
Q: The modern surge in AI is powered
by neural networks. When did the neu-
ral network field start? What was the
A. The early 1940s was a time of
increasing attention to automatic
computers. At the time, a “computer”
was a professional job title for humans and computation was seen as a
human intelligent activity. Some believed that the logical computations
of the brain were made possible by
the neuronal structure of the brain.
In 1943 Warren McCulloch and Walter Pitts wrote a famous proposal to
build computers whose components
resembled neurons. 4 Each neuron received inputs from many others and
delivered its outputs to many others. Inputs had weights and when
the weighted input sum exceeded a
threshold the neuron switched from
the 0 to the 1 state. They wrote: “
Because of the ‘all-or-none’ character
of nervous activity, neural events
and the relations among them can
be treated by means of propositional
The Profession of IT
A discussion of the rapidly evolving realm of machine learning.
computation to train
a large network on
a large training set.
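The threshold behavior McCulloch and Pitts described, a neuron that switches from 0 to 1 when its weighted input sum exceeds a threshold, can be sketched in a few lines of Python. The weights and threshold below are illustrative assumptions chosen for the example, not values from the column:

```python
def threshold_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold, else 0.

    This models the McCulloch-Pitts "all-or-none" neuron: binary inputs,
    weighted connections, and a hard threshold on the summed activation.
    """
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Illustration: with weights (1, 1) and threshold 1.5, a two-input
# neuron fires only when both inputs are 1, behaving like an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron((a, b), (1, 1), 1.5))
```

Networks of such neurons, wired so that each neuron's output feeds the inputs of many others, are the propositional-logic machines the 1943 proposal envisioned.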