I was an assistant professor at the time, along with many other people in the field. During that era, there was a huge effort to design machine learning models that could recognize objects. We also had to find sensible ways to benchmark their performance. There were some very good datasets, but in general they were relatively small, with only one or two dozen different objects. When datasets are small, that limits the type of models that can be built, because there is no way to train algorithms to recognize the variability of even a single object like "cat."

People were making progress in that era, but the field was a little bit stuck, because the algorithms were unsatisfying. So around 2006, my students and I started to think about a different way of approaching the object recognition problem. Instead of designing models that overfit a small dataset, we would think about very large-scale data, like millions and millions of objects, which would drive machine learning models in a whole different direction.

So you started working on ImageNet, which seemed crazy at the time.

Our goal was to map out all the nouns in the English language, then collect hundreds of thousands of pictures to depict the variability of each object, like an apple or a German Shepherd. We ended up downloading and sifting through at least a billion pictures, and we eventually put together ImageNet through crowdsourcing. That dataset was 15 million images and 22,000 object categories.
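That mapping of English nouns was grounded in the WordNet lexical database, which gave ImageNet its name and its category hierarchy. As a rough illustration, here is a small sketch using NLTK's WordNet interface to enumerate noun concepts; the tooling shown is illustrative, not what the ImageNet team actually used:

```python
# Illustrative only: ImageNet's categories were organized around the
# WordNet lexical database. NLTK exposes WordNet in Python.
# Assumed setup: pip install nltk, then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

# Every noun concept ("synset") in WordNet is a candidate visual category.
nouns = list(wn.all_synsets('n'))
print(len(nouns))  # roughly 82,000 noun synsets in WordNet 3.0

# A few examples, each a potential image category:
for synset in nouns[:3]:
    print(synset.name(), '-', synset.definition())
```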
In your research at Stanford's Vision and Learning Lab, you work closely not just with technologists, but also with neuroscientists. Can you tell me a bit about how that collaboration works?

Fundamentally, AI is a technical field. Its ultimate goal is to enable machine intelligence. But because human intelligence is so closely related to this field, it helps to have a background and collaborators in neuroscience and cognitive science. Take today's deep learning revolution: the algorithms we use in neural networks were inspired by classic neuroscience studies from the 1950s and 1960s, when scientists found that neurons are layered together and send information in a hierarchical way. Meanwhile, cognitive science has always been an essential part of guiding AI's quest for different kinds of tasks. Many computer scientists were inspired to work on object recognition, for example, because of the work cognitive scientists had done.
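To make "layered and hierarchical" concrete: in an artificial neural network, each layer transforms the output of the layer below and passes the result upward. A minimal sketch in Python with NumPy follows; the layer sizes and the ReLU activation are arbitrary illustrative choices, not details from the interview:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied by each layer of "neurons."
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three stacked layers: each weight matrix maps one layer's output to the
# next, loosely echoing the hierarchical wiring found in classic studies
# of the visual system.
layer_sizes = [64, 32, 16, 10]   # input -> hidden -> hidden -> output
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Information moves strictly upward through the hierarchy: each layer
    # sees only the transformed output of the layer below it.
    for w in weights:
        x = relu(x @ w)
    return x

activations = forward(rng.normal(size=64))
print(activations.shape)  # (10,)
```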
One of your current interdisciplinary collaborations is a neural network that implements curiosity-driven learning.

Human babies learn by exploring the world. We are trying to create algorithms with those kinds of features, where computers explore out of curiosity rather than being trained on traditional tasks with labeled images.
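The interview does not detail the lab's algorithm, so what follows is a generic sketch of one common formulation of curiosity from the reinforcement-learning literature, not a description of their system: the agent receives an intrinsic reward equal to the prediction error of a learned forward model, so it seeks out transitions it cannot yet predict. In Python with NumPy, for a toy one-dimensional world:

```python
import numpy as np

# A toy 1-D world with no external task: the agent's only drive is curiosity.
n_states, n_actions = 20, 2        # actions: 0 = step left, 1 = step right

def world(state, action):
    # True environment dynamics, unknown to the agent.
    return max(0, min(n_states - 1, state + (1 if action == 1 else -1)))

# The agent's learned forward model (predicted next state), plus a running
# estimate of how surprising each (state, action) still is. Optimistic
# initialization makes never-tried actions look maximally interesting.
forward_model = np.zeros((n_states, n_actions))
expected_surprise = np.full((n_states, n_actions), np.inf)

state, lr = 0, 0.5
visited = {state}
for _ in range(500):
    # Curiosity-driven choice: take the action whose outcome the agent
    # currently predicts worst, not one that maximizes a task reward.
    action = int(np.argmax(expected_surprise[state]))
    next_state = world(state, action)

    # Intrinsic reward: the forward model's prediction error ("surprise").
    surprise = abs(next_state - forward_model[state, action])
    forward_model[state, action] += lr * (next_state - forward_model[state, action])
    expected_surprise[state, action] = surprise

    visited.add(next_state)
    state = next_state

print(f"visited {len(visited)} of {n_states} states with no labels or rewards")
```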
You have spoken before about the need to think about AI from a humanistic and not just a technical perspective, and you just helped launch Stanford's Human-Centered AI Initiative (HAI). Can you talk about your goals?

We want to create an institute that works on technologies to enhance human capabilities. In the case of robotics, machines can do things humans cannot. They can go to dangerous places; they can dive deeper in the water and dismantle explosive devices. Machines also have a kind of precision and strength that humans do not. But humans have a lot more stability and understanding, and we have an easier time collaborating with one another.
There are a lot of potential scenarios we can imagine in the future where machines assist and augment humans' work, rather than replacing it.

You've also been vocal about the need to include a more diverse set of voices in computer science and AI research.

If we believe machine values represent human values, we need to believe we have fully represented humanity as we develop and deploy our technology. So it's important to encourage students of diverse backgrounds to participate in the field. It's also important, at this moment, to recognize that the social impact of technology is rising. The stakes are higher than ever, and we also need to invite future business leaders, policymakers, humanists, and social scientists of diverse backgrounds to be technologically literate, to interact with the tech world, and to bring that diverse thinking into the process.

Can you tell me about Stanford's new AI4All program for high school students, which grew out of the earlier Stanford Artificial Intelligence Laboratory's Outreach Summer Program (SAILORS)?

AI4All aims to increase diversity in the field of artificial intelligence by targeting students from a range of financial and cultural backgrounds. It's a community we feel very proud of and are very proud to support. One of our earliest SAILORS alumnae, a high school student named Amy Jin, continued working in my lab on videos for surgical training. Then, while still in high school, she authored a research paper with my team that was selected by the Machine Learning for Health Workshop at the 2017 Neural Information Processing Systems (NIPS) conference, one of the best-respected events in the field. What's more, out of 150 papers, she won the award for best paper. We also have students who have started robotics labs at their schools and hold girl-centered hackathons. Many of them are focusing on applications that put AI to good social use, from optimizing ambulance deployment to cancer research and combating cyberbullying.

Leah Hoffmann is a technology writer based in Piermont, NY, USA.

© 2019 ACM 0001-0782/19/3 $15.00