DOI: 10.1145/3178314  Leah Hoffmann

The Network Effect

The developer of convolutional neural networks looks at their impact, today and in the long run.

DEEP LEARNING MIGHT be a booming field these days, but few people remember its time in the intellectual wilderness better than Yann LeCun, director of Facebook Artificial Intelligence Research (FAIR) and a part-time professor at New York University. LeCun developed convolutional neural networks while a researcher at Bell Laboratories in the late 1980s. Now, the group he leads at Facebook is using them to improve computer vision, to make predictions in the face of uncertainty, and even to understand natural language.

Your work at FAIR ranges from long-term theoretical research to applications that have real product impact.

We were founded with the idea of making scientific and technological progress, but I don't think the Facebook leadership expected quick results. In fact, many things have had a fairly continuous product impact. In the application domain, our group works on things like text understanding, translation, computer vision, image understanding, video understanding, and speech recognition. There are also more esoteric things that have had an impact, like large-scale embedding.

This is the idea of associating every object with a vector.

Yes. You describe every object on Facebook with a list of numbers, whether it's a post, news item, photo, comment, or user. Then, you use operations between vectors to see if, say, two images are similar, or if a person is likely to be interested in a certain piece of content, or if two people are likely to be friends with one another.

What are some of the things going on at FAIR that most interest or excite you?

It's all interesting! But I'm personally interested in a few things. One is marrying reasoning with learning. A lot of learning has to do with perception, which covers relatively simple things that people can do without thinking too much. But we haven't yet found good recipes for training systems to do tasks that require a little bit of reasoning. There is some work in that direction, but it's not where we want it.

Another area that interests me is unsupervised learning—teaching machines to learn by observing the world, say by watching videos or looking at images without being told what objects are in these images.

And the last thing would be autonomous AI systems whose behavior is not directly controlled by a person. In other words, they are designed not just to do one particular task, but to make decisions and adapt to different circumstances on their own.

How does the interplay work between research and product?

There's a group called Applied Machine Learning, or AML, that works closely with FAIR and is a bit more on the application side of things. That group did not exist when I joined Facebook, but I pushed for its creation, because I saw this kind of relationship work very well at AT&T. Then AML became a victim of its own success. There was so much demand within the company for the platforms they were developing, which basically enabled all kinds of groups within Facebook to use machine learning in their products, that they ended up moving away from FAIR. Recently we reorganized this a little bit. A lot of the AI capability is now being moved to the product groups, and AML is refocusing on the advanced development of things that are close to research. In certain areas like computer vision, there is a very, very tight collaboration, and things go back and forth really quickly. In other areas that are more disruptive or for which there is no obvious product, it's more like, 'let us work on it for a few years first'.

Let's talk about unsupervised learning, which, as you point out elsewhere, is much closer to the way that humans actually learn.

[CONTINUED ON P. 119]
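The large-scale embedding idea LeCun describes, representing every object as a vector and comparing objects through operations on those vectors, can be illustrated with a minimal sketch. The objects, vectors, and dimensionality below are invented for illustration; real systems learn far higher-dimensional embeddings from data:

```python
# Minimal sketch of comparing objects via embedding vectors.
# The 4-dimensional vectors and example objects are hypothetical;
# production systems learn their embeddings rather than hand-writing them.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: values near 1.0 mean
    # the vectors point the same way (similar objects), values near
    # 0.0 mean the objects are unrelated.
    norm = (dot(u, u) ** 0.5) * (dot(v, v) ** 0.5)
    return dot(u, v) / norm

# Hypothetical embeddings for a user and two pieces of content.
user       = [0.9, 0.1, 0.3, 0.0]
cat_post   = [0.8, 0.2, 0.4, 0.1]   # points roughly the same way as user
stock_post = [0.0, 0.9, 0.0, 0.8]   # nearly orthogonal to user

print(cosine_similarity(user, cat_post))    # high: likely of interest
print(cosine_similarity(user, stock_post))  # low: probably not
```

The same comparison works for any pair of embedded objects (two images, two users, a user and a post), which is what makes a single embedding space so broadly useful.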