that the discriminator in a so-called generative adversarial net (GAN) learns over time what matters most in the image, Zhang says. At a certain point, the system displays almost human-like intuition, he says; "results improve significantly." Interestingly, this approach not only improves the quality of image detection, it may also trim the time required to train a network by reducing the number of images—essentially the volume of data—required to obtain useful results. Says Zhang: "An interesting question is how can we lower the requirement of a neural network in terms of how much data it needs to achieve the current level of quality?"

Another step is to make today's artificial neural nets easier to use. The technology is still in its infancy, and researchers often struggle to use tools and technology effectively. In some cases, they have to work with multiple nets in an iterative fashion to find one that works best. As a result, Zhang has developed a software program, ease.ml, that configures deep learning neural networks in a more automated and efficient way. This includes optimizing the use of hardware such as CPUs, GPUs, and FPGAs, and providing a declarative language for better managing algorithms. "Right now, the user needs to deal with a lot of different decisions, including the type of neural net they want to use. There may be 20 different neural nets available for the same task. Choosing the right model and reducing complexity is important," he explains.

Already, the software, combined with other deep learning techniques—including an algorithm called ZipML that reduces data representation without reducing accuracy—has cut noise and sharpened images significantly for the astrophysics group at ETH Zurich. As a result, Schawinski and others can now peer more deeply into the universe.

"Unlike other areas of science, we cannot run experiments in a lab and simply analyze the results," he explains. "We are dependent on telescopes and images to look back in time. We have to piece together all these fixed snapshots—essentially huge datasets—to gain insight and knowledge."

Adds Lanusse: "Classical methods of astronomy and astrophysics are rapidly being superseded by data science and machine learning. They not only do a job better, but they also offer new ways of looking at the data."

The view into the future is equally compelling. Lanusse says that in the coming years, neural networks will drive enormous advances in fields beyond astrophysics. These systems will not only detect, recognize, and classify objects, they will understand what is taking place in an image or in a scene in real time. This, of course, could profoundly impact everything from the way autonomous vehicles operate to how medical diagnostics work. Ultimately, they will help us unlock the mysteries of our planet and the universe. They will deliver a level of understanding that wouldn't have been imaginable only a few years ago.

Says Lanusse, "Computer image recognition is advancing rapidly. We are finding ways to train networks faster and better. Every gain in speed and accuracy of even a few percent makes a profound difference in the

Further Reading

Nguyen, A., Yosinski, J., Bengio, Y., Dosovitskiy, A., and Clune, J.
Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. Computer Vision and Pattern Recognition (CVPR '17), 2017.

Lanusse, F., Ma, Q., Li, N., Collett, T.E., Li, C., Ravanbakhsh, S., Mandelbaum, R., and Póczos, B.
CMU DeepLens: Deep Learning for Automatic Image-based Galaxy-Galaxy Strong Lens Finding. March 2017.

Wang, K., Guo, P., Luo, A., Xin, X., and Duan, F.
Deep neural networks with local connectivity and its application to astronomical spectral data. 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, 2016, pp. 2687-2692.

Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y.
Generative Adversarial Networks. June 2014. eprint arXiv:1406.2661.

Samuel Greengard is an author and journalist based in West Linn, OR.

© 2017 ACM 0001-0782/17/09 $15.00
“My main focus is on developing
verification techniques and
model checking for probabilistic
systems, which ensure software,
systems, hardware, and
protocols behave correctly.”
Kwiatkowska has held a
statutory chair in the Department
of Computer Science at Oxford,
and a professorial fellowship at
the University’s Trinity College,
since 2007. Prior to that, she
was a professor in the School
of Computer Science at the
University of Birmingham, a
lecturer at the University of
Leicester, and an assistant
professor at Jagiellonian
University in Krakow, Poland.
She earned an undergraduate
degree in computer science at
Jagiellonian University, writing
programs on punch cards in
PASCAL. Kwiatkowska then
earned a master’s degree
from Oxford, and a Ph.D. in
computer science from the
University of Leicester.
Initially her research interests
centered on concurrent and
distributed systems, but in 1995
Kwiatkowska started working
on verification techniques.
Her research covers a range of
applications, including biological
systems, DNA computations,
and the behavioral correctness
of pacemakers. Kwiatkowska
now studies
autonomous systems and the
application of verification
techniques to robotics. “We need
to develop methods to verify
the correctness of the behavior
of robots,” she says. “I am
also looking at verification for
machine learning, specifically
neural networks, which are
now being used in perception
algorithms for self-driving cars.”
— John Delaney