DOI: 10.1145/2398356.2398359
Computer Science Is Not a Science
To the question Vinton G. Cerf addressed in his President's Letter "Where Is the Science in Computer Science?" (Oct. 2012), my first
answer would be that there isn't any.
Max Goldstein, a mentor of mine at
New York University, once observed
that anything with “science” in its name
isn’t really a science, whether social,
political, or computer. A true science
like physics or chemistry studies some
aspect of physical reality. It is not concerned with how to build things; that
is the province of engineering. Some
parts of computer science lie within
mathematics, but mathematics is not a
science and is rarely claimed to be one.
What we mislabel as computer science would more aptly be named "computology"—the study of computational
processes and the means by which they
can be realized. Its components can
broadly be grouped into three interdependent areas: software engineering, hardware engineering, and the
mathematics of computation. Just as
the underlying discipline of chemical
engineering is chemistry, the underlying discipline of software engineering
is mathematics.
But not so fast. To qualify as a subject
of science, a domain of inquiry needs
two qualities: regularity and physicality. Reproducible experiments are at the
heart of the scientific method. Without
regularity they are impossible; without physicality they are meaningless.
Digital computers, which are really just
very large and complicated finite-state
machines, have both these qualities.
But digital computers are artifacts, not
part of the natural world. One could
argue either way whether that should
disqualify them as a subject of science.
Quantum computing and race conditions complicate the picture but not in
a fundamental way.
None of this detracts from Cerf’s
essential point—that when we design
software we rarely understand the full
implications of our designs. As he said,
it is the responsibility of the computing community, of which ACM is a vital part, to develop tools and explore principles that further that understanding
and enhance our ability to predict the
behavior of the systems we build.
In his President’s Letter (Oct. 2012),
Vinton G. Cerf wrote: “We have a responsibility to pursue the science in
computer science […and to develop] a
far greater ability to make predictions
about the behavior of these complex,
connected, and interacting systems.”
This is indeed a worthwhile cause that
would likely increase the reliability and
trustworthiness of the whole field of
computing. But, having picked up the
gauntlet Cerf threw down, how do I
make that cause fit the aspects of computer science I pursue every day?
Cerf discussed the problems software developers confront in predicting
the behavior of both software systems
and the system of people developing
them. As a professional developer, I
have firsthand experience. Publishing a catalog of the issues I find, so that analysts might identify general problems and suggest mitigations, would be subject to two limitations: it would probably not be interesting enough for journal editors to want to publish, and my employers would likely view its content as commercially sensitive.
I could instead turn to the ACM Digital Library and similar resources, looking for ways to apply their findings to my professional work. However, this also has limitations; reading journal articles is a specialized, time-consuming art, and the guidance I would need to understand which results are relevant, and how, is often not available. Many of the "classic results" quoted by professionals turn out to be as verifiable as leprechaun sightings.1
To the extent the creation of software can be seen as "computer science," such creation is today two distinct fields: creating software and researching ways software can be created. If we are to accept the responsibility Cerf has bestowed upon us, we would have to create an interface discipline—call it "computer science communication"—between these fields.
Reference
1. Bossavit, L. The Leprechauns of Software Engineering. Leanpub, Vancouver, B.C., 2012; https://leanpub.com/leprechauns
Only Portfolios Mitigate Risk Well
Peter G. Neumann’s “Inside Risks”
Viewpoint “The Foresight Saga, Redux”
(Oct. 2012) addressed how to provide
security but fell short. Though security
requires long-term approaches and research advances, traditional incentives
target quick rewards. I teach a graduate
course on IT strategy and policy largely
focused on this dilemma. When technology moved slowly, slow acquisition
and delayed delivery caused only minor losses. Now, however, with the improvement brought by technology innovation, delays in exploiting advanced technology
incur exponentially increased opportunity costs. Most businesses cannot
wait for high-trust solutions or systems
that significantly surpass state-of-the-art quality. Likewise, most government
systems are already too costly and too
late, in part because they try to address
an unreasonably large number of requirements.
The risk-management problem necessitates a portfolio-management approach. In the context of IT systems for
business or government, it would be
more affordable and practical to create multiple alternatives and fallback options than to depend on a single system whose failure would be devastating. In addition, applications should be
separated from research and funded appropriately. It would be great to have a
secure Internet, unbreakable systems,
and uniformly trained people, but such
goals are not practical today. The focus
should instead be on risk mitigation,
resilience, and adaptation, even though
the incentives for moving quickly are
often irresistible. “Ideal” systems are
indeed the enemy of practical portfolios
built to withstand a range of risks.
Rick Hayes-Roth, Monterey, CA