evaluating their contribution. The final
straw came when someone published a
computer program for doing the work
of a committee by counting the papers
and computing a score. The fact that
such simple programs would often
get the same result as the committee
showed me that committee members
were not doing their jobs. For example,
referees of an individual paper cannot detect an author who publishes
the same results several times using
different titles and wording. We have
scientists on the evaluation committees precisely because they have the expertise to read the papers and evaluate
the contribution made by the author. If
they don’t do that, we don’t need them.
Sometimes a single paper is a far more
important contribution than a dozen
shallow or repetitive papers. Simply
counting papers is not enough.
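The kind of scoring program Parnas alludes to can be sketched in a few lines. This is a hypothetical illustration (the function, paper titles, and venue weights are invented here, not taken from the program he mentions); it makes plain that such a score never looks at a paper's content, so duplicated results count twice:

```python
# Hypothetical sketch of a "committee by counting" program: score a
# researcher by counting papers, optionally weighting each paper by an
# estimated venue selectivity. Nothing here examines a paper's content.

def count_score(papers, venue_weight=None):
    """Return a score computed without reading anything.

    papers: list of (title, venue) tuples.
    venue_weight: optional dict mapping venue -> selectivity weight.
    """
    if venue_weight is None:
        return len(papers)  # crudest version: just count the papers
    # weighted version: unknown venues default to weight 1.0
    return sum(venue_weight.get(venue, 1.0) for _, venue in papers)

papers = [
    ("On Modular Decomposition", "CACM"),
    ("Modular Decomposition Revisited", "Workshop X"),  # same result, reworded
]
weights = {"CACM": 3.0, "Workshop X": 0.5}
print(count_score(papers))           # 2 -- the duplicated result counts twice
print(count_score(papers, weights))  # 3.5
```

A committee that delegates its judgment to a score like this cannot, by construction, notice that the second paper repeats the first.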
I have observed that people being
evaluated for appointments or grants
learn how to “play the game.” If they
see that they will be evaluated by people who won’t read the papers but just
count them, they know how to increase
their score without actually improving
the contribution. My 2007 paper discussed some techniques that researchers use to make themselves look better
than they are.
1. Landwehr, C., Ludewig, J., Meersman, R., Parnas, D.L., Shoval, P., Wand, Y., Weiss, D., and Weyuker, E. Software systems engineering programmes: A capability approach. J. of Systems and Software 125 (2017).
2. Parnas, D.L. Information distribution aspects of design methodology. In Proceedings of IFIP Congress '71, Booklet TA-3, 1971, 26–30.
3. Parnas, D.L. On the criteria to be used in decomposing systems into modules. Commun. ACM 15, 12 (Dec. 1972), 1053–1058.
4. Parnas, D.L., Clements, P.C., and Weiss, D.M. The modular structure of complex systems. IEEE Transactions on Software Engineering SE-11, 3 (Mar. 1985), 259–266.
5. Parnas, D.L. Software aspects of strategic defense systems. Commun. ACM 28, 12 (Dec. 1985), 1326–1335.
6. Parnas, D.L. Education for computing professionals. IEEE Computer 23, 1 (Jan. 1990), 17–22.
7. Parnas, D.L. Software engineering programs are not computer science programs. Annals of Software Engineering 6 (1998), 19–37. Reprinted in IEEE Software (Nov./Dec. 1999), 19–30.
8. Parnas, D.L. Stop the numbers game. Commun. ACM 50, 11 (Nov. 2007), 19–21.
9. Parnas, D.L. The real risks of artificial intelligence. Commun. ACM 60, 10 (Oct. 2017), 27–31.
Peter J. Denning (email@example.com) is Distinguished Professor of Computer Science and Director of the Cebrowski Institute for Information Innovation at the Naval Postgraduate School in Monterey, CA, the Editor of ACM Ubiquity, and a past president of ACM.
The author's views expressed here are not necessarily those of his employer or the U.S. federal government.
Copyright held by author.
The graduate capabilities list is intended as a checklist for those teaching software development. They
should be asking, “Will our graduates
have these capabilities?” The answer
should be: “Yes, all of them.” If not,
the institutions should redesign
their programmes so that they can answer “Yes!”
Do you advocate that SE and CS education would both be better if they were kept separate?
The two are as distinct as physics and mechanical engineering. The physics taught in both programmes would overlap, but the engineers are taught how to use the material to build reliable products while the physics majors are taught how to add to the body of knowledge that constitutes the science.
Professional programmes tend to be more tightly constrained than science programmes because there are many things that a professional must know to be licensed and allowed to practice. A science student is often allowed to make more choices and become a specialist.
It is difficult (though not impossible) to have both types of programmes
in one department.
You have said a professional software engineering programme would appeal to the students who want to learn how to build things for others to use. Are CS departments out of tune with most of their students?
The CS departments I have visited
have a diverse set of students. Some
want to be developers, while others want
to be scientists. Many departments offer a compromise programme that is far
from ideal for either group. That is why
I prefer two distinct programmes taught
by different (though not necessarily disjoint) sets of faculty members.
In 1985, you took a strong stand against the U.S. Strategic Defense Initiative (SDI),5 which promised to build an automated ballistic missile defense (BMD) system that would allow the U.S. to abandon its intercontinental ballistic missiles (ICBMs). You maintained the software could not be trusted enough for the U.S. to eliminate its missiles. We have BMD systems today; were you wrong?
Not at all! SDI was predicted by its advocates to be ready in six years and capable of intercepting and destroying sophisticated missiles, including newer designs intended to defeat a BMD system by taking evasive measures. The system described by President Reagan would have been impossible to test under realistic conditions. The BMD systems in use today (33 years later) are not reliable even when facing unsophisticated rockets. No ICBM systems have been dismantled because BMD systems cannot be trusted.

Do you see a relationship between the BMD claims made in the 1980s and today's claims about artificial intelligence?

Both fields are characterized by hyperbolic claims, overly optimistic predictions, and a lack of precise definitions. Both will produce systems that cannot be trusted.9
In 2007, you published a short paper8
that criticized the evaluation of research-
ers by the number of papers they publish.
What led you to publish such a paper?
I have served on many committees
that evaluate faculty members for promotion and many others that evaluate research proposals. All too often, I
have been disappointed to learn that
most of my fellow committee members
had not read any of the applicant’s papers. They had merely counted the papers and (sometimes) estimated the
selectivity of the journals and conferences. On two occasions, colleagues
complained when I started to discuss
problems in the applicant’s papers
(which I had read). They said that the
referees had already read the papers
and approved them, so I had no right to
evaluate them. In effect, they said I was
“out of line” in reading the papers and