ular pastime, but much software has
become better over the last decade,
exactly as it improved in previous decades. Unfortunately, the improvements have come at tremendous cost
in terms of human effort and computer resources. Basically, we have
learned how to build reasonably reliable systems out of unreliable parts
by adding endless layers of runtime
checks and massive testing. The structure of the code itself has sometimes
changed, but not always for the better.
Often, the many layers of software and
the intricate dependencies common
in designs prevent an individual—
however competent—from fully understanding a system. This bodes ill
for the future: we do not understand
and cannot even measure critical aspects of our systems.
There are of course system builders who have resisted the pressures to
build bloated, ill-understood systems.
We can thank them when our computerized planes don’t crash, our phones
work, and our mail arrives on time.
They deserve praise for their efforts to
make software development a mature
and trustworthy set of principles, tools,
and techniques. Unfortunately, they
are a minority and bloatware dominates most people’s impression and
thinking.
Similarly, there are educators who
have fought the separation of theory
and industrial practice. They too deserve praise and active support. In fact,
every educational institution I know
of has programs aimed at providing
practical experience, and some professors have devoted their lives to making successes of particular programs.
However, looking at the larger picture,
I’m unimpressed—a couple of projects
or internships are a good start, but not
a substitute for a comprehensive approach to a balanced curriculum. Preferring the labels “software engineering” or “IT” over “CS” may indicate
differences in perspective, but problems have a nasty way of reemerging in
slightly different guises after a move to
a new setting.
My characterizations of “industry”
and “academia” border on caricature,
but I’m confident that anyone with a
bit of experience will recognize parts of
reality reflected in them. My perspective is that of an industrial researcher
and manager (24 years at AT&T Bell
Labs, seven of those as department
head) who has now spent six years in
academia (in a CS department of an
engineering school). I travel a lot, having serious discussions with technical
and managerial people from several
dozen (mostly U.S.) companies every
year. I see the mismatch between what
universities produce and what industry
needs as a threat both to the viability of CS and to the computing industry.
The Academia/Industry Gap
So what can we do? Industry would prefer to hire “developers” fully trained in
the latest tools and techniques whereas academia’s greatest ambition is to
produce more and better professors.
To make progress, these ideals must
become better aligned. Graduates going to industry must have a good grasp
of software development and industry must develop much better mechanisms for absorbing new ideas, tools,
and techniques. Inserting a good
developer into a culture designed to
prevent semi-skilled programmers
from doing harm is pointless because
the new developer will be constrained
from doing anything significantly new
and better.
Let me point to the issue of scale.
Many industrial systems consist of millions of lines of code, whereas a student can graduate with honors from
top CS programs without ever writing
a program larger than 1,000 lines. All
major industrial projects involve several people whereas many CS programs
value individual work to the point of
discouraging teamwork. Realizing this,
many organizations focus on simplifying tools, techniques, languages, and
operating procedures to minimize the
reliance on developer skills. This is
wasteful of human talent and effort because it reduces everybody to the lowest common denominator.
Industry wants to rely on tried-and-true tools and techniques, but is also
addicted to dreams of “silver bullets,”
“transformative breakthroughs,” “killer apps,” and so forth. They want to be
able to operate with minimally skilled
and interchangeable developers guided by a few “visionaries” too grand to
be bothered by details of code quality.
This leads to immense conservatism
in the choice of basic tools (such as
programming languages and operating systems) and a desire for monocultures (to minimize training and
deployment costs). In turn, this leads
to the development of huge proprietary
and mutually incompatible infrastructures: Something beyond the basic
tools is needed to enable developers
to produce applications, and platform
purveyors want something to lock in
developers despite the commonality
of basic tools. Reward systems favor
both grandiose corporate schemes and
short-term results. The resulting costs
are staggering, as are the failure rates
for new projects.
Faced with that industrial reality—
and other similar deterrents—
academia turns in on itself, doing what it
does best: carefully studying phenomena that can be dealt with in isolation by
a small group of like-minded people,
building solid theoretical foundations,
and crafting perfect designs and techniques for idealized problems. Proprietary tools for dealing with huge code
bases written in archaic styles don’t fit
this model. Like industry, academia
develops reward structures to match.
This all fits perfectly with a steady improvement of smokestack courses in
well-delineated academic subjects.
Thus, academic successes fit industrial
needs like a square peg in a round hole
and industry has to carry training costs
as well as development costs for specialized infrastructures.
Someone always suggests “if industry just paid developers a decent
salary, there would be no problem.”
That might help, but paying more for
the same kind of work is not going to
help much; for a viable alternative, industry needs better developers. The
idea of software development as an