the firm conclusion that the man who fed it every morning in the farmyard would continue to do so until the end of time, with all his affection…
Falsificationism, by contrast, as set forth mainly in the writings of Karl Popper, holds, rather pessimistically, that induction is not possible: we cannot aspire to prove the truth of any scientific theory. Scientific hypotheses are no more than conjectures that are provisionally accepted until some new experience appears to refute them (what Popper calls “falsification”). This stance is informed by a commendable skepticism
that has helped to give it credit among
scientists, too. But the truth is that, if taken to its ultimate consequences (beyond the point to which Popper himself would have taken it), Falsificationism becomes absurd: scientists do not devote themselves to formulating and provisionally accepting just any theory and then looking for counterexamples to refute it.
On the contrary, scientists strive to verify hypotheses as much as to refute them, and they accept only hypotheses that are reasonable from the start and that have great explanatory power.
What this “reasonableness” might be, or this “explanatory power,” or even the “simplicity and elegance” that have no doubt influenced great scientists in the formulation of their hypotheses and theories (consider Galileo, Newton, Einstein…), is an arduous problem for the Philosophy of Science that cannot be addressed here. I only wish to point out that neither Verificationism nor Falsificationism can give a full account of the reality of scientific activity in all its magnitude, and that both,
considered as methodological stances,
refer to something that lies beyond factual experience. Paying attention only to empirical evidence is not acceptable, especially if the correctness of reasoning is set aside, since empirical evidence must, at the very least, be adequately interpreted with good reasons. Experimentation without the guidance of speculative thinking is worthless.
Truth and Relevance
We have demonstrated that empiricism is insufficient. There cannot
be a complete scientific activity that
consists solely of proving theories by
means of experiments: first, theories
must be formulated and developed,
and their explanatory power must be
demonstrated, so that the investment
of human and material resources in the
experiments, which may be very costly,
can be justified; then, the experiments
that will prove or refute the theories
must be carried out. Moreover, experimental verification may say something about the truth of a theory, but it can
say nothing about its relevance, that is,
its interest to the scientific community
or society as a whole.
Lessons from History
Having demonstrated that empiricism
is insufficient in and of itself, can we at
least say it is necessary? That is, should
we consider it an essential part of every
scientific activity? From the scientific
point of view, is a purely speculative-theoretical work acceptable without
empirical support? In order to answer
this question, I will formulate another
one: What do we learn from history? In
particular, and to focus on the area of
major interest for the readers of this
magazine: Who are the founders of
computer science?
Consider some fundamental
names: Turing (computation theory
and programmable automata), von
Neumann (computer architecture),
Shannon (information theory), Knuth,
Hoare, Dijkstra, and Wirth (
programming theory and algorithmics), Feigenbaum and McCarthy (artificial
intelligence), Codd (relational model
of databases), Chen (
entity-relationship model), Lamport (distributed
systems), Zadeh (fuzzy logic), Meyer
(object-oriented programming), Gamma (design patterns), Cerf (Internet),
Berners-Lee (WWW)... Are their contributions perhaps distinguished by
their experimental character? Aren’t
they mainly, or even solely, speculative investigations (yet with enormous
possibilities for practical application),
whose fundamental merit has been to
light the way for the rest of the scientific community, by performing, so to
speak, a work of clarification and development of concepts? Would they have
been able to publish their work according to the “experimentalistic” criteria
that currently prevail?
Having a look at the list of Turing
Awards1 or at the most cited computer
science papers in CiteSEER2 is very
instructive. However, given the current standards for reviewing, many of
those papers would never have been
published. They would have come up
against journal reviewers who would
have rejected such works, considering
them too speculative or theoretical, as
has been humorously described in fictitious reviews. 4
The attentive reader will have noticed that I am inductively justifying,
from the experience of history, that
many of the best works in computer
science (the most cited ones, if we accept the presumed identity between “most cited” and “best,” which is of course highly debatable) do not have
a fundamentally experimental character, but rather a theoretical and speculative one. Nevertheless, I am afraid
the “recalcitrant empiricist” will not
let himself or herself be convinced even
by this argument…because, in the end,
his or her conviction is not grounded in
empirical arguments.