Learning. We have already mentioned the concept of learning when referring to learning from a teacher versus learning from a book. Recall that
complexity theory provides evidence of the advantage of the former. This
is in the context of gaining knowledge
about publicly available information.
In contrast, computational learning
theory is concerned with learning objects that are only partially available to
the learner (i.e., reconstructing a function based on its values at a few random
locations or even at locations chosen
by the learner). Still, complexity theory
sheds light on the intrinsic limitations
of learning (in this sense).
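As a toy illustration of learning an object that is only partially available to the learner, consider the following sketch; the target function, the number of examples, and the elimination rule are illustrative assumptions rather than a method described in the text. It attempts to reconstruct an unknown monotone conjunction from its values at randomly chosen points.

import random

def learn_monotone_conjunction(n, target, num_examples=200):
    # Start by conjoining all n variables; each positive example
    # eliminates the variables it sets to 0, since those cannot
    # appear in the unknown conjunction.
    hypothesis = set(range(n))
    for _ in range(num_examples):
        x = tuple(random.randint(0, 1) for _ in range(n))
        if target(x):  # the learner only sees the value at this random point
            hypothesis &= {i for i in range(n) if x[i] == 1}
    return hypothesis

# Hypothetical unknown target: the conjunction x0 AND x3.
target = lambda x: x[0] == 1 and x[3] == 1
print(learn_monotone_conjunction(5, target))  # typically prints {0, 3}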
Other computational tasks.
Complexity theory deals with a variety of
computational tasks. We have already
mentioned two fundamental types of
tasks: searching for solutions (or rather
finding solutions) and making decisions (e.g., regarding the validity of assertions). We have also hinted that in
some cases these two types of tasks can
be related. Now consider two additional
types of tasks: counting the number of
solutions and generating random solutions. Clearly, both are at least as hard
as finding arbitrary solutions to the corresponding problem, but it turns out
that for some natural problems they are
not significantly harder. Specifically,
under some natural conditions on the problem, approximately counting the number of solutions and generating an approximately random solution are not significantly harder than finding an arbitrary solution.
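For a problem whose solutions happen to be abundant, the following minimal sketch hints at why approximate counting and random generation need not be much harder than merely checking candidate solutions; the predicate and the parameters are hypothetical, and the actual theory handles far sparser solution sets by more sophisticated means.

import random

def approx_count(check, n, trials=10000):
    # Estimate the number of solutions among the 2**n assignments by
    # sampling: (fraction of satisfying samples) * 2**n.
    hits = sum(check(tuple(random.randint(0, 1) for _ in range(n)))
               for _ in range(trials))
    return hits / trials * 2**n

def sample_solution(check, n, attempts=10000):
    # Generate a uniformly distributed solution by rejection sampling.
    for _ in range(attempts):
        x = tuple(random.randint(0, 1) for _ in range(n))
        if check(x):
            return x
    return None

# Hypothetical predicate: "at least half of the 10 bits are set."
check = lambda x: sum(x) >= 5
print(approx_count(check, 10))     # the exact count is 638
print(sample_solution(check, 10))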
Approximation. Having mentioned
the notion of approximation, the study
of the complexity of finding “
approximate solutions” is also of natural importance. One type of approximation
problems refers to an objective function defined on the set of potential solutions: Rather than finding a solution
that attains the optimal value, the approximation task consists of finding a
solution that attains an “almost optimal” value, where the notion of almost
optimal may be understood in different ways, giving rise to different levels of approximation. Interestingly, in many
cases, even a very relaxed level of approximation is as difficult to obtain as
solving the original (exact) search problem (i.e., finding an approximate solution is as hard as finding an optimal
solution). Surprisingly, these hardness
of approximation results are related
to the study of probabilistically checkable proofs, which are proofs that allow
for ultra-fast probabilistic verification.
Amazingly, every proof can be efficiently transformed into one that allows
for probabilistic verification based on
probing a constant number of bits (in
the alleged proof). Turning back to approximation problems, in other cases
a reasonable level of approximation is
easier to achieve than solving the original (exact) search problem.
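A standard illustration of this last point is minimum vertex cover: finding an optimal cover is NP-hard, yet the simple heuristic sketched below (the edge-list input format is an assumption made for the sketch) is guaranteed to return a cover of at most twice the optimal size.

def vertex_cover_2approx(edges):
    # Scan the edges; whenever an edge is not yet covered, add both of
    # its endpoints.  The chosen edges form a maximal matching, so the
    # resulting cover is at most twice the size of an optimal one.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Hypothetical graph: a path 1-2-3-4 together with the edge 2-5.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (2, 5)]))  # e.g., {1, 2, 3, 4}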
Average-case complexity. Approximation is a natural relaxation of various computational problems. Another
natural relaxation is the study of average-case complexity, where the “
average” is taken over some “simple” distributions (representing a model of the
problem’s instances that may occur in
practice). Although it was not stated
explicitly, the entire discussion so far
has referred to “worst-case” analysis of
algorithms. Worst-case complexity is a
more robust notion than average-case
complexity. For starters, one avoids the controversial question of which instances are “important in practice” and, correspondingly, the selection of the class of distributions for which average-case analysis is to be conducted. Nevertheless, a relatively robust theory of average-case complexity has been suggested, although it is less developed than the theory of worst-case complexity.
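To make the worst-case/average-case distinction concrete, here is a minimal sketch built around an arbitrarily chosen toy problem and input distribution (both are assumptions of the sketch): scanning for the first 1 in a bit string costs n probes on the all-zeros input, but about 2 probes on average when the input is uniformly distributed.

import random

def probes_to_first_one(bits):
    # Count how many positions are examined before a 1 is found
    # (or before the string is exhausted).
    probes = 0
    for b in bits:
        probes += 1
        if b == 1:
            return probes
    return probes

n = 30
print(probes_to_first_one([0] * n))   # worst case: 30 probes
avg = sum(probes_to_first_one([random.randint(0, 1) for _ in range(n)])
          for _ in range(10000)) / 10000
print(avg)                            # average case: close to 2 probes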
Randomness extractors. In view of the central role of randomness in complexity theory (as evident, say, in the study of pseudorandomness, probabilistic proof systems, and cryptography), one may wonder whether the randomness needed for the various applications can be obtained in real life. One specific question, which has received a lot of attention, is the possibility of “purifying” randomness (or “extracting good randomness from bad sources”). That is, can we use “defective” sources of randomness in order to implement almost perfect sources of randomness? The answer depends, of course, on the model of such defective sources. This study turned out to be related to complexity theory, where the tightest connection is between a certain type of randomness extractor and a certain type of pseudorandom generator.
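The simplest classical example of “purifying” randomness is von Neumann’s trick for a source of independent bits with an unknown but fixed bias; modern extractors handle much weaker sources, and the bias of 0.8 below is just an assumed parameter for the sketch.

import random

def von_neumann_extract(bits):
    # Read the defective source in pairs: output 0 for the pair 01,
    # output 1 for the pair 10, and discard 00 and 11.  For independent
    # bits with a fixed bias, the two surviving pairs are equally
    # likely, so the output bits are unbiased.
    it = iter(bits)
    return [a for a, b in zip(it, it) if a != b]

# Hypothetical defective source: independent bits that are 1 with probability 0.8.
source = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
extracted = von_neumann_extract(source)
print(sum(extracted) / len(extracted))   # close to 0.5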
In case you feel dizzy, it is no wonder.
We took an ultra-fast aerial tour of
some mountain tops, and dizziness is
to be expected. More leisurely paced
touring experiences are probably offered in courses given by your university.
The P versus NP question and NP-completeness are
covered in many basic textbooks. I’ve written about
this subject in P, NP, and NP-Completeness: The
Basics of Complexity Theory, published by Cambridge
University Press. If you are interested in advanced topics, Computational Complexity: A Conceptual Perspective (Cambridge University Press, 2008) is more suitable.
Oded Goldreich is a professor of computer science at the
Weizmann Institute of Science, Israel. He is an active
researcher with numerous contributions to complexity
theory, cryptography and related areas in the theory of
computation. He has published a number of books in these
domains, including Foundations of Cryptography, Vol. 1: Basic Tools (2001) and Vol. 2: Basic Applications (2004), both published by Cambridge University Press.
[1] In contrast, in other disciplines, solving a problem may require gathering information that is not available in the problem’s statement. This information may either be available from auxiliary (past) records or be obtained by conducting new experiments.