indisputably performing “mindless”
brute-force calculations. In 1997,
IBM’s Deep Blue, rated at the time as
one of the 300 fastest supercomputers in the world, beat Garry Kasparov,
the player generally considered the
greatest chess player in history. Deep
Blue’s operation was a quintessential example of brute-force search,
evaluating some 200 million board
positions each second. So, what exactly is the difference between the
brute-force computation done by humans and the brute-force computation done by machines? This is a very
tricky issue, and there is certainly no
simple answer. However, part of the
answer involves how brute-force computation evolves.
Imagine that, for some reason,
playing high-quality computer chess
was essential to human survival.
Brute-force search, as practiced by
Deep Blue, would likely evolve, by
means of automatic techniques akin
to genetic algorithms, 11, 14 as well as
by explicit human development of
ever-more-powerful computer-chess-playing heuristics. People often overlook the fact that Deep Blue’s search
of hundreds of millions of board positions per second is inefficient in the
extreme, since almost all the positions it considers are completely uninteresting and, therefore, examining
them at all is a complete waste of its
resources. Consequently, in an evolutionary struggle for survival, such
“mindless” brute-force searching
would quickly lose out to techniques
that channeled brute-force search in
ever-more-efficient ways. This is exactly what has happened. Today, there
are programs with Elo ratings higher
than any human chess player ever and
that run on handheld computers. One
of the most powerful, Pocket Fritz 4,
evaluates “only” 20,000 board positions per second, some four orders
of magnitude less than Deep Blue
(http://en.wikipedia.org/wiki/Pocket_Fritz).
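The gap between Deep Blue's exhaustive search and Pocket Fritz's far leaner one can be illustrated with alpha-beta pruning, the classic refinement of minimax search. The sketch below runs both algorithms over a small synthetic game tree and counts the "board positions" each evaluates; the branching factor, depth, and static evaluation function are invented for illustration and are not the actual code of either program.

```python
# Sketch: plain minimax vs. alpha-beta pruning on a synthetic game tree.
# Both compute the same game value, but pruning skips subtrees that
# provably cannot affect the result -- "channeling" brute-force search.

BRANCHING, DEPTH = 6, 6  # illustrative tree size, not chess's real values

def leaf_value(path):
    # Deterministic stand-in for a chess evaluation function.
    # (Tuples of ints hash reproducibly in Python.)
    return hash(path) % 1000

def minimax(path, depth, maximizing, counter):
    if depth == 0:
        counter[0] += 1              # one "board position" evaluated
        return leaf_value(path)
    values = (minimax(path + (i,), depth - 1, not maximizing, counter)
              for i in range(BRANCHING))
    return max(values) if maximizing else min(values)

def alphabeta(path, depth, alpha, beta, maximizing, counter):
    if depth == 0:
        counter[0] += 1
        return leaf_value(path)
    if maximizing:
        best = float("-inf")
        for i in range(BRANCHING):
            best = max(best, alphabeta(path + (i,), depth - 1,
                                       alpha, beta, False, counter))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                # prune: subtree cannot change the outcome
        return best
    best = float("inf")
    for i in range(BRANCHING):
        best = min(best, alphabeta(path + (i,), depth - 1,
                                   alpha, beta, True, counter))
        beta = min(beta, best)
        if alpha >= beta:
            break                    # prune on the minimizing side as well
    return best

plain, pruned = [0], [0]
v1 = minimax((), DEPTH, True, plain)
v2 = alphabeta((), DEPTH, float("-inf"), float("inf"), True, pruned)
assert v1 == v2                      # pruning never changes the answer
print("minimax leaves:", plain[0], " alpha-beta leaves:", pruned[0])
```

Alpha-beta returns exactly the same value while examining a fraction of the positions, and real engines layer further heuristics (move ordering, transposition tables, selective extensions) on top, which is the kind of channeling the text describes.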
It is not implausible to imagine that
this kind of evolution could lead to
the emergence in computers of internal representations of board positions and ever-better ways to process
these representations. As the internal
representations become more complex, better organized in relation to
each other, and processed in ever
more sophisticated ways, is it so unreasonable to imagine the gradual
emergence of the kind of complexity
that would justify the label of a minimal understanding of certain board
positions? The bedrock of all understanding is, after all, the ability to construct, contextualize, and make use of
internal representations of data.
One of the most impressive recent
computer programs to use a combination of brute-force methods and
heuristics to achieve human-level
cognitive abilities is IBM’s Watson,
a 2,880-processor, 80-teraflop computing behemoth with 15 terabytes
of RAM that won a “Jeopardy!” challenge in 2011 against two of the best
“Jeopardy!” players in history. 3 Now
imagine that Watson, having beaten the best humans, began to play
against programs like itself but that
were more computationally efficient
than it was. Watson currently has the
ability to learn from its mistakes, and,
presumably, future algorithms would
further improve its search efficiency.
Consequently, there is no reason to
believe that better and better brute-force computation would not evolve
until it had become, like the brute-force computation that underlies our
brains, multilayered, hierarchically
organized, contextualized, and highly efficient. That is, the brute-force
computation of the future will bear as
much resemblance to the brute-force
algorithms of today as the computers
of today resemble the computers of
1950.
What of the Turing Test in all of
this? I am convinced no machine will
pass a Turing Test, at least not in the
foreseeable future, for the overriding
reason I outlined earlier: There will
remain recondite reaches of human
cognition and physiognomy that will
be able to serve as the basis for questions used to trip up any machine. So,
set the Turing Test aside. I would be
perfectly happy if a machine said to
me, “Look, I’m a computer, so don’t
ask me any questions that require
me to have a body to answer, no stuff
about what it feels like to fall off a bicycle or have pins and needles in my
foot. This fooling you to think I’m a
human is passé. I’m not trying to fool
you. I’m a computer, ok? On the oth-