ing community as they saw how the more rapidly
evolving CMOS micro would overtake bipolar-based
minicomputers, mainframes, and supercomputers if
they could be harnessed to operate as a single system
on a single program or workload.
In The Innovator’s Dilemma, Christensen describes
the basis of the death aspect of Bell’s Law by contrasting two
kinds of technologies [4]. Sustaining technology provides increasing performance, enabling improved
products at the same price as previous models built on
slowly evolving technology; disruptive, rapidly evolving technology provides lower-priced products that are
not competitive with the higher-priced sustaining class and
thus create a unique market space. Over time, the performance of the lesser-performing, faster-evolving products
eventually overtakes the established, slowly evolving
classes served by sustaining technology.
From the mid-1980s until 2000, over 40 companies
were established and went out of business attempting to
exploit the rapidly evolving CMOS microprocessors by
interconnecting them in various ways. Cray, HP, IBM,
SGI, and Sun Microsystems remain in 2008 to exploit
massive parallelism by running a single program
on a large number of computing nodes.
Two potentially disruptive technologies for new classes are:
• The evolving SFF devices such as cell phones are
likely to have the greatest impact on personal computing, effectively creating a new class. For perhaps most
of the four billion non-PC users, an SFF device
becomes their personal computer, communicator, wallet, and map, since the most common and
often only use of PCs is for email and Web browsing, both stateless applications.
• The One Laptop Per Child project aimed at a $100
PC (actual cost $188 circa November 2007) is possibly disruptive as a “minimal” PC platform with
just a factor-of-two cost reduction. This is achieved
by substituting 1GB of flash memory for rotating-disk-based storage, having a reduced screen size, a
small main memory, and built-in mesh networking
to reduce infrastructure cost, relying on the Internet
for storage. An initial selling price of $188 for the
OLPC XO-1 model, approximately half the price
of the least-expensive PCs in 2008—is characteristic
of a new sub-class. OLPC will be an interesting
development since Microsoft’s Vista requires almost
an order of magnitude more system resources.
The Challenge of Constant Price, 10–100 billion
Transistors per Chip, for General-Purpose Computing. It is not at all clear how such large,
leading-edge chips will be used in general-purpose
computers. The resilient and creative supercomputing and large-scale service-center communities will
exploit the largest multiple-core, multithreaded
chips; there seems to be no upper bound on the parallelism these
systems can utilize. However, without high-volume
manufacturing, the virtuous cycle stops: to
get the cost benefit for clusters, a high-volume personal computer market must drive
demand to reduce cost. In 2007, the degree of parallelism exploited for personal computing in desktop systems such as Linux and Vista was nil, which
indicates either the impossibility of the task or the inadequacy of our creativity.
Several approaches to using very large transistor counts
(chips with approximately 10 billion transistors) could be:
• Systems with primary memory on the chip, for
substantially lower-priced systems;
• Graphics processing, currently handled by specialized chips, which is perhaps the only well-defined application clearly able to exploit or absorb
unlimited parallelism in a scalable fashion for the
most expensive PCs (such as for gaming);
• Multiple-core and multithreaded processor evolution for large systems;
• FPGAs programmed using inherently parallel hardware-design languages like parallel C or
Verilog, which could provide a universality that we
have not previously seen; and
• Interconnected computers treated as software
objects, requiring new application architectures.
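Of these approaches, graphics-style data parallelism is the easiest to make concrete: the same operation is applied to every pixel independently, so the work divides evenly across however many cores are available. A minimal sketch in Python (the brighten function and the thread pool are illustrative assumptions, not taken from the text):

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel, gain=1.5):
    # Each pixel is processed independently: no shared state, no ordering.
    r, g, b = pixel
    return tuple(min(255, int(c * gain)) for c in (r, g, b))

pixels = [(10, 20, 30), (200, 200, 200), (255, 0, 0)]

# The same map runs unchanged on 1 worker or 1,000: the work divides perfectly.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(brighten, pixels))

print(result)  # [(15, 30, 45), (255, 255, 255), (255, 0, 0)]
```

Because no pixel depends on another, the same map scales from one core to thousands, which is why graphics workloads can absorb essentially unlimited parallelism.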
Independent of how the chips are programmed,
the biggest question is whether the high-volume PC
market can exploit anything other than the first path
in the preceding list. Consider the Carver Mead 11-year rule: the time from discovery and demonstration until use. Perhaps the introduction of a few
transactional memory systems has started the clock
using a programming methodology that claims to be
more easily understood. A simpler methodology through which
more programmers can yield reliable designs is
essential in order to utilize these multiprocessor chips.
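The transactional-memory methodology mentioned above can be illustrated with a toy optimistic-concurrency sketch; TVar and atomically are invented names for this illustration and do not correspond to any real system's API:

```python
import threading

class TVar:
    """A toy transactional variable: optimistic reads, validated atomic commit."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

def atomically(tvar, update):
    """Retry loop: snapshot the state, compute, commit only if nothing changed."""
    while True:
        snapshot_version = tvar.version
        snapshot_value = tvar.value
        new_value = update(snapshot_value)     # pure computation, no locks held
        with tvar._lock:                       # brief critical section to commit
            if tvar.version == snapshot_version:
                tvar.value = new_value
                tvar.version += 1
                return new_value
        # another transaction committed first; retry with fresh state

account = TVar(100)
threads = [threading.Thread(target=atomically, args=(account, lambda v: v + 1))
           for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(account.value)  # 150: no increment is lost despite concurrent commits
```

Each thread computes without holding locks and commits only if its snapshot is still current, retrying otherwise; the claim behind transactional memory is that this read-compute-commit pattern is easier for most programmers to reason about than fine-grained locking.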
Will SFF Devices Impact Personal Computing?
Users are likely to switch classes when a lower-priced class is able
to satisfy their needs while still increasing in functionality.
Since the majority of PC use is for communication
and Web access, evolving an SFF device into a single
communicator for voice, email, and Web access is
quite natural. Two things will happen to accelerate
the development of the class: people who have never