tion of the RISC microinstructions.
Any ideas RISC designers were using
for performance—separate instruction and data caches, second-level
caches on chip, deep pipelines, and
fetching and executing several instructions simultaneously—could
then be incorporated into the x86.
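To make that translation concrete, here is a toy sketch in Python of how a register-memory CISC instruction can be split into simple RISC-like micro-operations. The instruction format, the `tmp0` temporary, and the `decode_to_micro_ops` helper are illustrative assumptions only, not AMD's or Intel's actual decoders, which operate on binary x86 encodings in hardware:

```python
# Toy sketch of CISC-to-RISC translation (illustrative only; real x86
# decoders are far more complex and work on binary instruction encodings).

def decode_to_micro_ops(instr):
    """Split a simplified CISC instruction into RISC-like micro-ops.

    A register-memory instruction like ('add', 'mem[100]', 'eax')
    becomes a load, a register-register add, and a store -- each a
    simple operation that fits a classic RISC pipeline.
    """
    op, dst, src = instr
    if dst.startswith("mem"):
        return [
            ("load", "tmp0", dst),      # tmp0 <- memory
            (op, "tmp0", "tmp0", src),  # tmp0 <- tmp0 op src
            ("store", dst, "tmp0"),     # memory <- tmp0
        ]
    # Register-register instructions already look RISC-like.
    return [(op, dst, dst, src)]

# A register-memory add expands into three pipelinable micro-ops.
print(decode_to_micro_ops(("add", "mem[100]", "eax")))
```

Because each resulting micro-op touches either memory or registers, but not both, the back end of the pipeline sees only simple operations, which is what lets the RISC performance techniques listed above apply to the x86.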
AMD and Intel shipped roughly 350
million x86 microprocessors annually
at the peak of the PC era in 2011. The
high volumes and low margins of the
PC industry also meant lower prices
than those of RISC computers.
Given the hundreds of millions
of PCs sold worldwide each year, PC
software became a giant market.
Whereas software providers for the
Unix marketplace would offer different software versions for the different commercial RISC ISAs—Alpha,
HP-PA, MIPS, Power, and SPARC—the
PC market enjoyed a single ISA, so
software developers shipped “shrink
wrap” software that was binary compatible with only the x86 ISA. A much
larger software base, similar performance, and lower prices led the x86
to dominate both desktop computers
and small-server markets by 2000.
Apple helped launch the post-PC
era with the iPhone in 2007. Instead of
buying microprocessors, smartphone
companies built their own systems
on a chip (SoC) using designs from
other companies, including RISC
processors from ARM. Mobile-device
designers valued die area and energy
efficiency as much as performance,
disadvantaging CISC ISAs. Moreover, the
arrival of the Internet of Things vastly
increased both the number of processors and the required trade-offs in die
size, power, cost, and performance.
This trend increased the importance
of design time and cost, further disadvantaging CISC processors. In today’s post-PC era, x86 shipments have
fallen almost 10% per year since the
peak in 2011, while chips with RISC
processors have skyrocketed to 20 billion. Today, 99% of 32-bit and 64-bit
processors are RISC.
Concluding this historical review, we can say the marketplace settled the RISC-CISC debate: CISC won the later stages of the PC era, but RISC is winning the post-PC era.
Pundits rechristened Itanium "Itanic," after the ill-fated Titanic passenger ship. The marketplace again eventually ran out of patience, leading to a 64-bit version of the x86 as the successor to the 32-bit x86, and not Itanium.
The good news is that VLIW still matches narrower applications with small programs, simpler branches, and no caches, such as digital-signal processing.
RISC vs. CISC in the
PC and Post-PC Eras
AMD and Intel used 500-person design teams and superior semiconductor technology to close the performance gap between x86 and RISC. Again inspired by the performance advantages of pipelining simple vs. complex instructions, the instruction decoder translated the complex x86 instructions into internal RISC-like microinstructions on the fly. AMD
and Intel then pipelined the execu-
operations—two data transfers, two integer operations, and two floating-point
operations—and compiler technology
could efficiently assign operations into
the six instruction slots, the hardware
could be made simpler. Like the RISC
approach, VLIW and EPIC shifted work
from the hardware to the compiler.
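As a rough illustration of the compiler's share of that work, the sketch below packs independent operations into six-slot bundles (two memory, two integer, two floating-point). The `pack_bundles` helper and the tuple format are hypothetical, and a real EPIC compiler must also prove the operations independent and cope with branches, predication, and latencies:

```python
# Toy VLIW bundle packer (illustrative only). Each bundle offers six
# slots: two memory, two integer, and two floating-point operations.
SLOTS_PER_BUNDLE = {"mem": 2, "int": 2, "fp": 2}

def pack_bundles(ops):
    """Greedily pack (kind, name) operations into VLIW-style bundles.

    Assumes the operations are already known to be independent; a real
    compiler would have to establish that with dependence analysis.
    """
    bundles = []
    current, free = [], dict(SLOTS_PER_BUNDLE)
    for kind, name in ops:
        if free[kind] == 0:  # no slot of this kind left: start a new bundle
            bundles.append(current)
            current, free = [], dict(SLOTS_PER_BUNDLE)
        free[kind] -= 1
        current.append((kind, name))
    if current:
        bundles.append(current)
    return bundles

ops = [("mem", "ld1"), ("mem", "ld2"), ("int", "add1"),
       ("int", "add2"), ("fp", "fmul"), ("fp", "fadd"), ("mem", "ld3")]
print(pack_bundles(ops))  # seven operations overflow into a second bundle
```

When the compiler can fill most slots, as in the first bundle here, the hardware needs no dynamic scheduling logic; when it cannot, slots go to waste, which is one reason the approach struggled on irregular integer code.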
Working together, Intel and Hewlett
Packard designed a 64-bit processor based
on EPIC ideas to replace the 32-bit x86.
High expectations were set for the first
EPIC processor, called Itanium by Intel and Hewlett Packard, but the reality did not match its developers’ early
claims. Although the EPIC approach
worked well for highly structured
floating-point programs, it struggled
to achieve high performance for integer programs that had less predictable cache misses or less-predictable
branches. As Donald Knuth later noted:21 “The Itanium approach ...
was supposed to be so terrific—
until it turned out that the wished-for
compilers were basically impossible to write.”
Figure 3. Transistors per chip and power per mm2, 2000–2016 (technology in nm).
Figure 2. Transistors per chip of Intel microprocessors vs. Moore's Law (1975 version).