were actually needed. Then people
could have switched to 64/32-bit operating systems and stopped upgrading
32-bit-only systems, allowing a smooth
transition. Vendors naturally varied
in their timing, but shipments ranged
from “just barely in time” to “rather
late.” This is somewhat odd, considering the long, well-known histories of insufficient address bits, combined with
the clear predictability of Moore’s Law.
All too often, customers were unable to
use memory they could easily afford.
Some design decisions are easy to
change, but others create long-term
legacies. Among those illustrated here:
˲ Some unfortunate decisions may
be driven by real constraints (1970:
PDP-11 16-bit).
˲ Reasonable-at-the-time decisions
turn out in 20-year retrospect to have
been suboptimal (1976–1977: usage of
C data types). Some better usage recommendations could have saved a great
deal of toil and trouble later.
˲ Some decisions yield short-term
benefits but incur long-term problems
(1964: S/360 24-bit addresses).
˲ Predictable trends are ignored,
or transition efforts underestimated
(1990s: 32 -> 64/32).
Constraints. Hardware people needed to build 64/32-bit CPUs at the right
time—neither too early (extra cost, no
market), nor too late (competition, angry customers). Existing 32-bit binaries
needed to run on upward-compatible
64/32-bit systems, and they could be expected to coexist forever, because many
would never need to be 64 bits. Hence,
32 bits could not be a temporary compatibility feature to be quickly discarded in later chips.
Software designers needed to agree
on whole sets of standards; build dual-
mode operating systems, compilers,
and libraries; and modify application
source code to work in both 32- and 64-
bit environments. Numerous details
had to be handled correctly to avoid
redundant hardware efforts and maintain software sanity.
Solutions. Although not without
subtle problems, the hardware was
generally straightforward, and not
that expensive—the first commercial
64-bit micro’s 64-bit data path added
at most 5% to the chip area, and this
fraction dropped rapidly in later chips.
Most chips used the same general approach of widening 32-bit registers to
64 bits. Software solutions were much
more complex, involving arguments
about 64/32-bit C, the nature of existing software, competition/cooperation
among vendors, official standards, and
influential but totally unofficial ad hoc standards.
Legacies. The IBM S/360 is 40 years
old and still supports a 24-bit legacy addressing mode. The 64/32 solutions are
at most 15 years old, but will be with us,
effectively, forever. In 5,000 years, will
some software maintainer still be muttering, “Why were they so dumb?”
We managed to survive the Y2K
problem—with a lot of work. We’re still
working through 64/32. Do we have any
other problems like that? Are 64-bit
CPUs enough to help the “Unix 2038”
problem, or do we need to be working
harder on that? Will we run out of 64-bit
systems, and what will we do then? Will
IPv6 be implemented widely enough?
All of these are examples of long-lived problems for which modest foresight may save later toil and trouble.
But software is like politics: Sometimes
we wait until a problem is really painful
before we fix it.
Problem: CPU must address memory
Any CPU can efficiently address some amount of virtual memory, most conveniently by flat addressing, in which all or most of the bits in an integer register form a virtual memory address; that virtual space may be larger or smaller than actual physical memory. Whenever affordable physical memory exceeds what is easily addressable, it stops being easy
to throw memory at performance problems, and programming complexity
rises quickly. Sometimes, segmented
memory schemes have been used with
varying degrees of success and programming pain. History is filled with
awkward extensions that added a few
bits to extend product life a few years,
usually at the cost of hard work by operating-system people.
Moore’s Law has increased affordable memory for decades. Disks have
grown even more rapidly, especially
since 1990. Larger disk pointers are
more convenient than smaller ones, although less crucial than memory pointers. These interact when mapped files
are used, rapidly consuming virtual address space.
In the mid-1980s, some people started thinking about 64-bit micros—for example, the experimental systems built
by DEC (Digital Equipment Corporation). MIPS Computer Systems decided
by late 1988 that its next design must be
a true 64-bit CPU, and announced the
R4000 in 1991. Many people thought
MIPS was crazy or at least premature. I
thought the system came just barely in
time to develop software to match increasing DRAM, and I wrote an article
to explain why. The issues have not
changed very much since then.
N-bit CPU. By long custom, an N-bit
CPU implements an ISA (instruction set architecture)
[Table fragment: … includes long long; 16-bit, 16-bit addressing; IBM S/370 family — virtual memory, 24-bit addresses, but multiple user address spaces allowed.]