if you opened one up and looked at what was actually
in there—processing vertices in particular, but for some
machines, processing the fragments—it was a programmable engine. It’s just that it was not programmable by
you; it was programmable by me. From an architecture
standpoint, that’s a fairly subtle distinction. What we
weren’t doing was selling application development. It’s
a little like mobile phones now. In general, they’re not
extensible except by a very small set of people, so they
aren’t generally thought of as programmable, even though they are.
All along, those SGI machines had microcode engines
that were programmable; we just weren’t exposing the
programmability to the world. Frankly, part of the reason
was that we didn’t have control of those components.
We went out to the market and said, “You know, the
Intel 860 is the best floating-point-per-dollar solution this
time, so we’ll put in one of those and build a microcode
engine that runs it.”
Then the next time, we would go out and say, “Mmm,
this TI 40-bit floating-point gizmo is the best one, so we’ll
use that.” We couldn’t promise the same coding environment generation after generation, so we couldn’t reveal
that it was programmable or else our customers would get
very upset. We tried that. It actually does upset customers
when you let them invest in coding and then sell them
another machine that’s faster but doesn’t run their code.
So for a variety of sort of tactical reasons, the programmability wasn’t exposed.
The story is more complicated now because there
is less programmability in some areas. But the general
notion that people woke up eight years ago and said,
“Oh, it makes sense to put programmability in these
things,” is definitely an oversimplification. This architectural
trend has been smoother than that.
TD There was, in fact—and I was here for this—an awful
lot of resistance from the big players in the GPU business
to exposing that programmability.
KA I was part of that.
TD I like to think that the transition happened because
of us in the movie-quality imaging business. We pressed
hard for it and demonstrated that if you were going to
make high-quality images, this was the way you were
going to do it.
PH They always knew you were right; it’s just that it was
too costly for them to consider. The market opportunity
wasn’t there. But the games eventually started getting so
sophisticated that there was no way of making them look
better without exposing programmability to the [John]
Carmacks and [Tim] Sweeneys of the world.
KA Games were a big enough market that you could
afford to do it. That’s the part that’s less obvious. It cost
a huge amount of engineering, and it took a lot of steps
and a lot of years to build this into the marketplace,
which is bigger than movies at this point. It costs a lot
of money to engineer these things, so it wasn’t like you
could just wake up one day and say we ought to do it. It
took all these years to build up the capital expenditure
capability that an Nvidia or an ATI has to actually do it.
If mistakes had been made along the way—big ones—
it wouldn’t have happened. There are lots of examples
of marketplaces with custom hardware that hasn’t evolved
as beautifully into this space as graphics has.
has. I think a lot of that is market opportunity; it’s not
pure technology. Those markets just wouldn’t support it.
TD If you look at the big computing machines in the
world, you see that most of them are devoted to fluid
dynamics and electrostatic simulation, for sort of obvious reasons.
PH And n-body calculations. To me, graphics is mostly
about simulation. There are basic computational building
blocks that go into simulation. To the extent that graphics uses a certain set of those in certain ways, a lot of
other people use other sets of those in other ways. Once
you start seeing the building blocks designed for simulation in a fairly general-purpose parallel way, you can say,
“Yeah, it’s not just for graphics; it could be used for other
things.” That’s what other people are starting to find out.
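[To illustrate the point PH is making, here is a minimal sketch, not from the conversation itself: one data-parallel building block serves both a graphics workload and a physics-style simulation. NumPy stands in for the GPU, and all function names and sizes are illustrative.]

```python
import numpy as np

# Graphics kernel: transform a large batch of vertices by a 3x3 matrix
# (np.eye(3) stands in for a real rotation).
def transform(vertices, matrix):
    return vertices @ matrix.T

# Simulation kernel: one Euler integration step for particle positions.
def euler_step(positions, velocities, dt):
    return positions + dt * velocities

# Underneath, both are the same building block: an identical operation
# applied independently across every element of a large array.
points = np.random.rand(1_000_000, 3)
vel = np.random.rand(1_000_000, 3)

print(transform(points, np.eye(3)).shape)   # graphics use
print(euler_step(points, vel, 1e-2).shape)  # simulation use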
TD We’ve heard that GPU performance increases faster
than Moore’s law. Is that just low-hanging fruit because
of the primitive state of GPU architectures, or is this trend
going to continue? Are those CPU and GPU curves going to converge?
KA Moore’s law, just to be clear, has to do with transistor count and is formulated, I think, as an economic law
that the number of transistors on the most economically
produced die size will go up exponentially—and it turned
out around 50 percent a year has been the number. So 1.5
is the compound average growth rate. But remember, it isn’t a
performance law; it is a transistor-count law.
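[As a worked example of the compounding KA cites, here is a minimal sketch; the base year and starting transistor count are hypothetical, chosen only for illustration.]

```python
# Compound transistor-count growth at ~50 percent a year, i.e., a 1.5x
# annual factor, as stated above. Base year and count are hypothetical.
BASE_YEAR = 1998
BASE_COUNT = 10_000_000   # hypothetical: 10 million transistors
GROWTH = 1.5              # compound annual growth factor from the conversation

for years in (0, 2, 5, 10):
    count = BASE_COUNT * GROWTH ** years
    print(f"{BASE_YEAR + years}: ~{count / 1e6:,.0f}M transistors")
```

[Note that 1.5 squared is 2.25, so the count a little more than doubles every two years, which is the form of the law most people quote.]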
TD Sure, but performance is related.
KA Performance is related to both transistor count and
clock speed, and the clock speed mattered a lot. The clock
speed has been going up around 20 percent a year.
PH One way to think of it is sort of as a rate-cubed effect.
You get a square for the area, and if something shrinks
in size by a half, you get four times as many of them, but
the clock also goes up by roughly a factor of two.
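[PH’s rate-cubed arithmetic, worked through in a minimal sketch; the numbers are illustrative, and the throughput line assumes the extra transistors can all be put to work in parallel.]

```python
# Halve the linear feature size: each transistor takes a quarter of the
# area (4x as many fit), and the clock roughly doubles, so raw throughput
# scales with the cube of the linear shrink factor.
shrink = 2.0                      # linear shrink per full process generation

transistors = shrink ** 2         # area effect: 4x as many transistors
clock = shrink                    # clock rises roughly with the shrink: 2x
throughput = transistors * clock  # combined: shrink**3 = 8x

print(f"transistors x{transistors:.0f}, clock x{clock:.0f}, "
      f"throughput x{throughput:.0f} per full shrink")
```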
KA It’s not purely linear, but if you go back and look at