there is a constant that you have recognized could replace the use of the variable. You then know, say, which way a branch is going to go. You’ve built up this infrastructure of analysis and you’re ready to make the transformation—but then the results of the analysis are obsolete, so you have to start again. That was a problem we struggled with early on: How do you avoid redoing the analysis? It got particularly bad with interprocedural activities.
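As a rough illustration of the situation being described, here is a minimal, invented sketch: a single forward constant-propagation pass proves a branch’s direction, and the final comment notes why folding that branch would make the facts gathered past it stale. The statement encoding and names are hypothetical, not any actual compiler’s representation.

```python
# Toy constant propagation: one forward pass that records variables proven
# constant and flags branches whose direction is therefore known.
# stmts: list of ('assign', var, value) or ('branch', var, target).

def propagate_constants(stmts):
    known = {}      # var -> constant value proven so far
    foldable = []   # (index, target, taken?) for branches with known direction
    for i, stmt in enumerate(stmts):
        if stmt[0] == 'assign':
            _, var, value = stmt
            known[var] = value
        elif stmt[0] == 'branch':
            _, var, target = stmt
            if var in known:
                # The condition is a proven constant: the branch direction is decided.
                foldable.append((i, target, bool(known[var])))
    return known, foldable

program = [
    ('assign', 'x', 0),
    ('branch', 'x', 'L1'),   # condition x is provably the constant 0
    ('assign', 'y', 5),
]
known, foldable = propagate_constants(program)
# Actually folding the branch at foldable[0] deletes or rewires code, so
# facts computed beyond that point may no longer hold. That is the
# re-analysis problem the answer above describes.
```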
Is there some simple insight or overarching idea that helps you to avoid having to completely redo the computation?
Vivek Sarkar was one of the key people on that, but Dave Kuck—this is at the core of KAI’s work, too. That group described it as “the oracle.” You assign costs to each of the instructions, and you can do it in a hierarchical form, so this block gets this cost, and this block has that cost, and then do a cost analysis. This is the time it’s going to take. Then there’s the overhead cost of having the parallelism.
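The hierarchical cost accounting just described might be sketched like this. The instruction costs, the `OVERHEAD` constant, and the decision rule are invented stand-ins for illustration, not the actual heuristics of KAI’s or anyone else’s oracle.

```python
# Toy hierarchical cost model: sum per-instruction costs into a per-block
# cost, then compare estimated serial time against estimated parallel time
# (slowest block plus a fixed overhead for setting up the parallelism).

INSTR_COST = {'load': 2, 'add': 1, 'mul': 3, 'store': 2}  # assumed unit costs
OVERHEAD = 10  # assumed fixed cost of spawning/joining parallel work

def block_cost(block):
    """Cost of one block = sum of its instruction costs."""
    return sum(INSTR_COST[op] for op in block)

def should_parallelize(blocks):
    """Is running the blocks in parallel estimated to beat running them serially?"""
    serial = sum(block_cost(b) for b in blocks)
    parallel = max(block_cost(b) for b in blocks) + OVERHEAD
    return parallel < serial, serial, parallel

loop_bodies = [
    ['load', 'mul', 'store'],          # cost 7
    ['load', 'add', 'store'],          # cost 5
    ['load', 'mul', 'add', 'store'],   # cost 8
]
ok, serial, parallel = should_parallelize(loop_bodies)
```

With these made-up numbers, the parallel estimate (8 + 10 = 18) beats the serial estimate (20), so the model would choose to parallelize.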
Earlier, you said that Kuck was very open about everything he was doing, with one exception—
The oracle! “What have you got in that thing?” [laughs] “We’re not going to tell you!” So we built our own variant of it, which was a very powerful technique.
What else should we mention?
We talked about the NSA work that wasn’t published. That was, for me, a mind-changer that led to my feeling very strongly about domain-specific languages.
Are you for them or against them?
Oh, okay. Let’s be clear! [laughs]
I’m going to try something very foolish: summarize your career in one paragraph, then ask you to critique it.
A major focus of your career has been that, rather than inventing new programming languages or language features and trying to get people to program in them, you focused on taking programs as they are written, or as programmers like to write them, and making them run really effectively and efficiently on target machines. One of the many ways you do this, but a very important one, is to do many kinds of sophisticated analysis and optimization of the code and to find out as much as you can about the characteristics of the program without actually running it. So these tend to be static techniques, and very sophisticated ones. While you have worked with and pioneered quite a number of them, some of the most interesting involve using graphs as a representation medium for the program and using a strategy of propagating information around the graph. Because a program can be represented as a graph in more than one way, there’s more than one way in which to propagate that information. In some of these algorithms in particular, the information that’s being propagated around the graph is in the form of sets—for example, sets of variable names. As a strategy for making some of these algorithms efficient enough to use, you’ve represented sets as bit vectors and decomposed the graphs using interval analysis in order to provide an effective order in which to process the nodes. In doing this, you have built a substantial sequence of working systems; these aren’t just paper designs. You build a great system, and then you go on and build the next one, and so on. These all actually work on code and take real programs that aren’t artificial benchmarks and make them run.
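The bit-vector data-flow style this summary describes can be sketched in miniature. This is an illustrative toy, not Allen’s actual algorithms: sets of variable names are packed into the bits of an integer, and a liveness computation iterates over a tiny invented control-flow graph until it stabilizes. A fixed reverse node order stands in for the interval-derived processing order the summary mentions.

```python
# Sets as bit vectors: each variable name gets one bit; set union is
# bitwise OR and set difference is AND with the complement.

VARS = ['a', 'b', 'c']
BIT = {v: 1 << i for i, v in enumerate(VARS)}   # variable name -> bit

def to_set(bits):
    """Decode a bit vector back into a set of variable names."""
    return {v for v in VARS if bits & BIT[v]}

# node -> (use bits, def bits, successor nodes); a three-node straight line.
cfg = {
    0: (BIT['a'], BIT['b'], [1]),   # uses a, defines b
    1: (BIT['b'], BIT['c'], [2]),   # uses b, defines c
    2: (BIT['c'], 0, []),           # uses c
}

def liveness(cfg, order):
    """Iterate live-in/live-out to a fixed point, visiting nodes in `order`."""
    live_in = {n: 0 for n in cfg}
    live_out = {n: 0 for n in cfg}
    changed = True
    while changed:
        changed = False
        for n in order:             # a good order makes this converge quickly
            use, defs, succs = cfg[n]
            out_bits = 0
            for s in succs:
                out_bits |= live_in[s]          # union of successors' live-in
            in_bits = use | (out_bits & ~defs)  # live-in = use + (out - defs)
            if in_bits != live_in[n] or out_bits != live_out[n]:
                live_in[n], live_out[n] = in_bits, out_bits
                changed = True
    return live_in, live_out

live_in, live_out = liveness(cfg, order=[2, 1, 0])
```

Because liveness flows backward, processing nodes in reverse here reaches the fixed point in a single sweep; on real programs, interval analysis supplies that kind of effective ordering for graphs with loops.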
Now, three goofy questions. What’s your favorite language to compile?
FORTRAN, of course!
What’s your favorite language to program in?
I guess it would have to be FORTRAN.
Okay, now, if you had to build a compiler that would run on a parallel machine, what language would you use to write that compiler?
Probably something like SETL or a functional language. And I’m very intrigued by ZPL. I really liked that language.
Any advice for the future?
Yes, I do have one thing. Students aren’t joining our field, computer science, and I don’t know why. It’s just such an amazing field, and it’s changed the world, and we’re just at the beginning of the change. We have to find a way to get our excitement out to be more publicly visible. It is exciting—in the 50 years that I’ve been involved, the change has been astounding.