since changes to a module’s interface require renegotiating boundaries with all communicating modules.
Long-term persistence provides the
stability necessary for economies of
scale to take hold. It is such economies that have afforded the von Neumann machine a better cost/processing-power ratio than the numerous
alternative architectures that for decades have failed to supplant it. At
the same time, the high costs of infrastructural investments ensure computing resources are repurposed rather than merely replaced. The resource
stacks must thus compose with different types of materials—serial and
parallel architectures, twisted pair
and fiber, tape and flash storage—and
ensure their backward compatibility.
Infrastructural change thus proceeds
conservatively through mutation and
hybridization, rather than making an
outright break with the past.
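A minimal sketch of what such hybridization can look like in code (the interface and device classes here are hypothetical illustrations, not drawn from any actual system): a stable block-storage interface lets old and new media coexist behind one boundary, so flash is composed with tape rather than replacing it outright.

from abc import ABC, abstractmethod

class BlockStore(ABC):
    """A stable interface: callers negotiate against this boundary only."""
    @abstractmethod
    def read(self, block: int) -> bytes: ...
    @abstractmethod
    def write(self, block: int, data: bytes) -> None: ...

class TapeStore(BlockStore):
    """Legacy medium, kept for backward compatibility."""
    def __init__(self):
        self._blocks: dict[int, bytes] = {}
    def read(self, block: int) -> bytes:
        # A real tape would seek sequentially; that cost is hidden by the interface.
        return self._blocks.get(block, b"")
    def write(self, block: int, data: bytes) -> None:
        self._blocks[block] = data

class FlashStore(BlockStore):
    """New medium, slotted in behind the same interface."""
    def __init__(self):
        self._blocks: dict[int, bytes] = {}
    def read(self, block: int) -> bytes:
        return self._blocks.get(block, b"")
    def write(self, block: int, data: bytes) -> None:
        self._blocks[block] = data

class TieredStore(BlockStore):
    """Hybridization rather than replacement: hot blocks go to flash,
    everything else to tape, with no change to the caller-facing API."""
    def __init__(self, hot: BlockStore, cold: BlockStore, hot_limit: int):
        self.hot, self.cold, self.hot_limit = hot, cold, hot_limit
    def read(self, block: int) -> bytes:
        return self.hot.read(block) or self.cold.read(block)
    def write(self, block: int, data: bytes) -> None:
        (self.hot if block < self.hot_limit else self.cold).write(block, data)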
Trade-offs all the way down. Modularity is a powerful design strategy:
by decoupling abstraction from implementation, it breaks down complex systems into independent yet coordinated organizational units and
provides for flexibility in coping with
technical change. In doing so, modularity constitutes the primary social
and technical order of the computing
infrastructure. It is thus remarkable
that the costs of modularity are rarely
noted in the literature. The flexibility
modularity brings to abstraction and
implementation always comes at
the price of efficiency trade-offs—for
example, the von Neumann architecture and the serial model of programming it inherently favors. Because of
these infrastructural biases, the pax
romana of modularity is always under
threat, under pressure to extract more
computational work from the current
organization of the stacks. This tension becomes particularly apparent as
new types of computational resources
require integration within the infrastructure: the shift from single-core to
multicore, magnetic to flash media,
wireline to wireless will reverberate
throughout the stacks, as efficiency
trade-offs are renegotiated.¹ Abstraction and implementation are thus always in tension, a tension that provides a key entry point for analyzing the role of abstraction as the defining concept of the field.
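To make the efficiency trade-off concrete, consider a hedged sketch (my illustration, not the author's): the same reduction written twice, once against the serial model of programming the von Neumann architecture favors, and once restructured into independent partial sums so that an implementation is free to parallelize.

from functools import reduce
from multiprocessing import Pool

def serial_sum(xs: list[int]) -> int:
    # The fold's left-to-right data dependence encodes the serial model:
    # each step waits on the previous one, so no parallelism is exposed.
    return reduce(lambda acc, x: acc + x, xs, 0)

def parallel_sum(xs: list[int], workers: int = 4) -> int:
    # Reorganizing the same computation as independent partial sums
    # renegotiates the trade-off: associativity becomes part of the contract.
    chunks = [xs[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert serial_sum(data) == parallel_sum(data)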
Infrastructure manages scarce
resources. Data centers already consume 3% of the world’s total electrical
output, a number that uncomfortably
connects computing cycles to coal extraction.⁴ The computing infrastructure is, however, concerned not merely
with managing the scarcity of electrical power, but also that of processing,
storage, and connectivity. Abstractions
relieve programmers not only from the
burden of keeping track of the finiteness of resources, but also from managing how
these are shared among competing
applications. But sharing a resource
also inevitably entails various efficiency trade-offs, favoring some types
of applications over others. Reliance
on networked computing for an ever
broader range of essential services—
from telesurgery to intelligent transportation and national security—will
require providers to devise policies for
prioritizing competing demands on
shared computational resources, with
much more significant implications
than mere jittery music videos.
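What such a prioritization policy might look like can be sketched in miniature (a toy model with assumed priority classes; production systems use far richer quality-of-service machinery): a scheduler drains a shared link of fixed capacity, serving latency-critical applications before bulk streams.

import heapq
from dataclasses import dataclass, field

# Hypothetical priority classes: lower number means served first.
PRIORITY = {"telesurgery": 0, "transportation": 1, "video": 2}

@dataclass(order=True)
class Request:
    priority: int
    seq: int                       # FIFO tie-breaker within a class
    app: str = field(compare=False)
    kilobits: int = field(compare=False)

def drain(requests: list[Request], capacity_kb: int) -> list[str]:
    """Serve a shared link of fixed capacity, highest priority first.
    Whatever does not fit is the efficiency trade-off made visible:
    somebody's packets are left unserved this round."""
    heapq.heapify(requests)
    served = []
    while requests and capacity_kb > 0:
        req = heapq.heappop(requests)
        if req.kilobits <= capacity_kb:
            capacity_kb -= req.kilobits
            served.append(req.app)
    return served

queue = [
    Request(PRIORITY["video"], 0, "video", 800),
    Request(PRIORITY["telesurgery"], 1, "telesurgery", 400),
    Request(PRIORITY["transportation"], 2, "transportation", 300),
]
print(drain(queue, capacity_kb=1000))  # the 800kb video stream loses out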
Slow drift. The computing infrastructure is a constantly evolving system, continuously responding to and integrating growth in size and traffic, technical evolution and decay, new applications, services and implementations, emergent behaviors, and so forth. This evolution is constrained by the dynamics outlined previously—persistence, efficiency trade-offs, and the necessity to share scarce resources. These constraints induce a limited set of possible evolutionary paths, such as co-design of layers, encapsulation, and insertion of new layers (so-called middleware). Thus, infrastructural development never proceeds from some clean slate, but rather from the push and pull of competing stakeholders working to shift its evolution in the most advantageous direction. Even the OSI model, the best example of top-down, a priori modular decomposition of a resource stack, was immediately challenged in the marketplace by the more widely implemented TCP/IP stack. Always only partially responsive to rational control, infrastructural evolution
is characterized as much by drift and opportunity as by planned design.³
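The insertion of new layers admits a compact illustration (a generic middleware sketch of my own, not a description of any particular stack): a cache slipped between consumers and an existing store, presenting the same interface on both faces so that neither side must renegotiate its boundary.

class CachingMiddleware:
    """A new layer inserted between consumers and an existing store
    exposing read/write: same interface on both faces, so neither
    side of the boundary has to change."""
    def __init__(self, backend, capacity: int = 128):
        self.backend = backend
        self.capacity = capacity
        self._cache: dict[int, bytes] = {}
    def read(self, block: int) -> bytes:
        if block not in self._cache:
            if len(self._cache) >= self.capacity:
                # Crude FIFO eviction: drop the oldest-inserted entry.
                self._cache.pop(next(iter(self._cache)))
            self._cache[block] = self.backend.read(block)
        return self._cache[block]
    def write(self, block: int, data: bytes) -> None:
        self._cache[block] = data
        self.backend.write(block, data)  # write-through: no semantic break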
1. Asanovic, K. et al. A view of the parallel computing landscape. Commun. ACM 52, 10 (Oct. 2009), 56–67.
2. Blanchette, J-F. A material history of bits. Journal of the American Society for Information Science and Technology 62, 6 (June 2011), 1042–1057.
3. Ciborra, C. From Control to Drift: The Dynamics of Corporate Information Infrastructures. Oxford University Press, 2000.
4. Cook, G. and Van Horn, J. How dirty is your data? Technical report, Greenpeace International, 2011.
5. Lin, H.S., editor. Report of a Workshop on the Scope and Nature of Computational Thinking. National Academies Press, Washington, D.C., 2010.
6. Messerschmitt, D.G. Networked Applications: A Guide to the New Computing Infrastructure. Morgan Kaufmann Publishers, 1999.
7. Pappano, L. The master's as the new bachelor's. The New York Times (July 22, 2011), page ED16.
8. Van Schewick, B. Internet Architecture and Innovation. The MIT Press, 2010.
9. Wing, J.M. Computational thinking. Commun. ACM 49, 3 (Mar. 2006), 33–35.
Jean-François Blanchette (email@example.com) is an associate professor in the Department of Information Studies at the University of California, Los Angeles.