the automated matching of subscriptions/interests to
content descriptions. Content is coded as it moves
through the nodes in the network. A snapshot of
“packets” (on an edge or stored in a node) at any given
point in the graph would show they contain a coded
multiplex of multiple sources of data. Hence, packets, flow-level descriptions, conventional capacity assignments, and end-to-end and hop-by-hop protocols would all fit poorly throughout this network architecture. The data is in some sense a shifting
interference pattern that emerges from the mixing and
merging of all sources.
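As a toy illustration of what such a coded multiplex might look like, the sketch below mixes two source payloads with a trivial XOR code; real systems would use something richer, such as random linear network codes, but the decoding idea is the same: a node holding enough independent combinations can recover the originals. The names and the code itself are illustrative assumptions, not part of any proposed architecture.

```python
# Hypothetical sketch: a "packet" on an edge carries a coded mix of
# sources rather than data from any single flow. A node that already
# holds one source can recover the other from the mix.

def mix(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length payloads into one coded packet (XOR code)."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

src_a = b"sensor-1"
src_b = b"video-00"

coded = mix(src_a, src_b)        # what a snapshot of the edge would show
recovered_b = mix(coded, src_a)  # XOR is its own inverse, so this decodes src_b

assert recovered_b == src_b
```

Note that the coded packet belongs to no single flow, which is exactly why flow-level descriptions fit it poorly.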
Have we also unintentionally thrown out the
legacy system with the new paradigm? What about
person-to-person voice calls and their 21st-century equivalent, real-time gaming? If we could push the idea of swarms or waves down into the network architecture, how would the architecture implement circuit-on-a-wave and IP-on-a-wave?
Network architects could do this the same way (inefficiently) they implement VoIP—through a circuit on IP. One is at liberty to run multiple legacy networks, supporting one-to-one flows using separate communications systems, especially since those networks are already available. On the other hand, how would such flows be supported on the wave? Perhaps through some minimalist publication-and-subscription system.
Other ways to understand this design concept are circulating in the research community. One is the data-oriented paradigm, in which information is indexed by keys and retrieved by subscription. Protocols are declarative. All nodes are caches of content, indexes, and buffers. All nodes forward information while caching, in the style of mobile ad hoc, delay-tolerant, and peer-to-peer systems; these communication methods are unified in the data-oriented paradigm.
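The essence of such a node can be sketched in a few lines: content is indexed by key, every node caches what it forwards, and retrieval is by subscription rather than by addressing a host. The class and method names below are my own illustrative assumptions, not the API of any real data-oriented system.

```python
# Minimal, hypothetical sketch of a data-oriented node: a cache of
# content, an index of pending interests, and a forward-while-caching
# publish path. The rendezvous between publication and consumption
# happens inside the node, not at an addressed end-point.

from typing import Callable, Dict, List

class DataOrientedNode:
    def __init__(self) -> None:
        self.cache: Dict[str, bytes] = {}   # node as content cache
        self.pending: Dict[str, List[Callable[[bytes], None]]] = {}

    def subscribe(self, key: str, deliver: Callable[[bytes], None]) -> None:
        """Register interest in a key; deliver now if cached, else later."""
        if key in self.cache:
            deliver(self.cache[key])
        else:
            self.pending.setdefault(key, []).append(deliver)

    def publish(self, key: str, data: bytes) -> None:
        """Store content under its key and satisfy any waiting subscribers."""
        self.cache[key] = data              # cache while forwarding
        for deliver in self.pending.pop(key, []):
            deliver(data)

node = DataOrientedNode()
got: list = []
node.subscribe("video/clip-42", got.append)  # interest arrives first
node.publish("video/clip-42", b"frames...")  # publication completes the rendezvous
assert got == [b"frames..."]
```

Because subscriptions can arrive before or after publication, the same mechanism serves delay-tolerant delivery and peer-to-peer caching alike.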
No network architect interested in developing a
grand unified network architecture would be concerned with micromanaging fine-grain resources. For
a network architect, efficiency is measured at the
global level. Traditional activities may be maddeningly inefficient, but most content—video, audio, and
sensor data—is handled with maximum efficiency.
Content is also handled through multi-path, coded
delivery, with good isolation and protection properties
through the statistics of scaling, not by virtue of local
resource reservation.
So, unlike traditional network architectural goals,
the wave-particle duality model I’ve described here
pursues a different primary goal. In it, the notion of a
wave is optimized for resilience through massive scale,
not for local efficiency. Moreover, it supports group communication and mobility naturally, since the rendezvous in the network between publication and consumption is dynamic, rather than through the coordination of end-points in the classical end-to-end approach.
The details of the wave model are likely to keep
researchers busy for the next 20 years. My aim here is
to get them to think outside the end-to-end communications box in order to solve the related problems, if
they are indeed the right problems, or to propose a
better problem statement to begin with.
One might ask many questions about a future wave-particle network architecture, including: What is the role of intermediate and end-user nodes? How do they differ? Where would be the best locus for a rendezvous between publication and consumption? Would each rendezvous depend on the popularity of content and its distance from the publisher, subscriber, or mid-point? What codes should be used?
How can we build optical hardware to achieve software re-coding? What role might interference in radio
networks play in the wave-particle network model?
How can we achieve privacy for individual users and
their communications in a network that mixes data
packets?
A future Internet based on this wave-particle duality would be more resilient to failure, noise, and attack than the current architecture, where
ends and intermediate nodes on a path are sitting
ducks for attacks, whether deliberate or accidental.
How might its architects quantify the performance of
such a system? Do they need a new set of modeling
tools—replacing graph theory and queuing systems—
to describe it? Finally, if network control is indeed a
distributed system, can the idea of peer-to-peer be
used as a control plane?
I encourage you not to take my wave-particle duality analogy too seriously, especially since I am suspicious of any grand unified network model myself. But
I do encourage you to use the idea to disrupt your
own thinking about traditional ideas. In the end, perhaps, we will together discover that many traditional
ideas in networking are fine as is, but all are still worth
checking from this new perspective.
JON CROWCROFT (Jon.Crowcroft@cl.cam.ac.uk) is the Marconi
Professor of Communications Systems in the Computer Laboratory at
the University of Cambridge, Cambridge, U.K.