[Figure caption: Using the DIMES Project data sets that describe the structure of the Internet, Chris Harrison of Carnegie Mellon University created this visualization illustrating how cities across the globe are interconnected (by router configuration and not physical backbone). In total, there are 89,344 connections.]
mance in video distribution, it is natural to consider a P2P (peer-to-peer)
architecture. P2P can be thought of
as taking the distributed architecture
to its logical extreme, theoretically
providing nearly infinite scalability.
Moreover, P2P offers attractive economics under current network pricing
structures.
In reality, however, P2P faces some
serious limitations, most notably because the total download capacity of a
P2P network is throttled by its total uplink capacity. Unfortunately, for consumer broadband connections, uplink
speeds tend to be much lower than
downlink speeds: Comcast’s standard
high-speed Internet package, for example, offers 6Mbps for download
but only 384Kbps for upload (one-sixteenth of download throughput).
This means that in situations such
as live streaming where the number of
uploaders (peers sharing content) is
limited by the number of downloaders (peers requesting content), average
download throughput is equivalent
to the average uplink throughput and
thus cannot support even mediocre
Web-quality streams. Similarly, P2P
fails in “flash crowd” scenarios where
there is a sudden, sharp increase in demand, and the number of downloaders greatly outstrips the capacity of uploaders in the network.
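The arithmetic behind this bottleneck can be made concrete with a rough back-of-the-envelope sketch. The 6Mbps/384Kbps figures come from the Comcast example above; the peer count is hypothetical. Because every bit a peer downloads must be uploaded by some other peer, aggregate download throughput in a pure P2P live stream can never exceed aggregate uplink capacity.

```python
# Back-of-the-envelope model of the P2P uplink bottleneck.
# Peer count is hypothetical; the speeds are the consumer
# broadband figures cited above.

PEERS = 10_000                 # peers watching a live stream
AVG_UPLINK_KBPS = 384          # typical consumer uplink
AVG_DOWNLINK_KBPS = 6_000      # typical consumer downlink

# Every downloaded bit must be uploaded by some peer, so the
# aggregate download rate is capped by the aggregate uplink rate.
total_uplink_kbps = PEERS * AVG_UPLINK_KBPS
avg_download_kbps = total_uplink_kbps / PEERS   # equals AVG_UPLINK_KBPS

print(f"Average sustainable download per peer: {avg_download_kbps:.0f} Kbps")
print(f"Idle downlink per peer: {AVG_DOWNLINK_KBPS - avg_download_kbps:.0f} Kbps")
# A 384 Kbps ceiling falls well short of even a modest Web-quality stream.
```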
Somewhat better results can be
achieved with a hybrid approach, leveraging P2P as an extension of a distributed delivery network. In particular,
P2P can help reduce overall distribution costs in certain situations. Because the capacity of the P2P network
is limited, however, the architecture
of the non-P2P portion of the network
still governs overall performance and
scalability.
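A similarly simplified sketch (all numbers hypothetical) shows why the non-P2P tier still governs a hybrid design: the P2P tier can offload only what peer uplinks can carry, and the delivery network must supply the remainder.

```python
# Hypothetical hybrid-delivery model: P2P offloads what peer uplinks
# can carry; the non-P2P delivery network must serve the rest.

def cdn_load_kbps(viewers, stream_kbps, avg_uplink_kbps):
    """Return the throughput the non-P2P tier must still deliver."""
    demand = viewers * stream_kbps            # total viewer demand
    p2p_capacity = viewers * avg_uplink_kbps  # ceiling on P2P contribution
    return max(demand - p2p_capacity, 0)

# Example: 10,000 viewers of a 1.5 Mbps stream, 384 Kbps average uplink.
remaining = cdn_load_kbps(10_000, 1_500, 384)
print(f"Delivery network must still serve {remaining / 1_000_000:.1f} Gbps")
# P2P trims distribution cost, but scale still hinges on the non-P2P tier.
```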
Each of these four network architectures has its trade-offs, but ultimately,
for delivering rich media to a global
Web audience, a highly distributed architecture provides the only robust solution for delivering commercial-grade
performance, reliability, and scale.
Application Acceleration
Historically, content-delivery solutions
have focused on the offloading and delivery of static content, and thus far we
have focused our conversation on the
same. As Web sites become increasingly dynamic, personalized, and application-driven, however, the ability to accelerate uncacheable content becomes
equally critical to delivering a strong
end-user experience.
Ajax, Flash, and other RIA (rich Internet application) technologies work
to enhance Web application responsiveness on the browser side, but ultimately, these types of applications
all still require significant numbers of
round-trips back to the origin server.
This makes them highly susceptible
to all the bottlenecks I’ve mentioned
before: peering-point congestion, network latency, poor routing, and Internet outages.
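As a rough illustration (the round-trip count, server time, and latencies below are hypothetical), the delay a user perceives in an Ajax-style interaction grows roughly linearly with the number of origin round-trips, which is why shortening each round-trip matters so much.

```python
# Hypothetical estimate of how origin round-trips dominate an
# Ajax/RIA interaction. All counts and latencies are illustrative.

def interaction_delay_ms(round_trips, rtt_ms, server_time_ms=50):
    """Crude model: each request pays a full round-trip plus server time."""
    return round_trips * (rtt_ms + server_time_ms)

# Nearby edge vs. cross-country vs. intercontinental origin.
for rtt in (20, 100, 250):
    print(f"RTT {rtt:>3} ms -> {interaction_delay_ms(15, rtt)} ms for 15 round-trips")
# Delay scales with distance to the origin, which is exactly what a
# highly distributed infrastructure aims to shrink.
```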
Speeding up these round-trips is a
complex problem, but many optimizations are made possible by using a
highly distributed infrastructure.
Optimization 1: Reduce transport-layer overhead. Architected for reliability over efficiency, protocols such as
TCP have substantial overhead. They
require multiple round-trips (between
the two communicating parties) to set