[Figure 5: throughput vs. amount of pipelining (packets).]
Results can be seen in Figure 5. CCN requires five times
the pipelining of TCP (20 packets vs. 4) to reach its throughput asymptote. This is an artifact of the additional store-and-forward stages introduced by our prototype's totally
unoptimized user-level implementation vs. Linux TCP's
highly optimized in-kernel implementation. TCP throughput asymptotes to 90% of the link bandwidth, reflecting its
header overhead (payload-to-packet-size ratio). CCN asymptotes to 68% of the link bandwidth. Since this test encapsulates CCN in IP/UDP, it has all the overhead of the TCP test
plus an additional 22% for its own headers. Thus, for this
example, the bulk data transfer efficiency of CCN is comparable to TCP but lower due to its larger header overhead.m
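The header-overhead arithmetic above can be reproduced with a small calculation. A minimal sketch follows; the byte counts are illustrative assumptions chosen to land near the measured percentages, not the prototype's actual header sizes.

```python
def link_efficiency(payload: int, overhead: int) -> float:
    """Fraction of link bandwidth carrying payload, given
    per-packet overhead (headers, encapsulation) in bytes."""
    return payload / (payload + overhead)

# Illustrative numbers (assumptions, not measured values): a
# 1400-byte payload with ~150 bytes of TCP/IP framing overhead,
# vs. the same payload carried as CCN-in-UDP/IP with additional
# CCN headers and security annotation.
payload = 1400
tcp_overhead = 150
ccn_extra = 450  # assumed CCN header + signature bytes

print(f"TCP-like efficiency: {link_efficiency(payload, tcp_overhead):.0%}")
print(f"CCN-like efficiency: {link_efficiency(payload, tcp_overhead + ccn_extra):.0%}")
```

With these assumed sizes the model gives roughly 90% and 70%, in the neighborhood of the 90% and 68% asymptotes observed in the test.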
5.2. Content distribution efficiency
The preceding sections compared CCN vs. TCP performance
when CCN is used as a drop-in replacement for TCP, i.e., for
point-to-point conversations with no data sharing. However,
a major strength of CCN is that it offers automatic, transparent sharing of all data, essentially giving the performance of
an optimally situated web proxy for all content but requiring
no pre-arrangement or configuration.
To measure sharing performance we compared the total
time taken to simultaneously retrieve multiple copies of a
large data file over a network bottleneck using TCP and CCN.
The test configuration is shown in the inset of Figure 6 and
consisted of a source node connected over a 10 Mbps shared
link to a cluster of six sink nodes all interconnected via
1 Gbps links.n The machines were of various architectures
(Intel, AMD, PowerPC G5) and operating systems (Mac OS X
10.5.8, FreeBSD 7.2, NetBSD 5.0.1, Linux 2.6.27).
The sinks simultaneously pulled a 6MB data file from the
source. For the TCP tests this file was made available via an
http server on the source and retrieved by the sinks using
curl. For the CCN tests this file was pre-staged as described
in Section 5.1. For each test, the contents of the entire file
were retrieved and we recorded the elapsed time for the last
node to complete the task.

m Most of the CCN header size increase vs. TCP is due to its security annotation (signature, witness, and key locator).
n We used a 10 Mbps bottleneck link to clearly show saturation behavior, even with only a small number of nodes.

[Figure 6: Total transfer time vs. the number of sinks (total download time in seconds vs. number of clients).]

Multiple trials were run for each
test configuration, varying the particular machines which
participated as sinks.
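A measurement harness of this kind can be sketched as follows. This is an illustrative sketch only: the real tests ran on separate physical sinks, whereas this version uses threads on one host, and the URL argument is a hypothetical placeholder.

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> float:
    """Retrieve url with curl, discarding the body, and return
    the elapsed wall-clock time for this one sink."""
    start = time.monotonic()
    subprocess.run(["curl", "-s", "-o", "/dev/null", url], check=True)
    return time.monotonic() - start

def last_finisher(url: str, sinks: int) -> float:
    """Start all sinks at (approximately) the same time; the test
    metric is the elapsed time for the last node to complete,
    i.e. the maximum of the per-sink times."""
    with ThreadPoolExecutor(max_workers=sinks) as pool:
        times = list(pool.map(fetch, [url] * sinks))
    return max(times)
```

A run such as `last_finisher("http://source/file", 6)` would then correspond to one data point in Figure 6.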
Test results are shown in Figure 6. With a single sink,
TCP’s better header efficiency allows it to complete faster
than CCN. But as the number of sinks increases, TCP’s completion time increases linearly while the CCN performance
stays constant. Note that since the performance penalty of
using CCN vs. TCP is around 20% while the performance
gain from sharing is integer multiples, there is a net performance win from using CCN even when sharing ratios
(hit rates) are low. The win is actually much larger than it
appears from this test because it applies, independently,
at every link in the network and completely alleviates the
traffic concentrations we now see at popular content hubs
and major peering points. For example, today a popular
YouTube video will traverse the link between youtube.com
and its ISP millions of times. If the video were distributed
via CCN it would cross that link once. With the current
architecture, peak traffic loads at aggregation points scale
like the total consumption rate of popular content. With
CCN they scale like the popular content creation rate, a
number that, today, is exponentially lower.
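The scaling argument above can be captured in a toy model of the bytes that must cross the bottleneck link. This is a back-of-the-envelope illustration of the sharing effect, not a reproduction of the measured results.

```python
def bottleneck_bytes(file_size: int, sinks: int, shared: bool) -> int:
    """Bytes that must cross the bottleneck link to serve every sink.
    Without sharing (TCP), each sink pulls its own copy across the
    link; with CCN-style sharing, the file crosses the link once and
    is re-used for all downstream sinks."""
    return file_size if shared else file_size * sinks

FILE = 6 * 1024 * 1024  # 6 MB file, as in the test
for n in (1, 2, 4, 6):
    tcp = bottleneck_bytes(FILE, n, shared=False)
    ccn = bottleneck_bytes(FILE, n, shared=True)
    print(f"{n} sinks: TCP crosses the link {tcp // ccn}x, CCN 1x")
```

This is why TCP completion time grows linearly with the number of sinks while CCN's stays flat: the bottleneck carries N copies in one case and one copy in the other.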
5.3. Voice-over-CCN and the strategy layer
To demonstrate how CCN can support arbitrary point-to-point protocols we have implemented Voice-over-IP
(VoIP) on top of CCN (VoCCN). Complete details and performance measurements are given in Jacobson et al.10 In this
section we describe a test that uses a VoCCN call to demonstrate the behavior and advantages of CCN’s strategy layer.
As described in Section 3.3, when the FIB contains
multiple faces for a content prefix, the strategy layer dynamically chooses the best. It can do this because CCN can send
the same Interest out multiple faces (since there is no
danger of looping) and because a CCN node is guaranteed
to see the Data sent in response to its Interest (unlike IP
where the request and response paths may be almost entirely
disjoint). These two properties allow the strategy layer to run
experiments where an Interest is occasionally sent out
all faces associated with the prefix. If a face responds faster
than the current best, it will become the new best and be used