ADUs are dropped; once spatial quality nears its minimum, further reductions in bit rate force base ADUs to be dropped, which means dropping entire frames (lower temporal quality). Therefore, in the congested settings used for testing, the temporal quality (that is, frame rate) is the quality measure.
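To make that adaptation order concrete, the sketch below (in C, with hypothetical names; it is not Paceline's or QStream's actual API) keeps the most important ADUs that fit within a byte budget derived from the bit rate, so spatial enhancements are shed first and base ADUs, and hence whole frames, go only when the budget is very tight.

```c
/* Minimal sketch of importance-ordered ADU dropping; names are illustrative,
 * not Paceline's actual API. Enhancement ADUs carry lower importance than
 * base ADUs, so they are shed first as the bit-rate budget shrinks. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int    importance;  /* higher value = more important (base layer) */
    size_t bytes;       /* encoded size of this ADU                   */
    int    keep;        /* set to 0 when the ADU is dropped           */
} adu_t;

static int by_importance_desc(const void *a, const void *b)
{
    return ((const adu_t *)b)->importance - ((const adu_t *)a)->importance;
}

/* Keep the most important ADUs that fit in byte_budget; drop the rest. */
static void shed_to_budget(adu_t *adus, size_t n, size_t byte_budget)
{
    qsort(adus, n, sizeof *adus, by_importance_desc);
    size_t used = 0;
    for (size_t i = 0; i < n; i++) {
        if (used + adus[i].bytes <= byte_budget) {
            adus[i].keep = 1;
            used += adus[i].bytes;
        } else {
            adus[i].keep = 0;  /* enhancements go first; with a tight enough
                                  budget, base ADUs (whole frames) go too */
        }
    }
}

int main(void)
{
    adu_t frame[] = {
        { .importance = 10, .bytes = 4000 },  /* base layer            */
        { .importance =  5, .bytes = 3000 },  /* spatial enhancement 1 */
        { .importance =  1, .bytes = 3000 },  /* spatial enhancement 2 */
    };
    shed_to_budget(frame, 3, 7000);
    for (int i = 0; i < 3; i++)
        printf("importance %d: %s\n", frame[i].importance,
               frame[i].keep ? "sent" : "dropped");
    return 0;
}
```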
Figure 5 shows the average frame
rate as the latency threshold is varied. Notice that on the rightmost side
of the graph with the highest latency
thresholds (tens of seconds), all transports achieve full temporal quality of
the video (30fps). Moving leftward (toward lower latency thresholds), the temporal quality under TCP drops much more rapidly. Even though TCP delivers high throughput, its high transport latency causes frequent head-of-line blocking delays between low-importance ADUs (spatial enhancements) and high-importance ADUs (base layers). This translates into dropped frames and a much lower frame rate.
The trends exhibited by SST and
Paceline are very similar. Recall that
SST’s implementation completely
avoids transport queuing delays.
Comparing temporal qualities of
Paceline and SST, we see that Paceline also eliminates most TCP sender-side queuing delays. The knees of
the Paceline and SST curves in the
100ms–200ms zone indicate that even
in this heavily congested network, it
is possible for an application such as
videoconferencing to keep within the
zone of reasonable interactivity with a
modest impact on quality. On the other hand, using TCP as the transport
results in quality not increasing substantially until well over the 500ms
point, which is probably not acceptable for comfortable interaction.
Importance effects on latency. Up
to this point, Paceline was shown to fall within a zone of responsiveness similar to that of clean-slate protocols such as SST. This section investigates the
effects of importance on message latency. Messages are spread into buckets according to their importance,
and the one-way end-to-end latency of
the delivered messages is measured
in each bucket. Figure 6a presents the median latency, and Figure 6b presents the 99.9th percentile latency.
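As a rough sketch of how such per-bucket statistics can be computed (illustrative only; the sample values below are made up), the one-way latency samples collected for one importance bucket can be sorted and the median and 99.9th percentile read off by nearest rank:

```c
/* Sketch of per-bucket latency statistics (illustrative only): one-way
 * latencies are collected per importance bucket, then the median and 99.9th
 * percentile are read off the sorted samples by nearest rank. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Nearest-rank percentile of n latency samples (p in [0,100]). */
static double percentile(double *samples, size_t n, double p)
{
    qsort(samples, n, sizeof *samples, cmp_double);
    size_t rank = (size_t)((p / 100.0) * (n - 1) + 0.5);
    return samples[rank];
}

int main(void)
{
    /* one bucket of one-way latencies, in milliseconds (made-up data) */
    double bucket[] = { 74, 76, 75, 80, 79, 77, 410, 75, 78, 76 };
    size_t n = sizeof bucket / sizeof bucket[0];
    printf("median = %.1f ms\n", percentile(bucket, n, 50.0));
    printf("p99.9  = %.1f ms\n", percentile(bucket, n, 99.9));
    return 0;
}
```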
As shown in Figure 6, both TCP (with
adaptation) and Paceline have lower
median and worst-case latency for important data, with an improvement of
more than a factor of two over less-important data. Since TCP commits messages in the kernel send buffer, TCP
flows have higher overall latency with a
median that is well above the expected
latency (275ms). Paceline, on the other
hand, keeps the median latency very
close to the one-way delay (75ms) for
more important data. Paceline also has
consistent 99.9th percentile latency
due to failover, which is close to 400ms
for all messages. The 99.9th percentile
latency in TCP is above a second for the
majority of messages, reaching almost
1.8 seconds in some cases.
We evaluated quality fairness across
streams in Paceline. Video quality is
defined by the temporal quality (fps).
Figure 7 plots the frame rate of three
videos over time. The videos (transferred over three streams) display with identical quality (in terms of frame rate), which changes with network conditions. It is interesting to note
that streams were allocated different
bandwidth shares in the same period
to achieve equal quality.
Quality fairness in the Paceline
model is completely controlled by application-quality metrics. We provide
applications with the notion of importance to control adaptation within
streams. Weights and virtual time, on
the other hand, specify importance
across stream boundaries.
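A minimal sketch of the cross-stream mechanism, assuming a start-time-fair-queuing-style scheduler (the names below are illustrative, not Paceline's actual code): each stream accumulates virtual time in proportion to bytes sent divided by its weight, and the stream with the smallest virtual time sends next, so a stream with twice the weight receives roughly twice the bandwidth.

```c
/* Rough sketch of scheduling across streams by weights and virtual time;
 * illustrative only, not Paceline's actual scheduler. A stream's virtual
 * time advances by bytes/weight, so a higher-weight stream pays less
 * virtual cost per byte and is picked more often. */
#include <stdio.h>

typedef struct {
    const char *name;
    double weight;        /* relative share across stream boundaries */
    double virtual_time;  /* accumulated bytes / weight              */
} stream_t;

/* Pick the stream with the smallest virtual time, then charge it for
 * sending `bytes`. Within a stream, importance would still decide which
 * message goes next (not shown). */
static stream_t *pick_and_charge(stream_t *s, int n, double bytes)
{
    stream_t *best = &s[0];
    for (int i = 1; i < n; i++)
        if (s[i].virtual_time < best->virtual_time)
            best = &s[i];
    best->virtual_time += bytes / best->weight;
    return best;
}

int main(void)
{
    stream_t streams[] = {
        { "video-1", 1.0, 0.0 },
        { "video-2", 1.0, 0.0 },
        { "video-3", 2.0, 0.0 },  /* twice the weight of the others */
    };
    for (int i = 0; i < 8; i++)
        printf("send 1000B on %s\n",
               pick_and_charge(streams, 3, 1000.0)->name);
    return 0;
}
```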
Limitations and Future Work
The Paceline implementation is written in C, and we have been mindful of
performance and efficiency from the
start. Using QStream, a complete end-to-end implementation of adaptive
video streaming, was helpful because
the application provided visual and
quantitative feedback directly connected to each performance change.
Later Paceline was used in research on
massive scale gaming, which revealed
performance weaknesses not apparent
in the video setting. Prominent among these is that certain elements of game traffic (state updates) involve very high volumes of small messages, and keeping processing overhead down in this setting is a challenge, particularly the load placed on dynamic memory allocators. We have a design that reduces Paceline's memory allocation to at most one per application-level message.
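A minimal sketch of that idea, assuming a C99 flexible array member (the types below are illustrative, not Paceline's actual code): the queue link, header fields, and payload share a single malloc, so each application-level message costs exactly one allocation and one free.

```c
/* Sketch of "one allocation per application-level message" using a flexible
 * array member; an assumption about how such a design could look, not
 * Paceline's code. Header, queue link, and payload live in one malloc. */
#include <stdlib.h>
#include <string.h>

typedef struct msg {
    struct msg *next;      /* intrusive queue link: no separate node alloc */
    unsigned    importance;
    size_t      len;
    char        payload[]; /* flexible array member, allocated inline      */
} msg_t;

static msg_t *msg_new(unsigned importance, const void *data, size_t len)
{
    msg_t *m = malloc(sizeof *m + len);   /* the only allocation */
    if (!m)
        return NULL;
    m->next = NULL;
    m->importance = importance;
    m->len = len;
    memcpy(m->payload, data, len);
    return m;
}

int main(void)
{
    msg_t *m = msg_new(7, "state-update", 12);
    free(m);                              /* one free per message, too */
    return 0;
}
```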
Current transport protocols such as SPDY embrace an all-SSL-all-the-time methodology, motivated partly by security and partly by the need to mitigate the myriad problems caused by middleboxes that are intolerant (intentionally or not) of new protocols. In hindsight, it would have been useful to consider SSL integration from the early stages of Paceline's design. Paceline can perform SSL negotiation at the channel level, amortizing the cost of the initial negotiation. We also need to ensure that encryption happens only when messages are written to the socket, so that canceling a message never requires discarding already-encrypted data.
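The sketch below illustrates that ordering (an assumption about how such a design could look, not Paceline's implementation): messages wait in a plaintext queue where cancellation is cheap, and the encrypt-and-write step (for example, SSL_write on an OpenSSL-based channel) runs only when a message is finally drained to the socket, so no encrypted bytes ever need to be unsent.

```c
/* Illustrative sketch of deferring encryption until socket-write time.
 * Cancelled messages are dropped from the plaintext queue before any
 * encryption happens. Names are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct pmsg {
    struct pmsg *next;
    int          cancelled;   /* set by the application to cancel */
    size_t       len;
    char         payload[];
} pmsg_t;

/* Stand-in for the encrypt-and-write step (e.g., SSL_write on the channel). */
static void encrypt_and_send(const char *buf, size_t len)
{
    printf("encrypting and sending %zu bytes: %.*s\n", len, (int)len, buf);
}

/* Drain the plaintext queue: cancelled messages are freed without ever
 * being encrypted, so no ciphertext has to be unsent. */
static void flush_queue(pmsg_t **head)
{
    while (*head) {
        pmsg_t *m = *head;
        *head = m->next;
        if (!m->cancelled)
            encrypt_and_send(m->payload, m->len);
        free(m);
    }
}

static pmsg_t *enqueue(pmsg_t **head, const char *data, size_t len)
{
    pmsg_t *m = malloc(sizeof *m + len);
    m->next = *head;          /* LIFO for brevity; a real queue is FIFO */
    m->cancelled = 0;
    m->len = len;
    memcpy(m->payload, data, len);
    *head = m;
    return m;
}

int main(void)
{
    pmsg_t *q = NULL;
    pmsg_t *stale = enqueue(&q, "stale frame", 11);
    enqueue(&q, "fresh frame", 11);
    stale->cancelled = 1;     /* cancel before the socket write: no cost */
    flush_queue(&q);
    return 0;
}
```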
Figure 7. Quality fairness policy: temporal quality (frames per second) over time (s) for Stream 1, Stream 2, and Stream 3.