mouse clicks, and position of scroll bar
to derive the adaptation policies. Similarly, FPS (first-person shooter) games
have limited upload bandwidth to send
frequent updates to all game players,
especially in epic fights with a large
number of players concentrated in one
area. Games can reduce the bandwidth
requirements using criteria such as
proximity, recency, and aim, because
players are most likely interested in
nearby players or those with whom
they have recently interacted.
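A toy sketch of such interest-based filtering follows; the scoring function, its weights, and all names are assumptions for illustration, not any game's actual policy.

```python
import math

def update_priority(me, other, now, last_interaction, aim_vector):
    """Score how interesting another player's state update is, combining
    proximity, recency of interaction, and aim. Weights are arbitrary
    illustrative choices."""
    dx, dy = other[0] - me[0], other[1] - me[1]
    dist = math.hypot(dx, dy)
    proximity = 1.0 / (1.0 + dist)                 # nearer -> higher
    recency = math.exp(-(now - last_interaction))  # recent interaction -> higher
    # Aim: cosine between my aim direction and the direction to the other player.
    norm = math.hypot(*aim_vector) * dist or 1.0
    aim = max(0.0, (aim_vector[0] * dx + aim_vector[1] * dy) / norm)
    return proximity + recency + aim

# A sender would transmit updates for the highest-scoring players first,
# dropping the tail when upload bandwidth runs out.
```

A nearby, recently engaged player then outranks a distant, long-quiet one, so the distant player's updates are the first to be thinned out.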
Paceline is implemented as a user-level library and is layered above
standard TCP implementations. As
depicted in Figure 4, Paceline’s architecture consists of two layers: stream
and channel. The stream layer
manages the application-message queue
in Paceline and ensures low latency
for data with more influence over
quality by enabling adaptation between messages in one stream and
across streams. This layer consists of
two subsystems: message framing and fragmentation, and stream fairness. The channel layer handles TCP’s low-level sender-side delays; it consists of the latency controller and the connection manager.
Framing and Fragmentation: Which Chunk to Send?
Fragmentation is the first of several techniques that improve transport latency. Paceline allows application-level messages of arbitrary size. To decouple the transmission delay of potentially large application messages from lower-level queuing delays, the data-transfer mechanism supports sender-side fragmentation of application messages into Paceline chunks and receiver-side reassembly of chunks back into the original application messages. Chunks are bounded to a small size, typically a fraction of TCP’s maximum segment size.

Paceline includes application-level message queues. Unlike lower-level queues that operate in FIFO order, Paceline’s message queues are priority based, so that chunks of newly arrived important messages may quickly preempt older, less-important ones. Chunks of high-importance messages are therefore released to the network sooner and observe minimal queuing inside Paceline, as well as minimal application-level transmission delay. Cancellation allows the application to abort a low-importance message if its overall transmission delay grows too large.

Stream Fairness: From Which Stream?
Paceline messages (and chunks) are part of a full-duplex stream. Each stream in Paceline has a separate priority queue with chunks ready to be sent.

Figure 4. Paceline architecture.
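The fragmentation, priority preemption, and cancellation mechanics might be sketched as follows; the chunk size, class name, and heap-based layout are assumptions of this illustration, not Paceline's actual data structures.

```python
import heapq
import itertools

CHUNK_SIZE = 512  # bytes; an assumed bound, a fraction of a typical TCP MSS

class PriorityMessageQueue:
    """Sender-side queue: fragments messages into chunks and always
    releases a chunk of the currently most important live message."""

    def __init__(self):
        self._heap = []               # entries: (-importance, seq, msg_id, chunks)
        self._seq = itertools.count() # FIFO tie-break among equal importance
        self._cancelled = set()

    def enqueue(self, msg_id, importance, payload: bytes):
        """Fragment an arbitrary-size message into bounded chunks."""
        chunks = [payload[i:i + CHUNK_SIZE]
                  for i in range(0, len(payload), CHUNK_SIZE)]
        # Higher importance -> smaller heap key -> dequeued first, so chunks
        # of a newly arrived important message preempt older, lesser ones.
        heapq.heappush(self._heap, (-importance, next(self._seq), msg_id, chunks))

    def cancel(self, msg_id):
        """Abort a message whose transmission delay grew too large."""
        self._cancelled.add(msg_id)

    def next_chunk(self):
        """Return (msg_id, chunk) for the most important pending message."""
        while self._heap:
            _, _, msg_id, chunks = self._heap[0]
            if msg_id in self._cancelled or not chunks:
                heapq.heappop(self._heap)
                continue
            chunk = chunks.pop(0)
            if not chunks:
                heapq.heappop(self._heap)
            return msg_id, chunk
        return None
```

With this shape, an urgent short message enqueued behind a large bulk transfer still gets its chunk onto the wire first, and a cancelled bulk message simply stops yielding chunks.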
For fair and timely communication
across concurrent streams, Paceline
supports two notions of fairness: quality and resource fairness. Resource fairness guarantees fair bandwidth across
streams, while quality fairness ensures
fair application-level quality. Quality is specified in generic terms but derived from the application level; examples are the frames per second in videoconferencing or the updates per second in a game.
While FIFO or round-robin policies
are simple ways of multiplexing data
of different streams over the underlying channel, timeliness necessitates
a better notion of fairness among
concurrent streams, especially when
bandwidth is limited. Paceline implements a fair-sharing policy among active streams, inspired by weighted fair queuing. Each stream has
a cumulative virtual time, an increasing counter quantifying the resources
a stream (messages) has used since it
was created. Active streams are organized in order of their virtual time, and
chunks are sent from the stream with
the minimum virtual time. The important factor regulating how streams are
multiplexed is how their virtual times
are initialized and adjusted.
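A miniature sketch of this bookkeeping, assuming plain byte-based resource fairness and illustrative class and method names (the two initialization rules it encodes are spelled out in the text):

```python
class Stream:
    def __init__(self, stream_id):
        self.id = stream_id
        self.vt = 0.0       # cumulative virtual time
        self.pending = []   # queued chunks (bytes)

class FairScheduler:
    """Weighted-fair-queuing-style multiplexer: always send from the
    active stream with the minimum virtual time."""

    def __init__(self):
        self.active = {}    # id -> Stream with pending data
        self.idle = {}      # id -> Stream without pending data

    def _min_active_vt(self):
        return min(s.vt for s in self.active.values()) if self.active else None

    def create_stream(self, stream_id):
        # Rule 1 (Fair Start): start at the minimum active virtual time;
        # if none are active, at the maximum idle virtual time, else zero.
        s = Stream(stream_id)
        m = self._min_active_vt()
        if m is None:
            m = max((t.vt for t in self.idle.values()), default=0.0)
        s.vt = m
        self.idle[stream_id] = s
        return s

    def enqueue(self, stream_id, chunk: bytes):
        s = self.idle.pop(stream_id, None)
        if s is not None:
            # Rule 2 (Use It or Lose It): an idle stream cannot bank credit.
            m = self._min_active_vt()
            if m is not None:
                s.vt = max(s.vt, m)
            self.active[stream_id] = s
        self.active[stream_id].pending.append(chunk)

    def next_chunk(self):
        """Send one chunk from the minimum-virtual-time active stream,
        charging it by bytes sent (plain resource fairness)."""
        if not self.active:
            return None
        s = min(self.active.values(), key=lambda t: t.vt)
        chunk = s.pending.pop(0)
        s.vt += len(chunk)
        if not s.pending:
            self.idle[s.id] = self.active.pop(s.id)
        return s.id, chunk
```

Charging virtual time by message size yields resource fairness; charging by a per-message quality cost instead would yield the quality fairness described earlier.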
Virtual time initialization is based
on two rules:
˲ Rule 1 (Fair Start). When a stream is created, its virtual time is set to the
minimum virtual time of all active
streams to ensure that existing streams
do not starve until the new stream
catches up in virtual time. If no active
streams exist, the virtual time of the
newly entered stream is set to the maximum time of all idle streams, or zero if
this is the only stream.
˲ Rule 2 (Use It Or Lose It). If a stream
becomes active after being idle, the
stream’s virtual time is set to the maximum of its virtual time and the minimum virtual time of all active streams.
This guarantees that no stream, while idle, can save its share of resources for later use.
When a stream transmits a message
over the underlying channel, its virtual time is updated according to the
virtual times of messages sent, which
are based on the application fairness
criteria. Conventional resource fairness increments the virtual time based
on the size of each message transmit-