loss rates. CUBIC's loss tolerance is a structural property of the algorithm, while BBR's is a configuration parameter. As BBR's loss rate approaches the ProbeBW peak gain, the probability of measuring a delivery rate equal to the true BtlBw drops sharply, causing the max filter to underestimate.
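To see why the ProbeBW peak gain sets the tolerance threshold, consider a back-of-envelope sketch. The 1.25 value is BBR's published ProbeBW peak pacing gain; the calculation itself is a simplification assuming uniform random loss:

```python
# Illustrative sketch: with random loss rate p, a ProbeBW cycle that sends at
# peak_gain * BtlBw delivers about peak_gain * (1 - p) * BtlBw.  The max filter
# can only recover the true BtlBw while at least one sample reaches it, i.e.
# while peak_gain * (1 - p) >= 1.
peak_gain = 1.25  # BBR's ProbeBW peak pacing gain

def measured_fraction(loss_rate, gain=peak_gain):
    """Fraction of BtlBw a delivery-rate sample can reach under random loss."""
    return min(gain * (1.0 - loss_rate), 1.0)

tolerance = 1.0 - 1.0 / peak_gain  # loss rate beyond which no sample hits BtlBw

print(f"loss tolerance ~ {tolerance:.0%}")  # ~20% for a 1.25 gain
for p in (0.01, 0.05, 0.15, 0.25):
    print(f"p={p:.0%}: max filter sees at most {measured_fraction(p):.0%} of BtlBw")
```

Under this simplification, a 1.25 gain tolerates up to 20% loss before every sample falls short of BtlBw, which is consistent with the measured behavior described next.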
Figure 10 shows BBR vs. CUBIC goodput for 60-second flows on a 100Mbps/100ms link with 0.001% to 50% random loss. CUBIC's goodput drops by 10x at 0.1% loss and stalls completely above 1%. The maximum possible throughput is the link rate times the fraction delivered (= 1 − lossRate). BBR meets this limit up to 5% loss and stays close to it up to 15%.
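The upper bound above is just the identity goodput = linkRate × (1 − lossRate); a minimal sketch for the figure's 100Mbps link (the loss points are illustrative samples from the tested range):

```python
LINK_MBPS = 100.0  # bottleneck rate from Figure 10's setup

def ideal_goodput_mbps(loss_rate, link_mbps=LINK_MBPS):
    """Upper bound on goodput: link rate times fraction of packets delivered."""
    return link_mbps * (1.0 - loss_rate)

for p in (0.001, 0.01, 0.05, 0.15, 0.50):
    print(f"loss {p:.1%}: at most {ideal_goodput_mbps(p):.1f} Mbps")
```

BBR tracks this line up to 5% loss; CUBIC falls far below it at loss rates three orders of magnitude smaller.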
YouTube Edge
BBR is being deployed on Google.com
and YouTube video servers. Google
is running small-scale experiments
in which a small percentage of users
are randomly assigned either BBR or
CUBIC. Playbacks using BBR show
significant improvement in all of
YouTube’s quality-of-experience metrics, possibly because BBR’s behavior
is more consistent and predictable.
BBR only slightly improves connection throughput because YouTube
already adapts the server’s streaming
rate to well below BtlBw to minimize
bufferbloat and rebuffer events. Even
so, BBR reduces median RTT by 53%
on average globally and by more than
80% in the developing world. Figure
11 shows BBR vs. CUBIC median RTT
improvement from more than 200
million YouTube playback connections measured on five continents
over a week.
More than half of the world's seven billion mobile Internet subscriptions connect via 8kbps to 114kbps 2.5G systems,5 which suffer well-documented problems because of loss-based congestion control's buffer-filling propensities.3 The bottleneck link for these systems is usually between the SGSN (serving GPRS support node)18 and mobile device. SGSN software runs on a standard PC platform with ample memory, so there are frequently megabytes of buffer between the Internet and mobile device. Figure 12 compares (emulated) SGSN Internet-to-mobile delay for BBR and CUBIC.
The horizontal lines mark one of the more serious consequences: TCP adapts to long RTT delay except on the connection-initiation SYN packet, which has an OS-dependent fixed timeout. When the mobile device is receiving bulk data (for example, from automatic app updates) via a large-buffered SGSN, the device cannot connect to anything on the Internet until the queue empties (the SYN ACK accept packet is delayed for longer than the fixed SYN timeout).
Figure 12 shows steady-state median RTT variation with link buffer
size on a 128Kbps/40ms link with eight
BBR (green) or CUBIC (red) flows. BBR
keeps the queue near its minimum, independent of both bottleneck buffer
size and number of active flows. CUBIC
flows always fill the buffer, so the delay
grows linearly with buffer size.
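The linear growth CUBIC exhibits is just the drain time of a full buffer: queued bits divided by link rate. A minimal sketch for the figure's 128Kbps link, using illustrative buffer sizes in KB (assumed values spanning a range like the figure's axis):

```python
LINK_BPS = 128_000  # 128Kbps bottleneck from the figure's setup

def full_buffer_delay_s(buffer_kbytes, link_bps=LINK_BPS):
    """Seconds to drain a full buffer: queued bits / link rate (1 KB = 1000 B)."""
    return buffer_kbytes * 1000 * 8 / link_bps

# Illustrative buffer sizes (KB); a multi-megabyte SGSN buffer sits at the top.
for kb in (150, 750, 1500, 3000, 9750):
    print(f"{kb:>5} KB buffer: ~{full_buffer_delay_s(kb):6.1f} s of queuing delay")
```

A flow that keeps such a buffer full delays every other packet, including SYN ACKs, by minutes at the larger sizes, while BBR's queue, and hence its delay, stays near the link's minimum RTT regardless of buffer size.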
Figure 10. BBR vs. CUBIC goodput under loss. [Plot: goodput vs. loss rate (%), log scale, 0.001% to 50%.]
Figure 11. BBR vs. CUBIC median RTT improvement. [Plot: CUBIC RTT (sec.), 0 to 10.]
Figure 12. Steady-state median RTT variation with link buffer size. [Plot: RTT vs. buffer size, 150 to 9750; horizontal lines mark where new connections fail in Linux/Android and in Windows/Mac OS/iOS.]