in both downlinks and uplinks in all
broadband technologies. Since Netalyzr
tops out at 20Mbps and bounds the test
length at five seconds, the situation is
clearly worse than shown.
Focusing only on cable customers,
the same study showed the equipment
had two dominant buffer sizes: 128KB
and 256KB (for reference, a 3Mbps
uplink would take 340ms to empty
a 128KB buffer; and a 1Mbps uplink
would take about one second). The
Netalyzr authors note the difficulty of
sizing buffers for the wide range of operational access rates, both from different service levels and from dynamically
varying rates. Case closed.
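The drain-time arithmetic above is simple to check: a full buffer takes (size in bits) ÷ (uplink rate in bits per second) to empty. A minimal sketch, using the 128KB/256KB buffer sizes and 3Mbps/1Mbps uplink rates quoted above (the quoted 340ms figure corresponds to decimal rather than binary kilobytes):

```python
def drain_time_ms(buffer_bytes: int, rate_bps: int) -> float:
    """Milliseconds needed to empty a full buffer at a given uplink rate."""
    return buffer_bytes * 8 / rate_bps * 1000

KB = 1024
print(drain_time_ms(128 * KB, 3_000_000))  # 128KB at 3Mbps: ~350ms
print(drain_time_ms(128 * KB, 1_000_000))  # 128KB at 1Mbps: ~1,049ms, about one second
print(drain_time_ms(256 * KB, 1_000_000))  # 256KB at 1Mbps: over two seconds
```

Either buffer size, fixed at manufacture, gives a queueing delay that swings wildly with the subscriber's actual uplink rate, which is the sizing difficulty the Netalyzr authors point out.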
Where There’s Smoke, There’s Usually Fire
Observation of 8-second
latency at my home router sparked
installation of OpenWrt
(www.openwrt.org) for further investigation. I set the router’s transmit
queue to zero but saw no effect on
latency. The WiFi
link from my laptop was of poor quality (resulting in a bandwidth of around
1Mbps), so the bottleneck link was my
WiFi link—and since my test was an
upload, the bottleneck was in my laptop rather than in my router! I finally
realized that AQM is not just for routers; outbound bottlenecks could easily
be at the host’s queue, and WiFi is now
frequently the bottleneck link.
Manipulating the Linux transmit
queue on my laptop reduced latency
about 80%; clearly, additional buff-
ering was occurring somewhere.
“Smart” network interface chips today
usually support large (on the order of
256 packets) ring buffers that have
been adjusted to maximize through-
put over high-bandwidth paths on all
operating systems. At the lowest WiFi
rate of 1Mbps, this can add three sec-
onds of delay. Device-driver ring buf-
fers need careful management, as do
all other buffers in operating systems.
A single 1,500-byte packet represents
12ms of latency at 1Mbps; clearly, the
amount of buffering must adjust dynamically, and quickly, over two orders
of magnitude of link rate so as not to
sacrifice bandwidth or latency.
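That two-orders-of-magnitude swing can be made concrete with a sketch. The per-packet figure and the 256-packet ring are from the text above; the 100ms delay target and the 100Mbps comparison rate are illustrative assumptions, not figures from the text:

```python
PACKET_BITS = 1500 * 8  # one full-size 1,500-byte frame

def packet_delay_ms(rate_bps: int) -> float:
    """Serialization delay of one full-size packet at the given link rate."""
    return PACKET_BITS / rate_bps * 1000

def max_packets_for_delay(rate_bps: int, target_ms: float) -> int:
    """Most full-size packets a queue can hold without exceeding target_ms."""
    return int(target_ms / packet_delay_ms(rate_bps))

print(packet_delay_ms(1_000_000))               # 12ms per packet at 1Mbps
print(256 * packet_delay_ms(1_000_000))         # 256-packet ring at 1Mbps: ~3s
print(max_packets_for_delay(1_000_000, 100))    # ~8 packets keep delay under 100ms at 1Mbps
print(max_packets_for_delay(100_000_000, 100))  # ~833 packets fit the same budget at 100Mbps
```

A queue deep enough to keep a fast link busy is two orders of magnitude too deep for a slow one, so any static ring-buffer or queue length is wrong at one end of the range or the other.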
Broadband and wireless bufferbloat
are also the root causes of most of the
poor Internet performance seen at
many hotels and conferences.
Though the edge is more easily measured, there are some reports
of congestion in the core. The RED
manifesto has usually been ignored,
so there are “dark” buffers hidden all
over the Internet.
Figure 5. Plot reproduced from ICSI’s Netalyzr studies: inferred buffer capacity.
The Road to Hell Is Paved with Good Intentions