• QUIC vs. TCP. QUIC multiplexes requests over a single connection, so its designers attempted to set Cubic congestion control parameters such that one QUIC connection emulates N TCP connections (with a default of N = 2 in QUIC version 34, and N = 1 in QUIC version 37).
We found that N had little impact on fairness. As
Figure 3a shows, QUIC is unfair to TCP as predicted,
and consumes approximately twice the bottleneck
bandwidth of TCP even with N = 1. We repeated these
tests using different buffer sizes, including those
used by Carlucci et al., 8 but did not observe any significant effect on fairness. This directly contradicts
their finding that larger buffer sizes allow TCP and
QUIC to fairly share available bandwidth.
• QUIC vs. multiple TCP connections. When competing
with M TCP connections, one QUIC flow should consume N/(M + N) of the bottleneck bandwidth. However,
as shown in Table 1, QUIC still consumes more than
50% of the bottleneck bandwidth even with two competing TCP flows. Thus, QUIC is not fair to TCP even
assuming two-connection emulation.
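The fair-share arithmetic above can be sketched as a minimal model that simply counts the QUIC flow as N virtual connections (the function name is ours, for illustration only):

```python
def expected_quic_share(n_emulated: int, m_tcp: int) -> float:
    """Expected fraction of bottleneck bandwidth for one QUIC flow
    emulating n_emulated TCP connections, competing against m_tcp
    real TCP flows: each virtual or real connection gets an equal share."""
    return n_emulated / (n_emulated + m_tcp)

# With two-connection emulation (N = 2) against two TCP flows, the fair
# share is exactly 50%; consistently observing more indicates unfairness.
```

For N = 1 this reduces to the classic single-flow fair share of 1/(M + 1).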
To ensure fairness results were not an artifact of our testbed, we repeated these tests against Google servers. The
unfairness results were similar.
We further investigate why QUIC is unfair to TCP by
instrumenting the QUIC source code, and using tcpprobe5
for TCP, to extract the congestion window sizes. Figure 4a
shows the congestion window over time for the two protocols. When competing with TCP, QUIC is able to achieve a
larger congestion window. Taking a closer look at the congestion window changes (Figure 4b), we find that while
both protocols use the Cubic congestion control scheme, QUIC
increases its window more aggressively (both in terms of
slope, and in terms of more frequent window size increases).
As a result, QUIC is able to grab available bandwidth faster
than TCP does, leaving TCP unable to acquire its fair share
of the bandwidth.
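For reference, both stacks grow their windows along the standard Cubic curve; a minimal sketch of that curve follows, using the RFC 8312 default constants rather than the tuned parameters of either implementation:

```python
# Cubic window growth after a loss event (RFC 8312 shape).
# C and BETA are the RFC's default constants; real stacks may differ.
C = 0.4      # aggressiveness scaling constant
BETA = 0.7   # multiplicative-decrease factor

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window t seconds after a loss, where w_max is the
    window just before the loss. The curve starts at BETA * w_max,
    plateaus near w_max at t = k, then probes beyond it."""
    k = ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)  # time to regain w_max
    return C * (t - k) ** 3 + w_max
```

Under this model, a stack that re-evaluates the curve more frequently, or with a larger C, climbs it faster; that is the kind of difference visible in Figure 4b.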
4.3. Page load time
This section evaluates QUIC performance compared to
TCP for loading Web pages (i.e., page load time, or PLT)
with different sizes and numbers of objects. Recall from
Section 3 that we measure PLT using information gathered from Chrome, that we run TCP and QUIC experiments back-to-back, and that we conduct experiments in
a variety of emulated network settings. Note that our
servers add all necessary HTTP directives to avoid caching content. We also clear the browser cache and close
all sockets between experiments to prevent “warmed
up” connections from impacting results. However, we
do not clear the state used for QUIC’s 0-RTT connection
establishment. Furthermore, our PLTs do not include
any DNS lookups. This is achieved by extracting resource loading time details from Chrome and excluding the DNS lookup time.
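The DNS exclusion amounts to a simple subtraction over per-resource timing data; a sketch is below (the field names are illustrative, not Chrome's exact trace keys):

```python
def plt_without_dns(plt_ms: float, resources: list) -> float:
    """Subtract per-resource DNS lookup time from a raw page load time.
    Each resource is a dict with illustrative 'dns_start'/'dns_end'
    timestamps in ms; overlapping lookups are ignored for simplicity."""
    dns_ms = sum(r["dns_end"] - r["dns_start"] for r in resources)
    return plt_ms - dns_ms
```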
In the results that follow, we evaluate whether the
observed performance differences are statistically significant or simply due to noise in the environment. We use Welch's t-test,6 a two-sample location test of the hypothesis that two populations have equal means. For each scenario, we calculate the p-value according to Welch's t-test. If the p-value is smaller than our threshold (0.01), we reject the null hypothesis that the mean performance of TCP and QUIC is identical, implying that the observed difference between the two protocols is statistically significant. Otherwise, the difference is not significant and is likely due to noise.
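The test statistic itself is simple to compute; a self-contained sketch follows (the p-value would then come from the t-distribution with the returned degrees of freedom, e.g., via scipy.stats):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom for samples a and b; unlike Student's t-test, equal
    variances are not assumed."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df
```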
Desktop environment. We begin with the desktop environment and compare QUIC with TCP performance for
different rates, object sizes, and object counts—without
adding extra delay or loss (RTT = 36ms and loss = 0%).
Figure 5 shows the results as a heatmap, where the color
of each cell corresponds to the percent PLT difference
between QUIC and TCP for a given bandwidth (vertical
dimension) and object size/number (horizontal direction). Red indicates that QUIC is faster (smaller PLT),
blue indicates that TCP is faster, and white indicates
statistically insignificant differences.
Our key finding is that QUIC outperforms TCP in every scenario except when pages contain large numbers of small objects. QUIC's performance gain for smaller objects is mainly due to its 0-RTT connection establishment, which eliminates secure connection setup delays that account for a substantial portion of total transfer time in these cases.
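A back-of-the-envelope model shows why handshake round trips dominate for small objects (the numbers below are illustrative, not the paper's measurements; TCP+TLS 1.2 is assumed to spend 3 round trips on connection setup):

```python
def transfer_time_ms(handshake_rtts: int, rtt_ms: float,
                     object_kb: float, bw_mbps: float) -> float:
    """First-order fetch time: handshake round trips, one
    request/response round trip, plus serialization delay.
    Ignores congestion-window growth and server think time."""
    serialization_ms = object_kb * 8.0 / bw_mbps  # KB -> kbit over Mbps
    return (handshake_rtts + 1) * rtt_ms + serialization_ms

# 10KB object at 5Mbps with RTT = 36ms:
#   0-RTT QUIC:                (0 + 1) * 36 + 16 =  52 ms
#   TCP + TLS 1.2 (3 RTTs):    (3 + 1) * 36 + 16 = 160 ms
```

In this simple model the handshake accounts for over two thirds of the TCP+TLS fetch time, while it vanishes entirely under 0-RTT.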
Figure 3. Timeline showing unfairness between QUIC and TCP when transferring data over the same 5Mbps bottleneck link (RTT = 36ms, buffer = 30KB).
Figure 4. Timeline showing congestion window sizes for QUIC and TCP when transferring data over the same 5Mbps bottleneck link (RTT = 36ms, buffer = 30KB): (a) QUIC vs. TCP; (b) a 5-second zoom of (a).