and absence of root cause analysis for reported observations. We refer the reader to our full paper14 for a detailed discussion of these works.
Google-reported QUIC performance. The only large-scale performance results for QUIC in production come from Google, mainly because at the time of writing it is the only organization known to have deployed the protocol in production. Google claims that QUIC yields a 3% improvement in mean page load time (PLT) on Google Search when compared to TCP, and that the slowest 1% of connections load one second faster when using QUIC.9 In addition, in a recent paper15 Google reported that on average, QUIC reduces Google Search latency by 8% and 3.5% for desktop and mobile users, respectively, and reduces video rebuffer time by 18% for desktop and 15.3% for mobile users. Google attributes
these performance gains to QUIC’s lower-latency connection establishment (described below), reduced head-of-line blocking, improved congestion control, and better loss recovery.
In contrast to our work, Google-reported results are aggregated statistics that do not lend themselves to repeatable tests or root cause analysis. This work takes a complementary approach, using extensive controlled experiments in emulated and operational networks to evaluate Google’s performance claims (Section 4) and root cause analysis to explain the behavior we observe.
3. METHODOLOGY
We now describe our methodology for evaluating QUIC and
comparing it to the combination of HTTP/2, TLS, and TCP.
The tools we developed for this work and the data we collected are publicly available.
3.1. Testbed
We conduct our evaluation on a testbed that consists of a client machine running Google’s Chrome browserf connected to the Internet through a router under our control (Figure 1). The router runs OpenWrt (Barrier Breaker 14.07, Linux OpenWrt 3.10.49) and includes Linux’s Traffic Control and Network Emulation (netem) tools, which we use to emulate network conditions including available bandwidth, loss, delay, jitter, and packet reordering.
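For concreteness, the following minimal Python sketch shows how such conditions can be imposed with tc on the router; the interface name and parameter values are illustrative assumptions, not the settings used in our experiments.

import subprocess

def emulate(iface="eth0", delay="50ms", jitter="5ms", loss="1%", rate="10mbit"):
    """Emulate delay, jitter, and loss with netem; cap bandwidth with tbf."""
    # Remove any existing root qdisc (ignore the error if none exists).
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"],
                   stderr=subprocess.DEVNULL)
    # netem as the root qdisc adds delay, jitter, and random loss.
    subprocess.run(["tc", "qdisc", "add", "dev", iface, "root",
                    "handle", "1:", "netem", "delay", delay, jitter,
                    "loss", loss], check=True)
    # A token bucket filter attached under netem limits the bandwidth.
    subprocess.run(["tc", "qdisc", "add", "dev", iface, "parent", "1:1",
                    "handle", "10:", "tbf", "rate", rate,
                    "buffer", "32000", "limit", "30000"], check=True)

emulate()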
Our clients consist of a desktop (Ubuntu 14.04, 8GB memory, Intel Core i5 3.3GHz) and two mobile devices: a Nexus 6 (Android 6.0.1, 3GB memory, 2.7GHz quad-core) and a Moto G (Android 4.4.4, 1GB memory, 1.2GHz quad-core).
Our servers run on Amazon EC2 (Kernel 4.4.0-34-generic, Ubuntu 14.04, 16GB memory, 2.4GHz quad-core) and support HTTP/2 over TCP (using Cubic and the default Linux TCP stack configuration) via Apache 2.4, and over QUIC using the
standalone QUIC server provided as part of the Chromium
source code. To ensure comparable results between protocols,
we run our Apache and QUIC servers on the same virtual
machine and use the same machine/device as the client. We
increase the UDP buffer sizes if necessary to ensure there are
no networking bottlenecks caused by the OS. As we discuss in
Section 4.1, we configure QUIC so it performs identically to
Google’s production QUIC servers.
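As a reference for the buffer-size adjustment mentioned above, the sketch below raises the kernel’s socket buffer ceilings; the 16MB value is an assumption for illustration, since we only require the buffers to be large enough not to throttle transfers.

# Raise OS-wide socket buffer limits so large UDP (QUIC) and TCP
# transfers are not capped by kernel defaults. Run as root.
LIMITS = {
    "net.core.rmem_max": "16777216",      # maximum receive buffer (bytes)
    "net.core.wmem_max": "16777216",      # maximum send buffer (bytes)
    "net.core.rmem_default": "16777216",
    "net.core.wmem_default": "16777216",
}
for key, value in LIMITS.items():
    # Equivalent to `sysctl -w <key>=<value>`.
    with open("/proc/sys/" + key.replace(".", "/"), "w") as f:
        f.write(value)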
QUIC uses HTTP/2 and encryption on top of its reliable
transport implementation. To ensure a fair comparison, we
compare QUIC with HTTP/2 over TLS, atop TCP. Throughout this paper we refer to measurements of this full HTTP/2+TLS+TCP stack simply as “TCP”.
Our servers add all necessary HTTP directives to avoid any
caching of data. We also clear the browser cache and close all
sockets between experiments to prevent “warmed up” connections from impacting results. However, we do not clear
the state used for QUIC’s 0-RTT connection establishment.
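As one way to enforce cold caches between runs, the browser cache can be cleared programmatically through Chrome’s debugging interface; the sketch below assumes Chrome was launched with --remote-debugging-port=9222 and uses the third-party websocket-client package.

import json
import urllib.request
import websocket  # third-party: pip install websocket-client

# Attach to the first tab exposed by Chrome's debugging endpoint.
tabs = json.load(urllib.request.urlopen("http://localhost:9222/json"))
ws = websocket.create_connection(tabs[0]["webSocketDebuggerUrl"])

# Clear the browser cache so the next page load starts cold.
ws.send(json.dumps({"id": 1, "method": "Network.clearBrowserCache"}))
ws.recv()
ws.close()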
3.2. Experiments and performance metrics
Experiments. Unless otherwise stated, for each evaluation
scenario (network conditions, client, and server) we conduct
at least 10 measurements of each transport protocol (TCP
and QUIC). To mitigate any bias from transient noise, we
run experiments in 10 rounds or more, each consisting of a
download using TCP and one using QUIC, back-to-back. We
present the percent differences in performance between
TCP and QUIC and indicate whether they are statistically
significant. All tests are automated using Python scripts and
Chrome’s debugging tools. We use Android Debug Bridge
for automating tests running on mobile phones.
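To illustrate the comparison step, the sketch below computes the percent difference in mean PLT between the two protocols and tests it for significance; Welch’s t-test is shown as one standard choice (this section does not prescribe a particular test), and the sample values are made up for illustration.

from statistics import mean
from scipy import stats  # third-party: pip install scipy

def compare(tcp_samples, quic_samples, alpha=0.05):
    """Percent difference in mean PLT (negative means QUIC is faster)
    and whether the difference is statistically significant."""
    diff_pct = 100.0 * (mean(quic_samples) - mean(tcp_samples)) / mean(tcp_samples)
    _, p_value = stats.ttest_ind(tcp_samples, quic_samples, equal_var=False)
    return diff_pct, p_value < alpha

# Hypothetical PLTs in seconds from ten back-to-back rounds.
tcp = [1.42, 1.38, 1.45, 1.40, 1.43, 1.39, 1.41, 1.44, 1.37, 1.46]
quic = [1.21, 1.25, 1.19, 1.23, 1.22, 1.26, 1.20, 1.24, 1.18, 1.27]
print(compare(tcp, quic))  # approximately (-13.4, True)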
Application. We test QUIC performance using the Chrome browser, which currently integrates the protocol.g
For Chrome, we evaluate QUIC performance using Web
pages consisting of static HTML that references JPG images
(varying numbers and sizes of images) without any other object
dependencies or scripts. While previous work demonstrates
that many factors impact load times and user-perceived performance for typical, popular Web pages,3,18,23 the focus of this
work is only on transport protocol performance. Our choice of
simple pages ensures that PLT measurements reflect only the
efficiency of the transport protocol and not browser-induced
factors such as script loading and execution time.
Furthermore, our simple Web pages are essential for isolating the impact of parameters such as size and number of
objects on QUIC multiplexing. We leave investigating the
effect of dynamic pages on performance for future work.
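As an illustration of how such pages can be produced, the sketch below generates a static HTML page referencing a configurable number of fixed-size objects; random bytes stand in for JPG data, an assumption that is harmless here because only transfer size affects transport behavior.

import os

def make_page(out_dir, n_objects, obj_kb):
    """Write index.html plus n_objects image files of obj_kb KB each."""
    os.makedirs(out_dir, exist_ok=True)
    tags = []
    for i in range(n_objects):
        name = f"img_{i}.jpg"
        with open(os.path.join(out_dir, name), "wb") as f:
            f.write(os.urandom(obj_kb * 1024))  # size is all that matters
        tags.append(f'<img src="{name}">')
    with open(os.path.join(out_dir, "index.html"), "w") as f:
        f.write("<html><body>\n" + "\n".join(tags) + "\n</body></html>")

# For example, a page with 100 objects of 10KB each:
make_page("site_100x10KB", n_objects=100, obj_kb=10)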
Performance metrics. We measure throughput, “page load time” (i.e., the time to download all objects on a page), and video quality metrics that include time to start, rebuffering events, and rebuffering time. For Web content, we use Chrome’s debugging tools to measure PLT.
Figure 1. Testbed setup (client machine, router running the network emulator, server). The server is an EC2 virtual machine running both QUIC and Apache servers. The empirical RTT from client to server is 12ms and loss is negligible.
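For reference, one way to obtain PLT through Chrome’s debugging interface is to navigate a tab and wait for the load event, as in the sketch below; the test URL is hypothetical, and the setup again assumes --remote-debugging-port=9222 with the websocket-client package.

import json
import time
import urllib.request
import websocket  # third-party: pip install websocket-client

PAGE = "https://server.example/index.html"  # hypothetical test page

tabs = json.load(urllib.request.urlopen("http://localhost:9222/json"))
ws = websocket.create_connection(tabs[0]["webSocketDebuggerUrl"])
ws.send(json.dumps({"id": 1, "method": "Page.enable"}))

start = time.time()
ws.send(json.dumps({"id": 2, "method": "Page.navigate",
                    "params": {"url": PAGE}}))
# Command replies and other events are skipped until the load event fires.
while True:
    event = json.loads(ws.recv())
    if event.get("method") == "Page.loadEventFired":
        print(f"PLT: {time.time() - start:.3f}s")
        break
ws.close()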
f The only browser supporting QUIC at the time of this writing.
g Chrome “races” TCP and QUIC connections for the same server and uses
the one that establishes a connection first. As such, the protocol used may
vary from the intended behavior.