We find that in mobile, QUIC spends most of its time (58%) in the "Application Limited" state, which contrasts substantially with the desktop scenario (only 7% of the time). The reason for this behavior is that QUIC runs in a user-space process, whereas TCP runs in the kernel. As a result, QUIC on a mobile device is unable to consume received packets as quickly as on a desktop, leading to suboptimal performance, particularly when there is ample bandwidth available.
Table 2 shows the fraction of time (based on server logs) that QUIC spent in each state in both environments for 50Mbps with no added latency or loss. By revealing the changes in time spent in each state, such inferred state machines help diagnose problems and develop a better understanding of protocol behavior.

State                  Desktop    MotoG
Init                     0.01%    0.01%
Slow start               1.65%    0.42%
Application limited      7.05%   58.84%
Congestion avoidance    91.28%   40.55%
Tail loss probe          0.00%    0.00%
Recovery                 0.00%    0.18%

Table 2. The fraction of time QUIC spent in each state on Desktop vs. MotoG (QUICv34, 50Mbps, no added loss or delay). The table shows that poor performance for QUIC on mobile devices can be attributed to applications not processing packets quickly enough. Note that the zero values are due to rounding.

Figure 10. QUICv34 vs. TCP for varying object sizes (5KB to 10MB) on a Nexus6 smartphone (using WiFi): (a) no added loss or latency; (b) 1% loss. We find that QUIC's improvements diminish or disappear entirely when running on mobile devices.

5. CONCLUDING DISCUSSION
In this paper, we address the problem of evaluating an application-layer transport protocol that was built without a formal specification, is rapidly evolving, and is deployed at scale with nonpublic configuration parameters. To do so, we use a methodology and testbed that allow us to conduct controlled experiments in a variety of network conditions, instrument the protocol to reason about its performance, and ensure that our evaluations use settings that approximate those deployed in the wild. We used this approach to evaluate QUIC, and found cases where it performs well and poorly, both in traditional desktop and mobile environments. With the help of an inferred protocol state machine and information about time spent in each state, we explained the performance results we observed.
Additionally, we performed a number of other experiments that were omitted from this paper due to space limitations. These included testing QUIC's performance for video streaming, tests in operational mobile networks, and the impact of proxying. For more information on these experiments and our findings, we refer the reader to our full paper.14
Lessons learned. During our evaluation of QUIC, we identified several key challenges for repeatable, rigorous analyses of application-layer transport protocols in general. Below we list a number of lessons learned while addressing them.
• Proper empirical evaluation is easy to get wrong: A successful protocol evaluation and analysis requires proper configuration, calibration, workload isolation, coverage of a wide array of test environments, rigorous statistical analysis, and root cause analysis. While this may seem obvious to the seasoned empiricist, it took us much effort and many attempts to get these right, so we leave these lessons as reminders for a general audience.
• Models are essential for explaining performance: Transport protocol dynamics are complex and difficult to summarize via traditional logging. We found that building an inferred state machine model and using transitions between states helped tame this complexity and offered insight into the root causes of protocol performance.
• Plan for change. As the Internet evolves, so too will transport protocols: It is essential to develop evaluation techniques that adapt easily to such changes to provide consistent and fair comparisons over time.
• Do not forget to look at the big picture: It is easy to get caught up in head-to-head comparisons between a flow from one protocol and a flow from another. However, in the wide area there may be thousands or more flows competing over the same bottleneck link. In our limited fairness study, we found that protocol differences observed in isolation are magnified at scale. Thus, it is important to incorporate analysis of interactions between flows when evaluating protocols.
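The per-state time fractions reported in Table 2 can be computed from timestamped state-transition records. A minimal sketch of that bookkeeping, assuming a hypothetical (timestamp, state) record format rather than QUIC's actual server-log schema:

```python
# Sketch: fraction of connection time spent in each inferred protocol
# state, given timestamped transitions. The log format is hypothetical;
# real QUIC server logs would need their own parser.

def state_time_fractions(transitions):
    """transitions: time-ordered list of (timestamp_seconds, state_name)
    pairs; the last entry marks the end of the connection."""
    totals = {}
    for (t0, state), (t1, _) in zip(transitions, transitions[1:]):
        totals[state] = totals.get(state, 0.0) + (t1 - t0)
    duration = transitions[-1][0] - transitions[0][0]
    return {state: dt / duration for state, dt in totals.items()}

# Illustrative (made-up) transition log for one short connection.
log = [
    (0.00, "Init"),
    (0.01, "SlowStart"),
    (0.20, "CongestionAvoidance"),
    (0.80, "ApplicationLimited"),
    (1.00, "End"),  # sentinel: end of connection, accrues no time
]
fractions = state_time_fractions(log)
```

Aggregating such fractions across many connections yields a table like Table 2, and comparing them across environments (desktop vs. mobile) is what exposed the application-limited bottleneck discussed above.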