aries will also have to be updated to
support HTTP 2.0, which is a much
longer labor- and capital-intensive
process.
HTTP 1.x will be around for at least
another decade, and most servers
and clients will have to support both
1.x and 2.0 standards. As a result, an
HTTP 2.0-capable client must be able
to discover whether the server—and
any and all intermediaries—support
the HTTP 2.0 protocol when initiating a new HTTP session. There are two
cases to consider:
˲ Initiating a new (secure) HTTPS
connection via TLS.
˲ Initiating a new (unencrypted)
HTTP connection.
In the case of a secure HTTPS connection, the new ALPN (Application
Layer Protocol Negotiation9) extension
to the TLS protocol allows users to
negotiate HTTP 2.0 support as part of
the regular TLS handshake: the client
sends the list of protocols it supports
(for example, http/2.0); the server selects one of the advertised protocols
and confirms its choice by sending the
protocol name back to the client as
part of the regular TLS handshake.
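To make this concrete, here is a minimal client-side sketch of ALPN negotiation using Python's standard ssl module. The module postdates these drafts, the "h2" token is the identifier later implementations settled on (the draft-era token was http/2.0, as noted above), and the host name is a placeholder:

```python
import socket
import ssl

# Minimal sketch: advertise the protocols we support, in preference
# order, as part of the TLS handshake. "h2" and "http/1.1" are assumed
# ALPN tokens; the draft described in the text used "http/2.0".
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # The server selects one advertised protocol and echoes it back
        # during the handshake; no extra round-trip is spent.
        print("negotiated:", tls.selected_alpn_protocol())
```

Because the protocol choice rides along with the regular TLS handshake, the secure case costs no additional round-trips, which is why it is the simpler of the two.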
Establishing an HTTP 2.0 connection over a regular, nonencrypted
channel requires a bit more work. Because both HTTP 1.x and HTTP 2.0 run
on the same port (80), in the absence
of any other information about the
server’s support for HTTP 2.0, the client will have to use the HTTP Upgrade
mechanism to negotiate the appropriate protocol, as shown in Figure 8.
Using the Upgrade flow, if the server
does not support HTTP 2.0, then it can
immediately respond to the request
with an HTTP 1.1 response. Alternatively, it can confirm the HTTP 2.0 upgrade by returning the “101 Switching
Protocols” response in HTTP 1.1 format, and then immediately switch to
HTTP 2.0 and return the response using the new binary framing protocol.
In either case, no extra round-trips are
incurred.
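As a rough sketch of this Upgrade flow (Figure 8 remains the authoritative picture), the example below sends a draft-style Upgrade request over a plain socket and inspects the status line. The header tokens and the placeholder HTTP2-Settings value are assumptions based on the in-progress drafts, and the host is a stand-in:

```python
import socket

# Sketch of the cleartext Upgrade flow. The tokens "Upgrade: HTTP/2.0"
# and "HTTP2-Settings" follow the in-progress drafts, and the
# HTTP2-Settings value below is a placeholder, not a real payload.
request = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: Upgrade, HTTP2-Settings\r\n"
    b"Upgrade: HTTP/2.0\r\n"
    b"HTTP2-Settings: <base64-encoded SETTINGS payload>\r\n"
    b"\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request)
    status_line = sock.recv(4096).split(b"\r\n", 1)[0]
    if status_line.startswith(b"HTTP/1.1 101"):
        # Server accepted the upgrade and now switches to binary framing.
        print("upgraded to HTTP 2.0")
    else:
        # Server declined and answered the request over HTTP 1.1 instead.
        print("staying on HTTP 1.1:", status_line.decode())
```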
Developing a major revision of a protocol underlying all Web communication is a nontrivial task requiring a lot
of careful thought, experimentation,
and coordination. As such, crystal gazing for HTTP 2.0 timelines is dangerous business—it will be ready when it
is ready. Having said that, the HTTP
Working Group is making rapid progress. Its past and projected milestones
are as follows:
˲ November 2009—SPDY protocol
announced by Google.
˲ March 2012—call for proposals for
HTTP 2.0.
˲ September 2012—first draft of
HTTP 2.0.
˲ July 2013—first implementation
draft of HTTP 2.0.
˲ April 2014—Working Group last
call for HTTP 2.0.
˲ November 2014—submit HTTP
2.0 to IESG (Internet Engineering
Steering Group) as a Proposed Standard.
SPDY was an experimental protocol
developed at Google and announced
in mid-2009, which later formed the
basis of early HTTP 2.0 drafts. Many
revisions and improvements later, as
of late 2013, there is now an implementation draft of the protocol, and
interoperability work is in full swing—
recent Interop events featured client
and server implementations from
Microsoft Open Technologies, Mozilla, Google, Akamai, and other contributors. In short, all signs indicate
the projected schedule is (for once)
on track: 2014 should be the year for
making the Web (even) faster.
With HTTP 2.0 deployed far and wide,
can we kick back and declare victory? The Web will be fast, right? Well,
as with any performance optimization, the moment one bottleneck is
removed, the next one is unlocked.
There is plenty of room for further optimization:
˲ HTTP 2.0 eliminates HOL blocking at the application layer, but it still
exists at the transport (TCP) layer. Further, now that all of the streams can be
multiplexed over a single connection,
tuning congestion control, mitigating
bufferbloat, and all other TCP optimizations become even more critical.
˲ TLS is a critical and largely unoptimized frontier: we need to reduce
the number of handshake round-trips,
upgrade outdated clients to get wider
adoption, and improve client and server performance in general.
˲ HTTP 2.0 opens up a new world of
research opportunities for optimal implementations of header-compression
strategies, prioritization, and flow-control logic both on the client and server,
as well as the use of server push.
˲ All existing Web applications will
continue to work over HTTP 2.0—the
servers will have to be upgraded, but
otherwise the transport switch is transparent. That is not to say, however, that
existing and new applications cannot
be tuned to perform better over HTTP
2.0 by leveraging new functionality
such as server push, prioritization, and
so on (see the sketch after this list). Web developers will have to develop new best practices, and revert
and unlearn the numerous HTTP 1.1
workarounds they are using today.
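As a taste of what leveraging server push could look like, here is a hedged sketch using the third-party Python h2 library, which postdates this article and implements the finalized protocol rather than the 2013 drafts; the host and resource names are placeholders:

```python
# Sketch of HTTP/2 server push using the third-party "h2" library
# (python-hyper). Runs entirely in memory, without sockets, so the
# frame exchange can be followed end to end.
import h2.config
import h2.connection

client = h2.connection.H2Connection(
    config=h2.config.H2Configuration(client_side=True))
server = h2.connection.H2Connection(
    config=h2.config.H2Configuration(client_side=False))

client.initiate_connection()
server.initiate_connection()

# The client requests "/" on stream 1 (client streams are odd-numbered).
client.send_headers(1, [
    (":method", "GET"), (":path", "/"),
    (":authority", "example.com"), (":scheme", "https"),
], end_stream=True)
server.receive_data(client.data_to_send())

# Before answering, the server promises a resource the page will need,
# sparing the client a discovery round-trip.
server.push_stream(
    stream_id=1,             # the stream carrying the client's request
    promised_stream_id=2,    # server-initiated streams are even-numbered
    request_headers=[
        (":method", "GET"), (":path", "/style.css"),
        (":authority", "example.com"), (":scheme", "https"),
    ],
)

# The client now observes a PushedStreamReceived event for /style.css.
events = client.receive_data(server.data_to_send())
print([type(e).__name__ for e in events])
```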
In short, there is a lot more work to
be done. HTTP 2.0 is a significant milestone that will help make the Web faster, but it is not the end of the journey.
Related articles: Improving Performance on the Internet; How Fast Is Your Website?
References
1. Akamai. State of the Internet, 2013; http://www.
2. AT&T. Average speeds for AT&T LaptopConnect
devices, 2013; http://www.att.com/esupport/article.
=vttq9Cya2ig and http://developer.att.com/home/
3. Belshe, M. More bandwidth doesn’t matter (much),
4. Grigorik, I. High-performance networking in Google
Chrome, 2013; http://www.igvita.com/posa/high-performance-networking-in-google-chrome/.
5. IETF HTTPbis Working Group. Charter, 2012; http://
5. HTTP Archive; http://www.httparchive.org/.
6. IETF HTTPbis Working Group. Charter, 2012; http://
7. IETF HTTPbis Working Group. HTTP 2.0 specifications,
8. IETF HTTPbis Working Group. HPACK: Header
Compression for HTTP/2.0, 2013; http://tools.ietf.org/
9. IETF Network Working Group. Transport Layer
Security (TLS) Application Layer Protocol Negotiation
(ALPN) Extension, 2013; http://tools.ietf.org/html/
10. Upson, L. Google I/O 2013 keynote address, 2013;
Ilya Grigorik is a web performance engineer and
developer advocate at Google, where he works to make
the Web faster by building and driving adoption of
performance best practices at Google and beyond.
© 2013 ACM 0001-0782/13/12 $15.00