Thanks to my long-time collaborator Terry Moore for his encouragement and philosophical dialog, to my comrade Tim Griffin for his critical support, and to Jerry Saltzer, Joe Touch, Rick McGeer, Bob Harper, Glenn Ricart, Elaine Wenderholm, and the reviewers for their insightful comments.
This work was performed under financial assistance award 70NANB17H174 from the U.S. Department of Commerce, National Institute of Standards and Technology.
References
1. Akhshabi, S. and Dovrolis, C. The evolution of layered protocol stacks leads to an hourglass-shaped architecture. In Proceedings of the ACM SIGCOMM 2011 Conference. ACM, New York, NY, 206–217.
2. Beck, M., Moore, T., Luszczek, P. and Danalis, A.
Interoperable convergence of storage, networking,
and computation. Advances in Information and
Communication. Lecture Notes in Networks and
Systems 70. Springer, 2020, 667–690.
3. Cardelli, L. A semantics of multiple inheritance. Information and Computation. Springer-Verlag.
4. Chankhunthod, A., Danzig, P.B., Neerdaels, C., Schwartz, M.F. and Worrell, K.J. A hierarchical Internet object cache. In Proceedings of the 1996 USENIX Technical Conference.
5. Clark, D.D. Interoperation, open interfaces and
protocol architecture. The Unpredictable Certainty:
White Paper. The National Academies Press,
Washington, DC, 1997, 133–144.
6. Fagg, G., Moore, T., Beck, M., Wolski, R., Bassi, A., Plank,
J.S., and Swany, M. The Internet backplane protocol: A
study in resource sharing. In Proceedings of the 2nd IEEE/
ACM Intern. Symp. Cluster Computing and the Grid.
7. Foster, I., Kesselman, C. and Tuecke, S. The anatomy
of the grid: Enabling scalable virtual organizations.
The Intern. J. High Performance Computing
Applications 15, 3 (2001), 200–222.
8. Peterson, L.L. and Davie, B.S. Computer Networks,
A Systems Approach, 5th Edition. Morgan Kaufmann
Publishers Inc., San Francisco, CA, USA, 2011.
9. Ritchie, D.M. and Thompson, K. The Unix time-sharing
system. Commun. ACM 17, 7 (July 1974), 365–375.
10. Saltzer, J. H., Reed, D.P. and Clark, D. D. End-to-end
arguments in system design. ACM Trans. Comput.
Syst. 2, 4 (Nov. 1984), 277–288.
11. Shilton, K., Burke, J., Zhang, L. and Claffy, K. Anticipating
policy and social implications of named data networking.
Commun. ACM 59, 12 (Dec. 2016), 92–101.
Micah Beck (email@example.com) is an associate professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville, and is currently with the Office of Advanced Cyberinfrastructure at the National Science Foundation. The work discussed
here was completed prior to his government service and
does not reflect the views, conclusions, or opinions of the
National Science Foundation or of the U.S. Government.
Copyright held by author/owner.
Publication rights licensed to ACM. $15.00.
Because HTTP responses are based on a collection of stored objects, they exhibit stability over time and consistency between clients. Temporal stability is the basis of the caching implemented in Web clients, and the additional consistency between different clients enabled shared Web caching.4 However, this stability is not perfect and, in particular, does not hold for dynamic HTTP responses that are the result of arbitrary server-side computation. This can result in the return of stale cached responses.
By using the HTTP Cache-Control
header directives in an HTTP response,
the server can declare the extent of temporal stability, stability across clients,
or the complete lack of stability in that
response. If servers respect the stability
guarantees declared in Cache-Control
directives, Web caches can use them to
ensure correctness of their responses.
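As a concrete illustration, the freshness rules that Cache-Control declares can be sketched in a few lines. The following Python sketch is a simplified model of the directive semantics, not a complete implementation of the HTTP caching specification (RFC 9111); the handling of no-cache in particular is deliberately conservative:

```python
def parse_cache_control(header):
    """Parse a Cache-Control header value into a dict of directives."""
    directives = {}
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.lower()] = value.strip('"') if value else True
    return directives

def is_fresh(directives, age_seconds, shared_cache=False):
    """Decide whether a cached response may be reused without revalidation."""
    # no-store forbids caching; no-cache (simplified here) forces revalidation
    if "no-store" in directives or "no-cache" in directives:
        return False
    # private responses must not be served from a shared cache
    if shared_cache and "private" in directives:
        return False
    # s-maxage overrides max-age for shared caches
    if shared_cache and "s-maxage" in directives:
        return age_seconds < int(directives["s-maxage"])
    if "max-age" in directives:
        return age_seconds < int(directives["max-age"])
    return False  # no explicit lifetime: conservatively revalidate

cc = parse_cache_control("public, max-age=300, s-maxage=60")
```

With these directives, a shared cache must treat the response as stale after 60 seconds even though a private browser cache may keep it for 300, showing how the server uses the metadata to declare different degrees of stability to different caches.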
Viewed as a service specification,
HTTP with a requirement for accuracy
in Cache-Control directives is logically stronger because it enables accurate assertions to be made regarding the
correspondence between such metadata and server responses. In terms of
the Hourglass Theorem, the weakness
of the less constrained interpretation of
HTTP without accurate caching metadata allows for looser implementations.
This is traded off against the ability to
support applications that require consistency in HTTP responses.
In practice, the ease and cost savings of ignoring consistency of lifetime metadata in server content management have generally won out over the ability to support applications requiring consistency. While Web browsers do take advantage of temporal consistency, they also sometimes return stale responses and require end users to intervene manually. The popularity of shared HTTP caches has been hampered by their inability to ensure consistency.
The inefficiency of uncached HTTP in
delivering stable responses has largely
been countervailed by the trend toward
increasing bandwidth in the Internet,
although it is a significant factor inhibiting the deployment scalability of the
Internet in parts of the world where network bandwidth is highly constrained.
Designing a spanning layer for node services. Network architects have long sought to define an interface to enable interoperation in the creation of new services using the generalized local transfer, storage, and processing services of network intermediate nodes. Examples of such efforts include active networking, middleboxes, the computational grid, PlanetLab, and GENI, as well as current efforts at defining containers for computational workloads. A full survey is beyond the scope of this article.
Nodes that comprise such general networks are variously characterized as virtual machines or programmable routers. A standard interface to local node services would act as a spanning layer defining a community of interoperability in service creation. Many current proposals for such a standard define spanning layers that are logically strong, for instance, allowing for the guaranteed reservation of resources.
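To make the contrast concrete, here is a hypothetical sketch (every name and signature is invented for illustration, not drawn from any actual proposal) of a weak, best-effort node-services interface and a logically stronger variant that adds guaranteed resource reservation. By the Hourglass Theorem, the stronger specification admits fewer possible implementations:

```python
from abc import ABC, abstractmethod

class NodeServices(ABC):
    """Hypothetical weak spanning layer: best-effort local node services."""

    @abstractmethod
    def transfer(self, data: bytes, destination: str) -> None:
        """Forward data toward a destination; delivery is best-effort."""

    @abstractmethod
    def store(self, data: bytes, duration_hint_s: int) -> str:
        """Store data, returning a handle. The duration is only a hint:
        the node may evict early, so this promise is weak."""

    @abstractmethod
    def process(self, program: bytes, input_handles: list) -> str:
        """Run a bounded computation over stored data; may be refused."""

class ReservingNodeServices(NodeServices):
    """Logically stronger variant: adds guaranteed resource reservation,
    strengthening the specification and thereby narrowing the set of
    implementations (supports) that can satisfy it."""

    @abstractmethod
    def reserve(self, storage_bytes: int, duration_s: int) -> str:
        """Guarantee exclusive use of resources for a period; returns a lease."""
```

A node that cannot pre-allocate resources can implement `NodeServices` but not `ReservingNodeServices`, which is exactly the tradeoff between logical strength and the number of possible supports.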
The Hourglass Theorem can be the basis for an argument that such a spanning layer should be chosen so as to be minimally sufficient for a set of necessary applications in order to maximize the number of possible supports.2 If we accept the DST as a more general design rule, then simplicity, generality, and resource limitation should also be maximized.
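The "minimally sufficient" criterion can be illustrated with a toy model (the applications, guarantees, and implementations below are invented for illustration): the weakest spanning layer that still supports a set of necessary applications offers exactly the union of the guarantees those applications require, and nothing more, and any strengthening of that specification shrinks the set of possible supports:

```python
# Toy model: each necessary application demands a set of guarantees
# from the spanning layer.
REQUIREMENTS = {
    "static-web":   {"transfer"},
    "shared-cache": {"transfer", "storage"},
    "edge-compute": {"transfer", "processing"},
}

def minimally_sufficient_spec(necessary_apps):
    """Union of the guarantees required by the necessary applications --
    the weakest specification that still supports all of them."""
    spec = set()
    for app in necessary_apps:
        spec |= REQUIREMENTS[app]
    return spec

def possible_supports(spec, implementations):
    """Implementations whose offered guarantees cover the spec.
    A weaker spec (smaller set) is supported by more implementations."""
    return [name for name, offers in implementations.items()
            if spec <= offers]

impls = {"thin-node": {"transfer", "storage"},
         "fat-node":  {"transfer", "storage", "processing"}}
```

Requiring only static-web and shared-cache yields the spec {transfer, storage}, which both candidate implementations satisfy; declaring edge-compute "necessary" as well strengthens the spec and leaves only the fat node as a possible support.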
A review of current proposals may
reveal an acceptance of strong assumptions, complexity, specialization,
and unbounded resource allocation as
“necessary.” If so, the DST suggests such
designs may suffer diminished deployment scalability, which can be detrimental in any standard so vital to the future of
global information infrastructure.
This article is intended as a first step
in a research program to devise a common language for analyzing the design
of spanning layers in layered systems of
all kinds and predicting the outcomes
of such designs. The primary technical contribution is the formulation of
a layered system of service interfaces
in terms of program logic. This yields a
definition of “deployment scalability”
that seeks to capture the intent of the
The further discussion of other aspects of the thin waist is intended to capture some of the informal arguments that have been made about the design of the spanning layer. The Deployment Scalability Tradeoff is a general design principle intended to fulfill a role in arguing for thinness. All aspects of this characterization seem ripe for further formalization and refinement.