Viewing the Internet in terms of the
hourglass model, adding NATed subnetworks to the implementation is a
weakening of the IP spanning layer. The
global reachability condition that datagrams can be sent from any sending
endpoint to any receiver’s IP address
does not hold in the NATed Internet.
This breaking of symmetry in reachability is often viewed as a weakness of NAT.
In spite of arguments against it,
NAT has become ubiquitous in the consumer Internet. While NAT does solve a
problem with scarcity of IPv4 addresses, there are other ways to allow sharing
of a single IP address by many nodes,
some of which maintain symmetric
reachability. Our analysis suggests the
logical weakness of the NATed Internet’s design may in fact help to explain
its greater deployment scalability.
By abandoning symmetric reachability, the NATed Internet trades off a
logically weaker spanning layer against
an expanded class of possible supports.
This comes at the expense of excluding
some possible applications that require global reachability (such as pure
peer-to-peer systems). The exclusion of
some applications has been generally
acceptable to the community of commercial Internet users, who sometimes rely on workarounds created by the providers of commercial peer-to-peer services
that require general reachability.
Users of applications that require
symmetric reachability have responded
by working within a separate community of interoperability, sometimes connecting to non-NATed networks such as
those at many universities and research
laboratories using Virtual Private Networks. This bifurcation is made more
acceptable by the fact that most home
and business users do not require global
reachability. In this analysis, the broader
support possible for NAT has overcome
resistance due to violations of layering
and lack of symmetric reachability.
Process creation in Unix. In early
operating systems it was common for
the creation of a new process to be a
privileged operation that could be invoked only from code running with
supervisory privileges. There were multiple reasons for such caution, but one
was that the power to allocate operating system resources that comprise a
new process was seen as too great to
be delegated to the application level.
Another reason was that the power of process creation (for example, determining the identity under which the newly
created process would run) was seen as
too dangerous. This led to a design approach in which command line interpretation was a near-immutable function of the operating system that could
only be changed by the installation of
new supervisory code modules, often a
privilege available only to the vendor or the system administrator.
In Unix, process creation was implemented by the fork() system call, a
logically weaker operation that does
not allow any of the attributes of the
child process to be determined by the
parent, but instead requires that the
child inherit such attributes from the
parent.9 Operations that changed sensitive properties of a process were factored out into orthogonal calls such as
chown() and nice(). These were fully
or partially restricted to operating in
supervisory mode or integrated with
exec() (which is not so restricted) using chmod() and the set-user-ID bit.
The decision was made to allow the
allocation of kernel resources by applications, which opens the possibility of
“fork-bomb” denial-of-service attacks.
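The fork()/exec() factoring described above can be sketched in a few lines. This is a POSIX-only illustration of the pattern, not code from any particular system; the command the child runs is arbitrary:

```python
# Sketch of the fork()/exec() pattern: the child inherits its attributes
# (UID, environment, working directory, open files) from the parent, and
# exec() then replaces the program image. No privileged "create a process
# with a chosen identity" operation is involved.
import os

pid = os.fork()                  # child begins as a near-copy of the parent
if pid == 0:
    # Child: every attribute was inherited; now replace the image,
    # exactly as a nonprivileged shell would when launching a command.
    os.execvp("echo", ["echo", "hello from the child"])
else:
    _, status = os.waitpid(pid, 0)   # parent waits, as a shell does
    exit_code = os.waitstatus_to_exitcode(status)
```

Because the child's identity is inherited rather than chosen by the parent, sensitive operations such as changing ownership remain separately controlled, as described above.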
The result of this design was not only
the ability to implement a variety of different command line interpreters as
nonprivileged user processes (leading
to innovations and the introduction
of powerful new language features)
but also the flexible use of fork() as a
tool in the design of multiprocess applications. This design approach has
allowed the adaptation of kernels that
implement the Unix-based POSIX standard to run on mobile and embedded
devices that could not have been anticipated by the original designers.
Caching metadata in HTTP. The
World Wide Web established HTTP as
a near-universal protocol for accessing
persistent data objects using a global
namespace (commonly referred to as
the REST interface). This general use
of HTTP has created a community of
interoperation that has adopted it as a common spanning layer.
The original specification of the
HTTP protocol did not include any
requirement of consistency in the
objects returned in response to independent but identical HTTP requests.
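Later revisions of HTTP added optional validation metadata that lets caches check consistency explicitly. The following is a toy sketch of the ETag/If-None-Match exchange (RFC 9110 semantics, heavily simplified; the function and parameter names are illustrative, not a real server API):

```python
# Toy sketch of HTTP cache revalidation with an ETag validator.
# If the client's cached validator still matches the current object,
# the server may answer 304 Not Modified and omit the body entirely.
def serve(request_headers, current_etag, body):
    if request_headers.get("If-None-Match") == current_etag:
        return 304, {}, b""                      # cached copy is still good
    return 200, {"ETag": current_etag}, body     # full response, fresh validator
```

A client that caches the ETag from a 200 response can thereafter revalidate cheaply instead of re-fetching the object.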
However, in the common case where
(such as flood and prune), maintaining
the tree and responding to changes in
topology. Algorithms that maintain ac-
curate trees require persistent state at
intermediate nodes, which results in
the spanning layer being strengthened.
Historically, a number of protocols
have been proposed that perform well
in different environments, with particular bifurcation between groups
that are sparse in the subnets they
reach (with a low degree of branching
toward the leaves of the tree) and those
that are dense (with a higher degree of
branching toward the leaves). Because
different candidate protocols perform
better in different scenarios, multiple
implementation approaches have
been maintained by network providers
and selected by applications.
The resulting “fat” multicast spanning layer sacrifices simplicity and
generality by not offering a single universal solution. The best choice for a
particular situation may be unclear, or
may change over time. This has arguably contributed to the lack of continuous, universally available deployment of
IP multicast throughout the Internet.
Application builders have used overlay
multicast and repeated unicast as workarounds at the cost of redundant traffic.
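The tree maintenance discussed above can be illustrated with a toy flood-and-prune routine. This is a drastic simplification of what dense-mode protocols such as DVMRP actually do, and all names here are illustrative:

```python
from collections import deque

def flood_and_prune(adj, source, receivers):
    """Flood from `source` over an undirected graph `adj`, then prune
    branches with no receivers downstream. Returns the set of tree edges
    kept. Intermediate nodes must remember prune state -- the persistent
    state that strengthens the spanning layer."""
    # Flood phase: build a spanning tree by BFS from the source.
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    # Prune phase: keep only edges on a path from source to a receiver.
    kept = set()
    for r in receivers:
        node = r
        while parent.get(node) is not None:
            kept.add((parent[node], node))
            node = parent[node]
    return kept
```

In a sparse group, most of the flooded tree is pruned away, which is why sparse-mode protocols avoid flooding in the first place.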
Internet address translation.
Network Address Translation (NAT) is a
technique for allowing sharing of an IP
address by multiple endpoints within a
subnetwork. NAT uses DHCP to assign
local addresses to endpoints within a
“NATed” subnetwork that cannot in
general be reached by datagrams sent
from outside. The NAT-aware router
then translates local addresses to use
a single externally reachable source IP
address on TCP connections initiated
by clients within the NATed subnet.
UDP protocols can also be supported.
The ability of a router to interpose
itself between endpoints in a NATed
subnetwork and external servers allows the semantics of TCP connections
initiated from within the subnetwork
to match the specification of the non-NATed network. The most common
cases are connections between a Web
browser or other client within the network and external servers. However,
connections from outside the NATed
subnet to endpoints within it are not
possible without additional administrative intervention.
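The translation just described can be sketched as a per-connection mapping table. This is a toy model under simplifying assumptions (TCP only, a single external address, sequential port allocation); the class and method names are illustrative, not a real router's API:

```python
EXTERNAL_IP = "203.0.113.7"      # the single shared external address

class NatTable:
    """Toy NAT mapping: (local IP, local port) <-> external port."""
    def __init__(self):
        self.next_port = 40000
        self.out = {}            # (local_ip, local_port) -> external_port
        self.back = {}           # external_port -> (local_ip, local_port)

    def translate_outbound(self, local_ip, local_port):
        # Connections initiated from inside create a mapping on demand.
        key = (local_ip, local_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return EXTERNAL_IP, self.out[key]

    def translate_inbound(self, external_port):
        # Succeeds only for connections initiated from inside; unsolicited
        # traffic from outside finds no mapping and is dropped.
        return self.back.get(external_port)
```

The one-way population of this table is precisely the asymmetry at issue: inbound datagrams with no prior outbound mapping cannot be delivered.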