is diverted from its intended destination, usually ending up in a black hole
from which there is no return. Since
that can be achieved by accident, who
can say what might be achievable with
malice aforethought?
The Border Gateway Protocol has
been with us for a long time—the
latest version (version 4) is nearly 18
years old. So why have these issues not
been addressed? First, what is remarkable about BGP is not that it is imperfect. Rather, given the exponential
growth of the Internet since BGP was
designed, what is truly remarkable is
that it works at all. Second, it turns out
that the validity of routing information is difficult to establish. Third, a
more secure BGP may be either more
brittle or less flexible, or both. This
would make it more vulnerable or less
able to adapt in a crisis, and hence not
necessarily an improvement.
Making BGP more secure is difficult and the extra equipment and
running costs are significant. The most
recent effort, BGPSEC, is a work in progress in the IETF.a Assuming BGPSEC
is adopted, the IETF working group
envisages a five- to 10-year timescale
for deployment, and is relying on Moore's Law to reduce the cost of the required extra memory capacity and processing power in the meantime. Even so, as currently defined
BGPSEC does not address the issue of route leaks.

a The Internet Engineering Task Force—see http://www.ietf.org for more about the IETF in general, and http://datatracker.ietf.org/wg/sidr/charter/ for more about the BGPSEC initiative in particular.
The insecurity of BGP leads to failure at the network layer. But securing
the Internet is not solely a technology
issue: the operational layer detects and
responds quickly to route leaks and
the like. The China Telecom incident
that occurred in April 2010, in which
approximately 15% of Internet addresses were disrupted for perhaps 18
minutes,b is not only an example of the
insecurity of the technology but also a
testament to the effectiveness of the
operational layer.
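Such misdirection needs no exotic attack; it falls out of ordinary longest-prefix forwarding. The following sketch is a simplified model in Python, using documentation prefixes and private AS numbers as purely illustrative values, not anything a real router runs: once a more specific prefix is announced, by accident or by design, it attracts the traffic.

import ipaddress

# Hypothetical routing table: prefix -> who announced it (illustrative values).
routes = {
    ipaddress.ip_network("198.51.100.0/22"): "legitimate origin (AS64500)",
}

def best_route(dest, table):
    """Longest-prefix match: the most specific covering prefix wins."""
    covering = [p for p in table if dest in p]
    return max(covering, key=lambda p: p.prefixlen, default=None)

victim = ipaddress.ip_address("198.51.100.7")
print(routes[best_route(victim, routes)])   # -> legitimate origin (AS64500)

# A leaked or hijacked announcement of a more specific /24 immediately
# attracts the traffic, whether or not the announcer can deliver it.
routes[ipaddress.ip_network("198.51.100.0/24")] = "leaking network (AS64511)"
print(routes[best_route(victim, routes)])   # -> leaking network (AS64511)

The same mechanism is why an announcement leaked in one network can divert traffic far beyond it within minutes.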
The continuing insecurity of BGP
can be seen as a failure of the commercial and economic layer. The cost to each network, in equipment and in effort, of a more secure form of BGP would be high. What is worse,
the benefit to each network would be
limited until or unless others did the
same. A more secure Internet would
be a common good, but the incentive
to create or maintain that common
good is missing. On the other hand,
given how effectively the operational
layer compensates for the insecurity
of BGP, perhaps the commercial/economic system has not failed. Perhaps
the invisible hand has guided us to
a cost-effective balance between security and the ability to deal with the
occasional failure. Intervention to
increase security would be a failure
of the policy layer if, in the name of
safety, unnecessary costs were forced
on the system.
The great strength of the Internet is
that it harnesses the independent efforts of tens of thousands of separate
organizations. They cannot all connect
directly to each other, so the Internet
is a vast mesh of connections over and
above the component networks, moderated by BGP. Each network monitors
and manages its own small part of the
system, but nobody has oversight over
the entire mesh—there is no network operations center (NOC) for the Internet as a whole! A key function
of a NOC is to monitor network performance; but for the Internet there
is little data on how well it performs
normally, let alone how well it actually
copes with failure.
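To make the data gap concrete, the sketch below is a toy version of the kind of measurement a NOC automates: timing TCP connections to a set of endpoints and recording reachability and latency. The target hosts, ports, and timeout are illustrative assumptions; monitoring the Internet as a whole would mean running something like this continuously, from many vantage points, at vastly greater scale.

import socket
import time

# Illustrative endpoints only; a real system would probe many thousands.
TARGETS = [("www.ietf.org", 443), ("www.example.com", 80)]

def probe(host, port, timeout=3.0):
    """Time a TCP connect; return (reachable, elapsed seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

for host, port in TARGETS:
    ok, elapsed = probe(host, port)
    status = "up" if ok else "unreachable"
    print(f"{host}:{port} {status} ({elapsed * 1000:.1f} ms)")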
Another key function of a NOC is to
ensure there is spare capacity in the
network, available for use when things
go wrong; but for the Internet we do
not know how much spare capacity
there is, or where it is—so we cannot
even guess whether there is enough capacity in the right places, should something unprecedented happen.
Nobody would advocate a central
committee for the Internet. However,
the network’s reliability is of vital interest, so it is remarkable how little we
know about how well it works. Hard information is difficult to obtain. Much
is anecdotal and incomplete, while
some is speculative or simply apocryphal. The report The Internet Under Crisis Conditions: Learning from September 11,2 is an exception and a model of clarity; the authors warn: “…While the
committee is confident in its assessment
that the events of September 11 had little
effect on the Internet as a whole…the
precision with which analysts can measure the impact of such events is limited
by a lack of relevant data.”
Effects of Internet Vulnerabilities
What is the realistic, likely impact of
Internet vulnerabilities? What can be
done to cost-effectively mitigate the
risks? Is the longevity of these issues
a symptom of a market failure, and if
so, should government act to secure
this critical infrastructure? These are
all good questions but, sadly, we do not
have good answers and we are hampered by a lack of good data.
Monitoring Internet performance
could provide the relevant data. Unfortunately, it would be difficult to implement such monitoring: the sheer
scale of the Internet is an obvious
problem, and then each layer has its
own issues. First, there are many technical problems: what data should be
collected, where and how to collect it,
how to store it in a usable form, how
to process it to extract useful information, and so on. Some of these are engineering issues; others are research
topics. Second, such a system would
be a common good, with the usual issues of incentives to create and maintain—or, more bluntly, who would
pay for it? Third, Internet networks
compete with each other and some
of the data would be deemed commercially sensitive—so, there could
be some self-interested resistance to
creating the common good. Fourth,
there is a fine line between monitoring and control, and a fine line be-