will be legal; there have been regulations in Europe since 2010 that force
carmakers to provide technical information to independent garages
and spare-parts manufacturers. It
is tempting to hope that a free/open
source approach might do some of
the heavy lifting, but many critical
components are proprietary, and
need specialist test equipment for
software development. We also need
incentives for minimalism rather
than toolchain bloat. We do not really
know how to allocate long-term ownership costs between the different
stakeholders so as to get the socially
optimal outcome, and we can expect
some serious policy arguments. But
whoever pays for it, dangerous bugs
have to be fixed.
Once software becomes pervasive
in devices that surround us, that are
online, and that can kill us, the software industry will have to come of
age. As security becomes ever more
about safety rather than just privacy,
we will have sharper policy debates
about surveillance, competition,
and consumer protection. The notion that software engineers are not
responsible for things that go wrong
will be put to rest for good, and we will
have to work out how to develop and
maintain code that will go on working dependably for decades in environments that change and evolve.
Ross Anderson (Ross.Anderson@cl.cam.ac.uk)
is Professor of Security Engineering at Cambridge
University, U.K. He is a Fellow of the Royal Society and
the Royal Academy of Engineering, and author of Security
Engineering—A Guide to Building Dependable Distributed Systems.

Copyright held by author.
key that would let government agents
crash their car?
There are opportunities too. Your
monthly upgrade to your car's software
will fix not just the latest format-string
vulnerability but safety flaws as well.
The move to self-driving cars will lead
to rapid innovation with real safety
consequences. At present, product recalls cost billions, and manufacturers
fight hard to avoid them; in the future,
software patches will provide a much
cheaper recall mechanism, so we can
remove the causes of many accidents
with software, just as we now fix dangerous road junctions physically.
But cars will still be more difficult
to upgrade than phones. A modern
car has dozens of processors, in everything from engine control and navigation through the entertainment
system to the seats, side mirrors, and
tire-pressure sensors. The manufacturer will have to coordinate and drive
the process of updating subsystems
and liaising with all the different suppliers. Its “lab car”—the rig that lets
test engineers make sure everything
works together—is already complex
and expensive, and the process is
about to get more complex still.
Sustainable Safety and Security
Perhaps the biggest challenge will be
durability. At present most vendors
won’t even patch a three-year-old
phone. Yet the average age of a U.K. car
at scrappage is 14.8 years, and rising
all the time; cars used to last 100,000
miles in the 1980s but now keep going for nearer 200,000. As the embedded carbon cost of a car is about equal
to that of the fuel it will burn over its
lifetime, a significant reduction in vehicle durability will be unacceptable
on environmental grounds.
As we build more complex artifacts, which last longer and are more
safety critical, the long-term maintenance cost may become the limiting factor. Two things follow. First,
software sustainability will be a big
research challenge for computer scientists. Second, it will also be a major
business opportunity for firms who
can cut the cost.
On the technical side, at present
it is hard to patch even five-year-old
software. The toolchain usually will
not compile on a modern platform,
leaving options such as keeping the
original development environment
of computers and test rigs, but not
connecting it to the Internet. Could
we develop on virtual platforms that
would support multiple versions?
That can be more difficult than it
initially appears. Toolchain upgrades
already break perfectly functional software. A bugbear of security developers
is that new compilers may realize that
the instructions you inserted to make
cryptographic algorithms execute in
constant time, or to zeroise cryptographic keys, do not affect the output.
So they optimize them away, leaving
your code suddenly open to side-channel attacks. (In separate work, Laurent
Simon, David Chisnall, and I have
worked on compiler annotations that
enable a security developer’s intent to
be made explicit.)
Carmakers currently think their liability for upgrades ends five years after the last car is sold. But their legal
obligation to provide spare parts lasts
for 10 years in Europe, and most
cars in Africa arrive secondhand
and are repaired for as long as possible
to keep them operable. Once security patches become
necessary for safety, who is going to
be writing the patches for today’s cars
in Africa in 25 years’ time?
This brings us to the business
side—to the question of who will
pay for it all. Markets will provide
part of the answer; insurance premiums are rising because even low-speed impacts now damage cameras, lidars, and ultrasonic sensors,
so that a damaged side mirror can
cost $1,000 rather than $100. The
firms that earn money from these
components have an incentive to
help maintain the software that
uses them. And part of the answer