Serverless allows the seller to quickly grab additional computing resources in the small increments necessary for each ticket sale, then just as quickly shut them down.

It is also a good way to handle the
demands of Internet of Things devices, Brenner says. Such devices are usually inexpensive, so they have simple
hardware, which requires minimal
software. Often their job is to take a
sensor reading or capture an image
and upload it to the cloud at intervals
ranging from minutes to hours. Such
short, sporadic activity fits the small,
discrete functions of serverless.
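The kind of small, discrete function described above can be sketched in a few lines. This is a minimal illustration, not any specific cloud provider's API: the names `read_sensor`, `upload`, and `handler` are hypothetical, and a list stands in for cloud storage.

```python
# Hypothetical sketch of a small serverless-style IoT function: one short
# invocation that takes a sensor reading and "uploads" it to the cloud.
import json
import time

def read_sensor():
    # Stand-in for real hardware: a fixed temperature reading.
    return {"temperature_c": 21.5}

def upload(record, store):
    # Stand-in for a cloud storage call; appends to an in-memory list.
    store.append(record)

def handler(event, store):
    # One short-lived invocation: read, tag, timestamp, upload, return status.
    reading = read_sensor()
    reading["device_id"] = event.get("device_id", "unknown")
    reading["ts"] = time.time()
    upload(reading, store)
    return {"statusCode": 200, "body": json.dumps({"uploaded": True})}

store = []
result = handler({"device_id": "sensor-7"}, store)
```

Because the function does one small job and exits, the platform can run it on demand and charge only for the moments it is active.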
Serverless computing is based on another, older concept: containerization.
Containers are simplified versions of
virtual machines, providing an environment inside which a piece of software
can run. Brenner calls a container “a
sandbox for software” that does not
give the user access to the computer’s
hardware. Launching a virtual machine
means loading in the operating system
and all the libraries, and the process
can take minutes; a container can be
launched in less than a second, and the
code copied into it in less than a second, so it’s up and running quickly.
Containers allow services such as search and mail to run as quickly as they do, says Aparna Sinha, product management lead at Google Cloud. Google originally developed a container management system called Borg back in the early 2000s. Later, the company developed Kubernetes, an open source container orchestration system that grew out of Borg. In 2013 another company, Docker, created its own open source container system for general use.
“Every large-scale operator in industry that’s deploying web services,
they’re using some kind of containerization technology,” says Remzi Arpaci-Dusseau, a professor of computer
science who studies distributed systems at the University of Wisconsin in
Madison. “What’s nice about Docker
and the more-general open containers
is they’re more accessible to everybody,
because they’re free and open source.”
Every time a function is triggered, the system creates a container in which to run it. While running the function may take only microseconds, launching the container takes a second or two. For many applications, that is fast enough, especially compared to creating a virtual machine. However, in some cases, Kanso says, even a second or two can be too long to wait. For instance, an app dealing with real-time stock market transactions, which can take place on the order of milliseconds, could not use containers. If an app deals with a series of events, each of which takes a couple of seconds, latency will keep increasing. Researchers developing container technology will have to figure out how to reduce the launch time, he says. “It appears to be one of the next critical questions.”
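The way launch latency accumulates can be illustrated with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not measurements: a cold start of roughly 1.5 seconds against a few milliseconds of actual work.

```python
# Illustration of how container launch time can dominate end-to-end latency
# in a chain of serverless events. Both constants are assumed, not measured.
COLD_START_S = 1.5   # assumed container launch time
WORK_S = 0.002       # assumed time to actually run the function

def end_to_end_latency(num_chained_events, cold_start=COLD_START_S, work=WORK_S):
    """Total latency when every event in the chain launches a fresh container."""
    return num_chained_events * (cold_start + work)

# Five chained events: over 7.5 seconds total, nearly all of it launch overhead.
total = end_to_end_latency(5)
overhead_fraction = (5 * COLD_START_S) / total
```

Under these assumptions, more than 99% of the end-to-end delay is container launch time, which is why shaving cold starts down to a millisecond or two matters so much.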
Arpaci-Dusseau developed OpenLambda, a research platform to tackle questions about serverless, along with several colleagues, including his former Ph.D. student Tyler Harter, now a software engineer at Microsoft. Getting containers to launch faster will be a challenge, they say. “If you really want to get down to, say, being able to start containers in 1 or 2 ms, we’re going to have to make changes to Linux itself,” Harter says.
Systems using long-running programs take some time to initialize, but
then improve by caching pieces of data
near the processor. It is not clear how
to use caching in serverless, where
small operations run briefly and may
be spread out on different servers, says
Arpaci-Dusseau, but researchers are
trying to figure it out.
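The contrast Arpaci-Dusseau describes can be sketched with a toy example. In a long-running process, memoizing an expensive computation means it runs once and later requests hit the cache; a serverless-style invocation starts each time with an empty cache, so the work repeats. The functions and counters here are hypothetical stand-ins.

```python
# Sketch of why caching pays off in long-running servers but is hard to
# exploit in serverless, where each invocation may get a fresh container.
from functools import lru_cache

expensive_calls = 0

@lru_cache(maxsize=None)
def expensive_lookup(key):
    global expensive_calls
    expensive_calls += 1          # counts how often the real work runs
    return key.upper()            # stand-in for an expensive computation

# Long-running process: two requests for the same key, one real computation.
expensive_lookup("user-42")
expensive_lookup("user-42")
long_running_calls = expensive_calls

# Serverless-style: each "invocation" begins with an empty cache, so the
# expensive work repeats on every request.
serverless_calls = 0
for _ in range(2):
    cache = {}                    # fresh cache in every new container
    if "user-42" not in cache:
        serverless_calls += 1
        cache["user-42"] = "USER-42"
```

The open research question is how to get the benefit of the first pattern when the code's lifetime is measured in milliseconds and spread across machines.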
Another challenge for serverless, Hockin says, is that many people want their apps to work with their databases. While containers work easily with stateless workloads, which do not retain data, a database has to maintain its state over time, which conflicts with the here-and-gone nature of containers. Google has developed a method to capture the requirements of a stateful workload and turn them into application programming interfaces that allow users to manage their databases in a serverless setting.
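The stateless-versus-stateful tension can be sketched as follows. Since the container may vanish between invocations, a handler cannot keep state in local variables; anything that must survive has to live in an external store. In this hypothetical sketch, a plain dict stands in for that external database.

```python
# Sketch of the stateless/stateful tension: the container running `handler`
# is assumed to disappear after each call, so only the external store
# (here, a dict standing in for a database) carries state forward.
external_db = {}

def handler(event, db):
    # A fresh container runs this each time; nothing local survives, so the
    # running count must be read from and written back to the external store.
    count = db.get(event["user"], 0) + 1
    db[event["user"]] = count
    return count

first = handler({"user": "alice"}, external_db)   # one "container"
second = handler({"user": "alice"}, external_db)  # a brand-new "container"
```

The count climbs across invocations only because it lives outside the container, which is the essence of making stateful workloads manageable from serverless code.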
Serverless is also not optimal for
deep learning applications, or anything that requires large amounts of
data or is designed to run for a long
time, Brenner says. Hockin, though,
believes containers and serverless
computing can be useful for just about
any type of application. “I think there
will be coverage for every major class
of application,” he says. “If there’s
people who want to do things, the
technology will adapt.” If, for instance,
some applications need lower latency
than is currently available, researchers
will figure out how to provide that.
“We may not be there fully yet, but I
think that if there’s people who want
to do it, they will find a way to do it,”
Hockin says. “We’ll make anything
work that they’re interested in.”
Further Reading

Hendrickson, S., Sturdevant, S., Harter, T.,
Venkataramani, V., Arpaci-Dusseau, A.C.,
and Arpaci-Dusseau, R.H.
Serverless Computation with OpenLambda,
Proceedings of the 8th USENIX Conference
on Hot Topics in Cloud Computing, 2016.
Burns, B., Grant, B., Oppenheimer, D.,
Brewer, E., and Wilkes, J.
Borg, Omega, and Kubernetes,
Communications of the ACM, 59, 2016.
Baldini, I., Castro, P., Chang, K., Cheng, P.,
Fink, S., Ishakian, V., Mitchell, N.,
Muthusamy, V., Rabbah, R., Slominski, A.,
and Suter, P.
Serverless Computing: Current Trends
and Open Problems, arXiv, 2017.
McGrath, G. and Brenner, P.R.
Serverless Computing: Design,
Implementation, and Performance,
IEEE 37th International Conference
on Distributed Computing Systems, 2017.
Why Serverless Computing?
Neil Savage is a science and technology writer based in
Lowell, MA, USA.
© 2018 ACM 0001-0782/18/2 $15.00