can have as much impact as contention
on a shared host, or more. Consider
the case where a rogue VM overloads
shared storage: hundreds or thousands
of VMs will be slowed down.
Functionality for isolating and managing contention on networking and storage elements is only now reaching maturity and entering the mainstream virtualization scene. Designing a virtualization technology stack that can take advantage of such features requires engineering work and a good amount of networking and storage expertise on the part of the enterprise customer. Some enterprises do that, combining exotic network adapters that provide the right cocktail of I/O virtualization in hardware with custom rack, storage, and network designs. Others opt for the riskier but easier route of doing nothing special, hoping that system administrators will cope with contention issues as they arise.
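As a purely illustrative sketch of what end-to-end contention management involves, consider a control loop that spots a noisy neighbor from per-VM I/O statistics and computes a throttle for it. The VM names, the 50 percent share threshold, and the 1,000-IOPS fair-share budget are all assumptions made for the sketch, not any product's actual policy:

```python
# Sketch: detect a "rogue" VM dominating shared-storage IOPS and compute
# a throttle for it. Names, thresholds, and policy are illustrative.

def find_noisy_neighbors(iops_by_vm, share_limit=0.5):
    """Return VMs consuming more than share_limit of total observed IOPS."""
    total = sum(iops_by_vm.values())
    if total == 0:
        return []
    return [vm for vm, iops in iops_by_vm.items() if iops / total > share_limit]

def compute_throttle(iops_by_vm, vm, fair_share=1000):
    """Cap the offender at a fair-share IOPS budget (hypothetical policy)."""
    return min(iops_by_vm[vm], fair_share)

# Example: one VM overloads the shared array while the rest idle along.
stats = {"vm-rogue": 48000, "vm-app1": 300, "vm-app2": 250}
offenders = find_noisy_neighbors(stats)            # only vm-rogue qualifies
throttles = {vm: compute_throttle(stats, vm) for vm in offenders}
```

In a real deployment the computed throttle would feed an enforcement mechanism at the hypervisor or storage layer rather than a Python dictionary; the point is that detection and remediation happen end to end, not only on the shared host.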
GUIs. Graphical user interfaces work well when managing an email inbox, a data folder, or even the desktop of a personal computer. It is well understood in the human-computer interaction research community that GUIs work well for handling a relatively small number of elements. If that number gets large, GUIs can overload the user, which often results in poor decision making [7]. Agents and automation have been proposed as solutions to reduce information overload [6].
Virtualization solutions tend to
come with GUI-based management
frameworks. That works well for managing 100 VMs, but it breaks down in
an enterprise with 100,000 VMs. What
is really needed is more intelligence
and automation; if the storage of a virtualized server is disconnected, automatically reconnecting it is a lot more
effective than displaying a little yellow
triangle with an exclamation mark in
a GUI that contains thousands of elements. What is also needed is interoperability with enterprise backbones
and other systems, as mentioned previously.
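The reconnect-instead-of-alert idea can be sketched as a simple remediation loop. Everything here is hypothetical, not a real management API: the VM records, the injected `reconnect` hook, and the policy of escalating only what automation could not fix:

```python
# Sketch: prefer automatic remediation over surfacing a warning icon in a
# GUI with thousands of elements. All names and policies are illustrative.

def remediate_storage(vms, reconnect):
    """Try to reconnect each VM with disconnected storage; return the few
    that still need human attention (the only ones worth alerting on)."""
    needs_attention = []
    for vm in vms:
        if vm["storage"] == "disconnected":
            if reconnect(vm["name"]):               # automated first response
                vm["storage"] = "connected"
            else:
                needs_attention.append(vm["name"])  # escalate only the residue
    return needs_attention

# Example: a reconnect hook that succeeds for all but one VM.
fleet = [{"name": f"vm-{i}", "storage": "disconnected"} for i in range(5)]
escalated = remediate_storage(fleet, reconnect=lambda name: name != "vm-3")
# Four VMs are fixed without a human; only vm-3 is escalated.
```

At a scale of 100,000 VMs, the interesting output is the short escalation list, not a dashboard of 100,000 status icons.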
In addition, administrators who are accustomed to the piecemeal systems management of the previrtualization era—managing a server here and a storage element there—will discover that they have to adapt. Virtualization brings unprecedented integration and hard dependencies among components—a storage outage could mean that thousands of users cannot use their desktops. Enterprises need to ensure that their operational teams across all silos are comfortable with managing a massively interconnected large-scale system, rather than a collection of individual and independent components, without GUIs.
Virtualization holds promise as a solution for many challenging problems. It
can help reduce infrastructure costs,
delay data-center build-outs, improve
our ability to respond to fast-moving
business needs, allow a massive-scale
infrastructure to be managed in a more
flexible and automated way, and even
help reduce carbon emissions. Expectations are running high.
Can virtualization deliver? It absolutely can, but not out of the box. For
virtualization to deliver on its promise,
both vendors and enterprises need to
adapt in a number of ways. Vendors
must place strategic emphasis on enterprise requirements for scale, ensuring that their products can gracefully handle managing hundreds of
thousands or even millions of VMs.
Public cloud service providers do this
very successfully. Standardization,
automation, and integration are key;
eye-pleasing GUIs are less important.
Solutions that help manage resource
contention end to end, rather than only
on the shared hosts themselves, will
significantly simplify the adoption of virtualization. In addition, the industry's ecosystem needs to consider the fundamental redesign of components that perform suboptimally with virtualization, and it must provide better ways to collect, aggregate, and interpret logs and performance data from disparate sources.
Enterprises that decide to virtualize strategically and at a large scale need to be prepared for the substantial engineering investment that will be required to achieve the desired levels of scalability, interoperability, and operational uniformity. The alternative is increased operational complexity and cost. In addition, enterprises that are serious about virtualization need a way to break the old dividing lines, foster cross-silo collaboration, and instill an end-to-end mentality in their staff. Controls to prevent VM sprawl are key, and new processes and policies for change management are needed, as virtualization multiplies the effect of changes that would previously be of limited impact.
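One form such a sprawl control might take, sketched purely for illustration, is a periodic check that flags VMs idle past a lease period for review and reclamation. The inventory schema and the 90-day idle lease are assumptions, not a policy from the article:

```python
# Sketch: a minimal VM-sprawl check. The inventory schema and the 90-day
# idle lease are illustrative assumptions, not a real product's API.
from datetime import date, timedelta

def sprawl_candidates(inventory, today, idle_lease_days=90):
    """Return names of VMs idle longer than the lease, oldest first."""
    cutoff = today - timedelta(days=idle_lease_days)
    idle = [vm for vm in inventory if vm["last_active"] < cutoff]
    return [vm["name"] for vm in sorted(idle, key=lambda vm: vm["last_active"])]

inventory = [
    {"name": "vm-build", "last_active": date(2010, 1, 5)},   # long idle
    {"name": "vm-web",   "last_active": date(2010, 6, 1)},   # recently active
    {"name": "vm-test",  "last_active": date(2010, 2, 20)},  # long idle
]
flagged = sprawl_candidates(inventory, today=date(2010, 6, 15))
# flagged lists vm-build and vm-test, oldest first; vm-web survives.
```

Because a cheap virtual machine is far easier to request than a physical server, a check like this, tied into change-management workflow, is what keeps the population of forgotten VMs from quietly eroding the cost savings.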
Many thanks to Mostafa Afifi, Neil Allen, Rob Dunn, Chris Edmonds, Robbie Eichberger, Anthony Golia, Allison Gorman Nachtigal, and Martin Vazquez for their invaluable feedback and suggestions. I am also grateful to John Stanik and the ACM Queue Editorial Board for their feedback and guidance in completing this article.
1. Bailey, M., Eastwood, M., Gillen, A., Gupta, D. Server virtualization market forecast and analysis, 2005–2010. IDC, 2006.
2. Brodkin, J. Virtual server sprawl kills cost savings, experts warn. Network World (Dec. 5, 2008).
3. Goldberg, R.P. Survey of virtual machine research. IEEE Computer 7, 6 (1974), 34–45.
4. Humphreys, J. Worldwide virtual machine software 2005 vendor shares. IDC, 2005.
5. IDC. Virtualization market accelerates out of the recession as users adopt "Virtualize First" mentality.
6. Maes, P. Agents that reduce work and information overload. Commun. ACM 37, 7 (1994), 30–40.
7. Schwartz, B. The Paradox of Choice. HarperCollins, New York, 2004.
Evangelos Kotsovinos is a vice president at Morgan
Stanley, where he leads virtualization and cloud-computing engineering. His areas of interest include
massive-scale provisioning, predictive monitoring,
scalable storage for virtualization, and operational tooling
for efficiently managing a global cloud. He also serves
as the chief strategy officer at Virtual Trip, an ecosystem
of dynamic start-up companies, and is on the Board
of Directors of NewCred Ltd. Previously, Kotsovinos
was a senior research scientist at T-Labs, where he
helped develop a cloud-computing R&D project into a
VC-funded Internet start-up. A pioneer in the field of
cloud computing, he led the XenoServers project, which
produced one of the first cloud-computing blueprints.