the world’s best talent to look at how
to extend it and build solutions beyond our core competence. This is a
very powerful model.
From an architectural perspective,
I am absolutely passionate about the
fact that virtualization should be open
because then you get this very powerful model of innovation.
I have an ongoing discussion with
one of the major analyst organizations
because, in their minds, virtualization
is shaped the way VMware’s products are
shaped today. They think of it as a
thing called ESX Server. So if VMware’s
ESX Server is viewed as a fully integrated car, then Xen should be viewed
as a single engine. I would assert that
because we don’t know what virtualization will look like in five years, you
do not want to bind your consumption of virtualization to a particular car
right now. As technology innovation
occurs, virtualization will take different shapes. For example, the storage
industry is innovating rapidly in virtualization and VMware cannot take
advantage of it with their (current)
closed architecture. Xen is open and
can adapt: Xen runs on a 4,096-CPU supercomputer from SGI and it runs on
a PC. That is an engine story; it is not
a car story.
It’s really critical we have an architecture that allows independent innovation around the components of
virtualization. Virtualization is just a
technology for forcing separation as
far down the stack as you can—on the
server, separated by the hypervisor, in
the storage system—and then let’s see
how things build. I’m not in favor of
any architecture which precludes innovation from a huge ecosystem.
STEVE HERROD: I actually agree on
several parts. Especially for the middle
market, the number-one thing that
people need is something easy to use. I
think there’s a reasonable middle road
which can provide a very nice framework or a common way of doing things,
but also tie into the partner ecosystem. Microsoft has done this
very well for a long time.
STEVE BOURNE: These bindings may
be ABIs or they may not be, but they
sound like the analogue of the ABIs.
ABIs are a pain in the neck. So are these
bindings a pain in the neck?
SIMON CROSBY: Bindings are a very hot
area. The hottest one for us right now
is that a VM you run on XenServer will
run on Microsoft Hyper-V. This is a virtual hardware interface, where, when
you move a virtual machine from one
product to the other, the VM will still
think it has the same hardware underneath it.
Right now if you take a VM from VMware and try to run it on Citrix, you will
get a blue screen. It’s just the same as if
you took a hard disk out of a server and
put it in another server and expected
the OS to boot correctly. VMware and
XenSource actually had discussions
on how to design a common standard
hardware ABI, but we couldn’t get other major vendors to play.
If we actually were able to define
an industry standard virtual hardware
ABI, the first guys who’d try to break
it would be Intel and AMD. Neither of
those companies can afford for that
line to be drawn, because it would
render all their differentiation meaningless, turning their products into commodities. Even if you
move everything into the hardware,
the ABIs would still be different.
In the ABI discussion there are two
things that count. There’s “Will the VM
just boot and run?” Then it’s “If the
VM is up and running can I manage it
using anybody’s management tool?”
I think we’re all in the same position
on standards-based management interfaces—DMTF (Distributed Management Task Force) is doing the job.
MACHE CREEGER: Let’s take a moment
to summarize.
Server consolidation should not be
the focus of VM deployment. One should
architect the data center around the
strengths of virtualization, such as
availability and access to clouds.
An IT architect should keep the
operating application environment
in mind as he makes his VM choices.
Each of the VM vendors has particular
strengths and one should plan deployments around those strengths.
In discussing cloud computing we
said the kind of expertise resident in
large enterprises may not be available to the SMB. Virtualization will
enable SMBs and others to outsource
data center operations rather than requiring investment in large, in-house
facilities. They may be more limited
in the types of application services
available, but things will be a lot more
cost effective with a lot more flexibility than would otherwise be available.
Using their in-house expertise, large
enterprises will build data centers
with excess capacity, and either sell that
excess, much as an independent power generator sells power, or hold it in reserve, depending on their own need for
quick access to capacity.
GUSTAV: One point we talked
around, which we all agree on,
is that server administrators will have
to learn a lot more about storage and
a lot more about networks than they
were ever required to do before. We are
back to the limiting constraint problem. The limiting constraint used to
be the number of servers you had and,
given their configuration, what they could do.
Now with virtualized servers the limiting constraint has changed.
With cheap gigabit Ethernet
switches a single box only consumes
60 to 100 megabits. Consolidate that
box and three others into a single box
supporting four servers and suddenly
I’m well past the 100 megabit limit. If
I start pushing toward the theoretical
limits with my CPU load, which is 40-to-1 at an average utilization of 2%, suddenly I’ve
massively exceeded GigE. There is no
free lunch. Virtualization pushes the
limiting constraint to either the network or to storage; it’s one of those two
things. When we look at places that
screw up virtualization, they generally
overconsolidate CPUs, pushing great
demands on network and/or storage.
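The arithmetic behind this limiting-constraint argument can be checked in a few lines. This is only a sketch; the per-server figures (60 to 100 Mbps of network demand, a single GigE link) are the ones quoted above, and the function is purely illustrative.

```python
# Back-of-the-envelope consolidation arithmetic using the figures from the
# discussion above (60-100 Mbps of network demand per server, one GigE link).

def aggregate_mbps(servers, per_server_mbps):
    """Naive aggregate network demand after consolidating onto one box."""
    return servers * per_server_mbps

GIGE_MBPS = 1000

# Four servers consolidated onto one host: already past the old 100 Mbps limit.
print(aggregate_mbps(4, 60))                 # 240
# A 40-to-1 consolidation ratio (2% average CPU per server):
print(aggregate_mbps(40, 60))                # 2400, beyond one GigE link
print(aggregate_mbps(40, 100) > GIGE_MBPS)   # True
```

Even at the low end of the quoted range, a 40-to-1 consolidation multiplies per-server demand well past what a single gigabit link can carry, which is exactly why the constraint moves to the network or the storage system.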
You shouldn’t tell your management
that your target is 80% CPU utilization.
Your target should be to utilize the box
most effectively. When I have to start
buying really, really high-end storage
to make consolidation onto this box work, I have
a really big problem. Set your target
right. Think of it like cycle scavenging,
not achieving maximum utilization.
When you start by saying “I want 100%
CPU utilization,” you start spending
money in storage and networks to get
there that you never needed to spend.
That is a very bad bargain.
Mache Creeger (mache@creeger.com) is the principal of
Emergent Technology Associates, marketing and business
development consultants.