OLTP working set known today. You
don’t need hundreds of thousands of
dollars of enterprise “hoo-ha” if a few
thousand dollars will do it.
With companies like Teradata and
Netezza you have to ask if doing all
these things to reorganize the data for
DSS (Decision Support Systems) is even
worth it.
MACHE CREEGER: For the poor IT managers out in Des Moines struggling
to get more out of their existing IT infrastructure, you’re saying that they
should really look at existing vendors
that supply flash caches?
STEVE KLEIMAN: No. I actually think
that flash caches are a temporary solution. If you think about the problem,
caches are great with disks because
there is a benefit to aggregation. If I
have a lot of disks on the network, I can
get a better level of performance than
I could from my own single disk dedicated to me because I have more arms
working for me.
With DRAM-based caches, I get a
benefit to aggregation because DRAM
is so expensive it’s hard to dedicate it
to any single node. Neither of these is
true of network-based flash caches.
You can only get a fraction of the performance of flash by sticking it out over
the network. I think flash migrates to
both sides, to the host and to the storage system. It doesn’t exist by itself in
the network.
MACHE CREEGER: Are there products or
architectures that people can take advantage of?
STEVE KLEIMAN: Sure. I think for the
next few years, cache will be an important thing. It’s an easy way to do things.
Put some SSDs (Solid State Disks) into
some of the caching products, or arrays,
that people have and it’s easy. There’ll
be a lot of people consuming SSDs. I’m
just talking about the long term.
MACHE CREEGER: This increases performance overall, but what about the
other issue: power consumption?
STEVE KLEIMAN: I’m a power consumption skeptic. People do all these architectures to power things down, but the
lowest-power disk is the one you don’t
own. Better you should get things into
their most compressed form. What
we’ve seen is that if you can remove all
the copies that are out in the storage
system and make it only one instance,
you can eliminate a lot of storage that
you would otherwise have to power.
When there are hundreds of copies of
the same set of executables, that’s a lot
of wasted storage.
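The single-instance idea can be made concrete with a content-addressed store: each unique block is kept once, keyed by its hash, so a hundred copies of the same executable occupy the space of one. This is a minimal sketch under assumed names and sizes, not any vendor’s implementation.

```python
# Minimal sketch of block-level de-duplication (illustrative only):
# each unique block is stored once, keyed by its content hash;
# files are just ordered lists of block references.
import hashlib

class DedupStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # content hash -> the single physical copy
        self.files = {}    # file name -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # store only if unseen
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
exe = b"A" * 4096 + b"B" * 4096           # one 8KB "executable" image
for i in range(100):                      # written by 100 machines
    store.write(f"host{i}/app.bin", exe)

logical = 100 * len(exe)                  # 819,200 bytes as clients see it
physical = store.physical_bytes()         # 8,192 bytes actually stored
```

The 100x reduction in physical storage is exactly the power win Kleiman describes: every eliminated copy is a block you never have to spin a disk for.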
MARGO SELTZER: You’re absolutely
right, getting rid of duplication helps
reduce power. But that’s not inconsistent; it’s a different kind of power management. If you look at the cost of storage, it’s not just the initial cost but also
the long-term costs, such as management and power. Power is a huge fraction, and de-duplication is one way to
cut that down. Any kind of lower-power
device, of which flash memory is one
example, is going to become increasingly attractive as power becomes increasingly expensive.
STEVE KLEIMAN: I agree. Flash can
handle a lot of the very expensive,
high-power workloads—the heavy
random I/Os. But I am working on the
assumption that disks still exist. On
a dollar-per-gigabyte basis, there’s at
least a 5-to-1 ratio between flash and
disks, long term.
MARGO SELTZER: If it costs five times
more to buy a flash disk than a spinning disk, how long do I have to use the
flash disk before I’ve made up that 5X
cost in power savings over spinning
disks?
STEVE KLEIMAN: It’s a fair point. Flash
consumes very little power when you
are not accessing it. Given the way electricity costs are rising, the cost of power
and cooling over a five-year life for even
a “fat” drive can approach the raw cost
of the drive. That’s still not 5X. The disk
folks are working on lower-power operating and idle modes that can cut the
power by half or more without adding
more than a few seconds of latency to access. That brings the lifetime power cost
down to about 50% of the raw cost of the drive.
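The payback arithmetic can be checked with a back-of-the-envelope sketch. The prices, wattages, and electricity rate below are illustrative assumptions, not figures from the discussion; the point is only the shape of the comparison.

```python
# Back-of-the-envelope check on the 5x premium (all figures are
# illustrative assumptions, not numbers from the discussion).
ELECTRICITY = 0.20        # $/kWh, including cooling overhead (assumed)
YEARS = 5

def lifetime_cost(capital_per_tb, watts_per_tb):
    """Capital cost plus power-and-cooling cost for one terabyte."""
    kwh = watts_per_tb / 1000 * 24 * 365 * YEARS
    return capital_per_tb, kwh * ELECTRICITY

# Disk: $100/TB drawing 10W; flash: 5x the price, ~1W (all assumed).
disk_cap, disk_power = lifetime_cost(100.0, 10.0)    # power ≈ $87.60
flash_cap, flash_power = lifetime_cost(500.0, 1.0)   # power ≈ $8.76
```

At these assumed figures, five years of power and cooling (about $88) does roughly match the disk’s raw cost, as Kleiman says, yet the savings never close the 5X capital gap: the disk’s total is still far below the flash total.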
Look at tape-based migration systems. The penalty for making a bad decision is really bad, because you have
to go find a tape, stick it in the drive,
and wait a minute or two. Spinning
up a disk or set of disks is almost the
same since it can take longer than 30
seconds. Generally those tape systems
were successful where it was expected
behavior that the time to first data access might be a minute. Obviously, the
classic example is backup and restore,
and that’s where we see spin-down
mostly used today.
If you want to apply these ideas to
general-purpose, so-called “unstructured” data, where it’s difficult to let
people know that accessing this particular data set might have a significant delay, it’s hard to get good results.
By the time the required disks have all
spun up, the person who tried to access
an old project file or follow a search hit
is on the phone to IT. With the lower-power operating modes, the time to
first access is reasonable and the power
savings is significant. By the way, much
of the growth in data over the past few
years has been in unstructured data.
ERIK RIEDEL: That’s where the key solutions are going to come from. Look at
what the EPA is doing with their recent
proposals for Energy Star in the data
center. They address a whole series of
areas where you need to think about
power. They have a section about the
power management features you have
in your device. The way that it’s likely to
be written is you can get an Energy Star
label if you do two of the following five
things, choosing among options like
de-duplication, thin provisioning, and so on.
But if you look at the core part of the
spec, there’s a section where they’re
focused on idle power. Idle power is
where we have a big problem in storage. The CPU folks can idle the CPU.
If there is nothing to do then it goes
idle. The problem is storage systems
still have to store the data and be responsive when a data request comes
in. That means time-to-data and time-to-ready are important. In those cases
people really do need to know about
their data. The best idle power for storage systems is to turn the whole thing
off, but that doesn’t give people access
to their data.
We’ve never been really careful because we haven’t had to be. You could
just keep spending the watts and
throwing in more equipment. When
you start asking “What data am I actually using and how am I using it?” you
have to do prediction.
STEVE KLEIMAN: My point is that there
is so much low-hanging fruit in de-duplication, compression, and lower-power operating modes before you
have to turn the disk off that we could
spend the next four or five years just
doing that and save much more energy
than spinning disks down would.
ERIK RIEDEL: We are going to have to