Early SSDs had performance inconsistency, particularly under excessive stress, because garbage collection either ran out of blocks or used channel/die bandwidth. SSDs today overprovision the physical NAND flash to ensure there are enough free blocks to prevent performance penalties from garbage collection. Most consumer SSDs overprovision less than 5% extra NAND flash, whereas enterprise-class SSDs overprovision up to 50% for performance-critical applications. SSD benchmarks now take into account the impact of garbage collection and require preconditioning the FTL before taking performance measurements.
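The effect of overprovisioning on usable capacity is simple arithmetic; the following sketch uses illustrative figures, not numbers from any specific drive:

```python
# Overprovisioning reserves physical NAND beyond the advertised
# capacity so garbage collection always has free blocks to work with.
# All capacities below are hypothetical examples.

def overprovision_pct(physical_gib, advertised_gib):
    """Spare area as a percentage of advertised capacity."""
    return 100.0 * (physical_gib - advertised_gib) / advertised_gib

# A consumer-style drive: 256 GiB of raw NAND sold as 250 GiB.
consumer = overprovision_pct(256, 250)

# An enterprise-style drive: 768 GiB of raw NAND sold as 512 GiB.
enterprise = overprovision_pct(768, 512)

print(f"consumer: {consumer:.1f}%, enterprise: {enterprise:.1f}%")
# consumer: 2.4%, enterprise: 50.0%
```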
FTL. The FTL uses a number of methods to optimize performance despite
the physical challenges of NAND flash:
˲ The firmware tracks the number of
times each NAND block has been programmed and erased, and it spreads
writes evenly across all NAND blocks in
the SSD, increasing SSD longevity.
˲ A certain number of NAND blocks in a die are defective from the die-manufacturing process. Also, blocks can go bad during SSD operation. The FTL must track these bad blocks and substitute good blocks.
˲ NAND blocks will be marked for garbage collection based on the age of the data in them to avoid data-retention issues.
˲ The FTL optimizes throughput and die usage. One common method is to statically interleave allocation units across multiple channels to ensure the best possible throughput, as well as the fullest writes of individual blocks.
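Two of these FTL responsibilities, wear leveling and bad-block substitution, can be sketched as a toy block allocator. The class and method names here are invented for illustration; real firmware is far more involved:

```python
# Illustrative FTL block pool: the allocator skips blocks marked bad
# and hands out the free block with the lowest program/erase count,
# spreading wear evenly across the NAND.

class BlockPool:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.bad = set()      # defective at manufacture or failed in service
        self.free = set(range(num_blocks))

    def mark_bad(self, block):
        """Retire a block that failed to program or erase."""
        self.bad.add(block)
        self.free.discard(block)

    def allocate(self):
        """Pick the least-worn good free block (wear leveling)."""
        candidates = self.free - self.bad
        if not candidates:
            raise RuntimeError("no free blocks: garbage collection needed")
        block = min(candidates, key=lambda b: self.erase_counts[b])
        self.free.remove(block)
        return block

    def erase(self, block):
        """Return a block to the free pool, recording its wear."""
        self.erase_counts[block] += 1
        self.free.add(block)
```

Because `allocate` always prefers the lowest erase count, a just-erased block goes to the back of the line and writes rotate through the whole pool.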
Because HDDs have been assumed to be primary storage media, today's software and hardware are engineered to optimize the performance of these devices:
˲ File systems and applications use complicated heuristics to move the mechanical disk head the minimum distance possible, improving reads.
˲ Adjacent requests are merged into
a single larger request in a process
called I/O coalescing, in turn building up the large sequential writes that
HDDs handle best.
˲ There is also an assumption that HDDs will use linear logical addressing, where the beginning of the address range is the outside diameter of the disk platter (the fastest part of the drive), and the end of the address range is the inner diameter (the slowest part of the drive).
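The I/O coalescing step described above can be sketched as a simple merge of sorted (start, length) requests. This is an illustrative model, not any particular operating system's scheduler code:

```python
# Illustrative I/O coalescing: adjacent or overlapping requests in a
# queue are merged into one larger sequential request, the access
# pattern HDDs handle best. Requests are (start_lba, length) pairs.

def coalesce(requests):
    """Merge adjacent/overlapping (start, length) requests after sorting."""
    merged = []
    for start, length in sorted(requests):
        if merged and start <= merged[-1][0] + merged[-1][1]:
            # Abuts or overlaps the previous request: grow it in place.
            prev_start, prev_len = merged[-1]
            merged[-1] = (prev_start, max(prev_len, start + length - prev_start))
        else:
            merged.append((start, length))
    return merged

# Four small 8-block requests become one 32-block sequential request.
print(coalesce([(0, 8), (8, 8), (16, 8), (24, 8)]))   # [(0, 32)]
```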
In the future, NAND flash memory will
inevitably reach physical limitations.
NAND dies are continually shrinking
to lower costs, creating endurance and
reliability issues that cannot be compensated for by the SSD controller or
firmware. Newer memory technologies
still in their infancy, such as phase-change memory (PCM) and resistive
RAM (ReRAM), show great promise in
moving beyond such limitations. They
do so in part by shedding the erase-before-programming requirement and
the asymmetrical access requirement
of the NAND flash used in SSDs.
In turn, this progression will invariably continue the evolutionary/revolutionary paradigm seen today in the transition from rotating media to solid-state devices. These new forms of media will no doubt borrow from, and build upon, the techniques implemented in NAND-based SSDs.
At the same time, the shift to these
newer technologies will also inevitably require moving beyond the techniques developed today to deal with
the unique challenges of NAND. New
programming models and interfaces
will need to be built to take full advantage of new forms of storage media that
offer the speed of DRAM coupled with
the data retention of flash.
Michael Cornwell is director of technology and strategy for Pure Storage. He was previously Sun Microsystems' lead technologist for flash memory and led the creation of the Sun Storage F5100 Flash Array. Prior to Sun, he served as manager of storage engineering for the iPod division of Apple, where he was instrumental in the adoption of NAND flash in Apple products, and worked at Quantum Corporation as a storage architect.