Figure 6. FAWN supports both read- and write-intensive workloads. Small writes are cheaper than random reads due to the FAWN-DS log. (Plot: queries per second vs. fraction of put requests, for 1 and 8 FAWN-DS files.)
(all gets) to 1 (all puts) on a single node (Figure 6).
FAWN-DS can handle more puts per second than gets
because of its log structure. Even though semi-random write
performance across eight files on our CompactFlash devices
is worse than purely sequential writes, it still achieves higher
throughput than pure random reads.
When the put-ratio is low, the query rate is limited by the
get requests. As the ratio of puts to gets increases, the faster
puts significantly increase the aggregate query rate. On the
other hand, a pure write workload that updates a small subset of keys would require frequent cleaning. In our current
environment and implementation, both read and write
rates slow to about 700–1000 queries/s during compaction,
bottlenecked by increased thread switching and system
call overheads of the cleaning thread. Lastly, because deletes
are effectively 0-byte value puts, delete-heavy workloads are
similar to insert workloads that update a small set of keys
frequently. In the next section, we mostly evaluate read-intensive workloads because they represent the target workloads for which FAWN-KV is designed.
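The log-structured behavior described above, including deletes as 0-byte puts, can be sketched as a minimal append-only store. This is a hypothetical Python sketch; `LogStore` and its methods are our own names for illustration, not the FAWN-DS API:

```python
import os

TOMBSTONE = b""  # a delete is a put of a 0-byte value

class LogStore:
    """Minimal append-only key-value log with an in-memory index.

    Illustrates why puts are cheap sequential appends while gets
    may require a random read into the log.
    """
    def __init__(self, path):
        self.index = {}                # key -> (offset, length) of latest value
        self.log = open(path, "ab+")

    def put(self, key, value):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(value)          # sequential append, no seek penalty
        self.log.flush()
        self.index[key] = (offset, len(value))

    def delete(self, key):
        self.put(key, TOMBSTONE)       # delete == 0-byte put; reclaimed by cleaning

    def get(self, key):
        entry = self.index.get(key)
        if entry is None:
            return None
        offset, length = entry
        if length == 0:
            return None                # tombstone: key was deleted
        self.log.seek(offset)          # random read into the log
        return self.log.read(length)
```

Note that updating a small set of keys repeatedly leaves many stale entries in the log, which is exactly why such workloads force frequent cleaning.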
4.2. FAWN-KV system benchmarks
System throughput: To measure query throughput, we populated the KV cluster with 20GB of values and then measured the maximum rate at which the front end received
query responses for random keys. Figure 7 shows that the
cluster sustained roughly 36,000 256-byte gets per second
(1,700 per second per node) and 24,000 1KB gets per second
(1,100 per second per node). A single node serving a 512MB
datastore over the network could sustain roughly 1,850 256-byte
gets per second per node, while Table 2 shows that it
could serve the queries locally at 2,450 256-byte queries per
second per node. Thus, a single node serves roughly 70% of
the sustained rate that a single FAWN-DS could handle with
Figure 7. Query throughput on 21-node FAWN-KV system for 1KB and 256-byte entry sizes. (Plot: queries per second over time for 256B and 1KB get queries.)

Figure 8. Power consumption of 21-node FAWN-KV system for 256-byte values during puts/gets. (Plot: power draw in watts over time.)
local queries. The primary reasons for the difference are
the addition of network overhead, request marshaling and
unmarshaling, and load imbalance—with random key distribution, some back-end nodes receive more queries than
others, slightly reducing system performance.
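The load-imbalance effect is easy to reproduce in a toy simulation (a sketch of our own, not FAWN-KV's actual key-partitioning scheme): even with uniformly random keys hashed across 21 nodes, the busiest node handles more than the average share, and that node saturates first, capping aggregate throughput.

```python
import hashlib
import random

def node_loads(num_keys, num_nodes, seed=0):
    """Count how many uniformly random keys hash to each node."""
    rng = random.Random(seed)
    loads = [0] * num_nodes
    for _ in range(num_keys):
        key = rng.getrandbits(64).to_bytes(8, "big")
        digest = hashlib.sha1(key).digest()
        loads[int.from_bytes(digest[:8], "big") % num_nodes] += 1
    return loads

loads = node_loads(num_keys=100_000, num_nodes=21)
mean = sum(loads) / len(loads)
print(f"hottest node handles {max(loads) / mean:.2f}x the average load")
```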
System power consumption: Using a WattsUp power
meter that logs power draw each second, we measured
the power consumption of our 21-node FAWN-KV cluster
and two network switches. Figure 8 shows that, when idle,
the cluster uses about 83 W, or 3 W/node and 10 W/switch.
During gets, power consumption increases to 99 W, and
during insertions, power consumption is 91 W. Peak get
performance reaches about 36,000 256-byte queries/s for
the cluster serving the 20GB dataset, so this system, excluding the front end, provides 364 queries/J.
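Since a watt is one Joule per second, the queries-per-Joule figure is simply throughput divided by power draw; checking the numbers above:

```python
def queries_per_joule(queries_per_sec, watts):
    """A watt is 1 Joule/second, so (queries/s) / (J/s) = queries/Joule."""
    return queries_per_sec / watts

# 21-node cluster during gets: ~36,000 queries/s at 99 W
print(round(queries_per_joule(36_000, 99)))  # -> 364
```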
The front end connects to the back-end nodes through a
1 Gbit/s uplink on the switch, so the cluster requires about
one low-power front end for every 80 nodes—enough front
ends to handle the aggregate query traffic from all the
back ends (80 nodes × 1,500 queries/s/node × 1 KB/query =
937 Mbit/s). Our prototype front end uses 27 W, which adds
nearly 0.5 W/node amortized over 80 nodes, providing 330
queries/J for the entire system. A high-speed (4ms seek
time, 10 W) magnetic disk by itself provides less than 25
queries/J, two orders of magnitude fewer than our existing FAWN prototype.
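The front-end sizing is a back-of-the-envelope bandwidth check. The arithmetic below is our reconstruction (reading "Mbit" as 2^20 bits, which reproduces the 937 Mbit/s figure):

```python
def aggregate_mbits(nodes, qps_per_node, bytes_per_query):
    """Aggregate back-end traffic in Mbit/s, taking 1 Mbit = 2**20 bits."""
    return nodes * qps_per_node * bytes_per_query * 8 / 2**20

# 80 nodes x 1,500 queries/s/node x 1 KB (1,024-byte) responses
print(aggregate_mbits(80, 1_500, 1_024))  # -> 937.5, within a 1 Gbit/s uplink
```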
Network switches currently account for 20% of the
power used by the entire system. Moving to FAWN requires
roughly one 8-to-1 aggregation switch to make a group of
FAWN nodes look like an equivalent-bandwidth server; we
account for this in our evaluation by including the power
of the switch when evaluating FAWN-KV. As designs such
as FAWN reduce the power drawn by servers, the importance of creating scalable, energy-efficient datacenter networks will grow.
5. ALTERNATIVE ARCHITECTURES
When is the FAWN approach likely to beat traditional architectures? We examine this question by comparing the 3-year
total cost of ownership (TCO) for six systems: three
"traditional" servers using magnetic disks, flash SSDs, and
DRAM; and three hypothetical FAWN-like systems using
the same storage technologies. We define the 3-year TCO
as the sum of the capital cost and the 3-year power cost at
10 cents/kWh.
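This definition reduces to a one-line formula: capital cost plus kilowatt-hours consumed times the electricity price. The sketch below uses illustrative capital costs and power draws of our own choosing, not figures from the paper:

```python
HOURS_PER_YEAR = 24 * 365

def tco_3yr(capital_usd, avg_watts, usd_per_kwh=0.10, years=3):
    """3-year TCO = capital cost + energy cost (kWh consumed x price)."""
    kwh = avg_watts / 1000 * HOURS_PER_YEAR * years
    return capital_usd + kwh * usd_per_kwh

# illustrative only: a $2,000 server drawing 250 W vs. a $250 node at 15 W
print(round(tco_3yr(2_000, 250), 2))
print(round(tco_3yr(250, 15), 2))
```

Because power cost scales linearly with wattage, a low-power node's 3-year energy bill is a small fraction of its capital cost, whereas a high-power server's can approach a third of its purchase price.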
Because the FAWN systems we have built use several-year-old technology, we study a theoretical 2009 FAWN node