they have a small read perspiration,
as a reader typically checks only one
place per level.
Tiering merges have a much lower
write amplification but a larger read
perspiration. Because new files stack
up at each level before merging, there
is less merging and hence less writing.
On the other hand, reads must check
a lot more places, leading to the larger
read perspiration.
There’s a bunch of fun work lately on the trade-offs of these schemes.2,5
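To make the asymmetry concrete, here is a rough back-of-the-envelope sketch (my own illustration, not any particular store's code) of the usual cost model: with a size ratio (fanout) of T between levels, leveling rewrites each entry roughly T times at each level while a point read probes about one run per level, whereas tiering writes each entry about once per level but can leave up to T runs per level for a read to check.

```python
# Rough cost model for LSM compaction styles (illustrative only; real systems
# differ in Bloom filters, caching, and merge scheduling).

def leveling_costs(levels: int, fanout: int) -> tuple[int, int]:
    """Leveling: an entry is rewritten about `fanout` times at each level,
    but a point read checks roughly one sorted run per level."""
    write_amplification = levels * fanout
    read_perspiration = levels
    return write_amplification, read_perspiration

def tiering_costs(levels: int, fanout: int) -> tuple[int, int]:
    """Tiering: an entry is written about once per level, but up to `fanout`
    runs stack up at each level, so a point read may probe all of them."""
    write_amplification = levels
    read_perspiration = levels * fanout
    return write_amplification, read_perspiration

if __name__ == "__main__":
    for name, costs in (("leveling", leveling_costs), ("tiering", tiering_costs)):
        wa, rp = costs(levels=4, fanout=10)
        print(f"{name:8s}  write amplification ~{wa:3d}  read perspiration ~{rp:3d}")
```

With four levels and a fanout of 10, leveling comes out around 40 writes and 4 probes per entry, tiering around 4 writes and 40 probes; that is the trade the papers above explore.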
Indexing and searching. Search is
in many ways a variation of database
indexing. In database indices, the notion of identity exists hidden within
the database as a row-id or a primary
key. Within a relational system, updates to indices are transactionally
integrated, and the user sees only a
performance difference.
Search systems are a bit different
in that they deal with documents.
Most search systems asynchronously update the search index after the
change to the document occurs. This
is knit together with some form of
document identity.3
Search makes reading the documents a lot easier. It dramatically lowers the read perspiration. Updates to
the documents asynchronously impose a debt onto the system to get them
indexed. Creating and merging search
indices is a complex job that I think of
as a form of write amplification.
To index, you must scour the corpus to find recently written or updated
documents. Each of these needs to
have an identifier and then must be
processed to locate the search terms
(sometimes called n-grams; https://en.wikipedia.org/wiki/n-gram). Each
of these many n-grams found in a typical document then needs to be sent
to an indexer that covers one of many
shards. So, the document identifier
now is associated with each term (or
n-gram) located in the searchable document—all of this because the user
did a write or created a document!
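Here is a minimal sketch of that fan-out (the shard-by-term-hash scheme and the names are assumptions for illustration, not how any particular engine works): one document write produces a posting for every term, and each posting is routed to the index shard that owns that term.

```python
# Minimal sketch of the indexing fan-out caused by one document write.
# A real engine would also handle n-grams, ranking data, deletes, and merges.

from collections import defaultdict

NUM_SHARDS = 4

# Each shard holds an inverted index: term -> set of document ids.
shards = [defaultdict(set) for _ in range(NUM_SHARDS)]

def shard_for(term: str) -> int:
    """Pick the shard that owns a term (hash partitioning is assumed)."""
    return hash(term) % NUM_SHARDS

def index_document(doc_id: str, text: str) -> int:
    """Index one document; returns how many postings this single write produced."""
    terms = set(text.lower().split())
    for term in terms:
        shards[shard_for(term)][term].add(doc_id)
    return len(terms)

def search(term: str) -> set:
    """A point lookup touches exactly one shard: low read perspiration."""
    term = term.lower()
    return shards[shard_for(term)].get(term, set())

if __name__ == "__main__":
    n = index_document("doc-42", "write amplification versus read perspiration")
    print(f"one document write produced {n} postings spread across shards")
    print("documents mentioning 'perspiration':", search("perspiration"))
```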
I worked for a few years on an Internet-scale search engine and know
how they work. I’m still in awe that all
this machinery can keep up with the
work involved in all that write amplification. It’s a lot of work for each document written—and there are lots and
lots of documents.
Internet-scale search systems clearly offer excellent reads with low read perspiration.
Large-scale caches. Lots of big
Internet systems have ginormous
caches. Consider a product catalog at
a big ecommerce retailer. Whenever
anything changes, lots of servers are
updated with the new product description. This makes for a very easy and
fast read in exchange for a lot of writes.
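A toy sketch of that trade, with invented names and an assumed write-through push to every cache server: one catalog update turns into many cache writes so that any later read is a single local lookup.

```python
# Toy write-through fan-out: one catalog update is pushed to every cache
# server (lots of writes) so that any reader can answer from one local lookup.

class CatalogCache:
    """One cache server's local copy of product descriptions."""
    def __init__(self) -> None:
        self.products: dict[str, str] = {}

class Catalog:
    def __init__(self, num_caches: int) -> None:
        self.caches = [CatalogCache() for _ in range(num_caches)]

    def update_product(self, sku: str, description: str) -> None:
        # Write amplification: one logical change becomes num_caches writes.
        for cache in self.caches:
            cache.products[sku] = description

    def read_product(self, sku: str, cache_id: int) -> str:
        # Read perspiration stays tiny: one lookup, served by any replica.
        return self.caches[cache_id].products.get(sku, "")

if __name__ == "__main__":
    catalog = Catalog(num_caches=100)
    catalog.update_product("sku-123", "stainless camping kettle, 1.2L")
    print(catalog.read_product("sku-123", cache_id=57))
```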
Normalization and denormalization. Growing up in the relational database world, I was imbued with the
determination to have normalized
data contained in the database. Working to avoid update anomalies was
deemed to be extremely important.
Performing a large number of joins
to get an answer was a small penalty
to pay to ensure the database wasn’t
damaged by an errant update.
Increasingly, I view this as the
equivalent of throwing salt over your
shoulder if you spill some. Yeah…
I’ve seen others do it, but I’m not
sure I should.
Most systems are getting more distributed. Most of these have key-value
pairs containing their data, which is
sharded for scale. By grouping related
data into the value of a pair—typically
in a JSON (JavaScript Object Notation)
representation or something similar—it’s easy to grab the value, perhaps as a string, and squirt it over to
the distant system issuing the request.
If you were to normalize the data in
this big and sharded system, the normalized values would not be on the
same shard together. Doing a distributed join is more annoying than doing
a centralized join.
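As a small illustration (hash sharding and the record shapes here are my own assumptions), the denormalized form keeps everything about an order under one key on one shard, while the normalized rows would usually land on different shards and force a distributed join:

```python
# Sketch of denormalized JSON values in a sharded key-value store.
# Hash sharding and the record shapes are assumptions for illustration.

import json

NUM_SHARDS = 8
shards = [dict() for _ in range(NUM_SHARDS)]

def put(key: str, value: dict) -> None:
    shards[hash(key) % NUM_SHARDS][key] = json.dumps(value)

def get(key: str) -> dict:
    return json.loads(shards[hash(key) % NUM_SHARDS][key])

# Denormalized: everything about the order travels under one key, so a single
# shard lookup returns one string that can be squirted back to the requester.
put("order:1001", {
    "customer": {"id": "c-7", "name": "Pat"},
    "items": [{"sku": "sku-123", "qty": 2, "price": 9.99}],
})
print(get("order:1001")["customer"]["name"])

# Normalized, the customer row ("customer:c-7") and the order row would
# usually hash to different shards, and the same read becomes a distributed join.
```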
To cope with this, people superimpose versioning on their data. It’s not
perfect but it’s less challenging than
distributed joins or trying to do massive updates across the denormalized
data. The classic example for the value
of normalization in databases is a denormalized table with employees, their
manager, and their manager’s phone
number.4 Because the manager’s phone
number is copied in many tables for
many employees, it’s hard to change
it. Increasingly, I see systems store "as-of" data in their denormalized structures—for example, the manager's phone is captured "as-of" June 1.
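A minimal sketch of that "as-of" idea, with invented field names: the employee record carries a copy of the manager's phone number along with the date the copy was taken, so a read needs no join and the staleness is explicit.

```python
# Sketch of "as-of" denormalization: the copied fact carries the date it was
# captured, so readers see it may be stale instead of pretending it is current.

from dataclasses import dataclass
from datetime import date

@dataclass
class Employee:
    name: str
    manager: str
    manager_phone: str         # denormalized copy, not the source of truth
    manager_phone_as_of: date  # when the copy was taken

emp = Employee(
    name="Alice",
    manager="Bob",
    manager_phone="+1-555-0100",
    manager_phone_as_of=date(2019, 6, 1),
)

# One record fetch answers the read (no join); the trade-off is that a new
# phone number for Bob shows up only when a later write refreshes the copy.
print(f"{emp.manager}'s phone (as of {emp.manager_phone_as_of}): {emp.manager_phone}")
```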
Large-scale distributed systems put a lot of pressure on the semantics of a consistent read. This, in turn, can be seen as a tension between write amplification and read perspiration.

Conclusion
I have looked at just a few of the examples where there are trade-offs in our systems between write and read.1 It is endemic in so many environments. We see emerging systems that adapt and optimize for these trade-offs as they watch their usage patterns. Fun stuff!

Related articles
on queue.acm.org
Immutability Changes Everything
Pat Helland
https://queue.acm.org/detail.cfm?id=2884038
Disambiguating Databases
Rick Richardson
https://queue.acm.org/detail.cfm?id=2696453
The Pathologies of Big Data
Adam Jacobs
https://queue.acm.org/detail.cfm?id=1563874

References
1. Athanassoulis, M., Kester, M.S., Maas, L.M., Stoica, R., Idreos, S., Ailamaki, A. and Callaghan, M. Designing access methods: The RUM conjecture. In Proceedings of the 19th International Conference on Extending Database Technology (2016).
2. Dayan, N. and Idreos, S. Dostoevsky: Better space-time trade-offs for LSM-tree-based key-value stores via adaptive removal of superfluous merging. In Proceedings of the International Conference on Management of Data (2018), 505–520.
3. Helland, P. Identity by any other name. Commun. ACM 62, 4 (Apr. 2019), 80.
4. Helland, P. Normalization is for sissies (July 23, 2007); http://bit.ly/30iL7g3
5. Luo, C. and Carey, M.J. LSM-based storage techniques. Computing Surveys (forthcoming); arXiv:1812.07527.
6. O’Neil, P., Cheng, E., Gawlick, D. and O’Neil, E. The log-structured merge-tree (LSM-tree). Acta Informatica 33, 4 (1996).

Pat Helland has been implementing transaction systems, databases, application platforms, distributed systems, fault-tolerant systems, and messaging systems since 1978. He currently works at Salesforce.

Copyright held by author/owner.
Publication rights licensed to ACM.