Tue, Nov 20, 2018 @ 01:00 PM - 03:00 PM
Thomas Lord Department of Computer Science
Title: Asynchronous Writes in Cache Augmented Data Stores
Author: Hieu Nguyen
Location: Salvatori Computer Science Center (SAL) Room 213.
Shahram Ghandeharizadeh (Chair)
Abstract: This dissertation explores processing writes asynchronously in a cache augmented data store while maintaining the atomicity, consistency, isolation, and durability (ACID) properties of transactions. Asynchronous write processing enables the caching layer to dictate overall system performance for both read-heavy and write-heavy workloads, motivating alternative implementations of a write-back policy. A write-back policy buffers writes in the cache and uses background threads to apply them to the data store asynchronously. By doing so, it enhances overall system performance and enables application throughput to scale linearly as a function of the number of cache servers. The main limitations of this technique are its increased software complexity and additional memory requirements.
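The write-back policy described above can be illustrated with a minimal sketch. This is not the dissertation's implementation; the class and method names (`WriteBackCache`, `write`, `flush`) are illustrative, a plain dict stands in for the data store, and ACID machinery is omitted. A write returns as soon as the cache is updated and the change is buffered; a background thread applies buffered writes to the data store.

```python
import queue
import threading

class WriteBackCache:
    """Illustrative write-back cache: writes update the cache and are
    buffered; a background thread applies them to the data store
    asynchronously. Names and structure are assumptions, not the
    dissertation's code."""

    def __init__(self, data_store):
        self.data_store = data_store          # a dict standing in for an RDBMS
        self.cache = {}                       # in-memory key-value cache
        self.buffered_writes = queue.Queue()  # writes awaiting application
        worker = threading.Thread(target=self._apply_writes, daemon=True)
        worker.start()

    def write(self, key, value):
        # The write completes once the cache is updated and the change
        # is buffered; the data store is updated later, asynchronously.
        self.cache[key] = value
        self.buffered_writes.put((key, value))

    def read(self, key):
        # Reads are served from the cache, falling back to the data store.
        if key in self.cache:
            return self.cache[key]
        value = self.data_store.get(key)
        if value is not None:
            self.cache[key] = value
        return value

    def flush(self):
        # Block until all buffered writes have reached the data store.
        self.buffered_writes.join()

    def _apply_writes(self):
        # Background thread drains buffered writes to the data store.
        while True:
            key, value = self.buffered_writes.get()
            self.data_store[key] = value
            self.buffered_writes.task_done()
```

The sketch makes the tradeoff concrete: writes are fast because they touch only memory, but every buffered write occupies cache memory until it is applied, which is the extra memory requirement the abstract mentions.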
When the data store is unavailable, a write-back policy buffers writes as long as the caching layer is available. Once the data store becomes available again, it applies the buffered writes to the data store asynchronously. We use this idea to introduce TARDIS, a family of techniques for processing writes when the data store is unavailable. The TARDIS techniques (TAR, TARD, DIS, and TARDIS) differ in how they apply buffered writes to the data store during recovery mode. While TAR and DIS are simple to implement, the remaining techniques are more complex. TARDIS is the most complex and most closely resembles the write-back policy.
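The buffering-and-recovery idea behind the TARDIS family can be sketched as follows. This is a simplified assumption of the scheme, not any of the four techniques as specified in the dissertation: the names (`TardisStyleCache`, `recover`) are hypothetical, and a write that fails to reach the data store is simply buffered in the caching layer and replayed once recovery begins.

```python
class TardisStyleCache:
    """Illustrative sketch of processing writes while the data store is
    unavailable: failed writes are buffered in the caching layer and
    replayed during recovery. Names are assumptions, not the
    dissertation's code."""

    def __init__(self, data_store):
        self.data_store = data_store  # object exposing put(key, value); may raise
        self.cache = {}               # in-memory key-value cache
        self.pending = []             # writes buffered while the store is down

    def write(self, key, value):
        self.cache[key] = value
        try:
            # Normal mode: the write reaches the data store.
            self.data_store.put(key, value)
        except ConnectionError:
            # Failed mode: the data store is unavailable; buffer the
            # write as long as the caching layer is available.
            self.pending.append((key, value))

    def recover(self):
        # Recovery mode: apply buffered writes to the now-available
        # data store, in the order they were issued.
        while self.pending:
            key, value = self.pending.pop(0)
            self.data_store.put(key, value)
```

In this sketch the application continues to observe its own writes from the cache while the data store is down; the recovery pass is where the four TARDIS techniques would differ in how they apply the buffered writes.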
We quantify the tradeoffs associated with the write-back policy and the TARDIS family of techniques using the YCSB, BG, and TPC-C benchmarks. We compare their performance with alternatives such as the write-around and write-through policies, which perform writes synchronously. All benchmark results show that asynchronous writes enhance overall performance and enable the application to scale as a function of the number of nodes in the caching layer. The results also highlight the extra memory required by the proposed techniques to buffer the writes.
Audiences: Everyone Is Invited
Contact: Lizsl De Leon