Implements a DAG Store which lazily loads values from a source store and caches them in an LRU cache. The in-memory cache of source-store chunks is limited to `sourceCacheSizeLimit` bytes, and values are evicted in an LRU fashion. The purpose of this store is to avoid holding the entire client view (i.e. the source store's content) in each client tab's JavaScript heap.

This store's heads are independent from the heads of the source store and are only stored in memory.
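For illustration, here is a minimal TypeScript sketch of the lazy-loading, size-limited LRU idea. The names (`Hash`, `Chunk`, `readFromSource`, `getSizeOfChunk`) are hypothetical stand-ins, not the actual types or API of this PR:

```ts
type Hash = string;
type Chunk = {hash: Hash; data: unknown};

class LazyChunkCacheSketch {
  // Map iteration order is insertion order, so re-inserting on access gives LRU order.
  private readonly _cache = new Map<Hash, Chunk>();
  private _size = 0;

  constructor(
    private readonly _sizeLimit: number,
    private readonly _readFromSource: (h: Hash) => Promise<Chunk | undefined>,
    private readonly _getSizeOfChunk: (c: Chunk) => number,
  ) {}

  async getChunk(hash: Hash): Promise<Chunk | undefined> {
    const cached = this._cache.get(hash);
    if (cached) {
      // Refresh recency by moving the entry to the end of the Map.
      this._cache.delete(hash);
      this._cache.set(hash, cached);
      return cached;
    }
    // Lazily load from the source store and cache the result.
    const chunk = await this._readFromSource(hash);
    if (chunk) {
      this._cache.set(hash, chunk);
      this._size += this._getSizeOfChunk(chunk);
      this._evictIfNeeded();
    }
    return chunk;
  }

  private _evictIfNeeded(): void {
    // Evict least recently used entries until we are back under the limit.
    for (const [hash, chunk] of this._cache) {
      if (this._size <= this._sizeLimit) {
        break;
      }
      this._cache.delete(hash);
      this._size -= this._getSizeOfChunk(chunk);
    }
  }
}
```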
Chunks which are put with a temp hash (see `isTempHash`) are assumed not to be persisted to the source store and are therefore cached separately from the source-store chunks. These temp chunks cannot be evicted, and their sizes are not counted towards the source chunk cache size. A temp chunk is deleted once it is no longer reachable from one of this store's heads.

Writes only manipulate the in-memory state of this store and do not alter the source store. Values must therefore be written to the source store through a separate process (see `persist`, implemented in 7769f09).
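A sketch of how the two kinds of chunks could be kept apart (again with hypothetical names; `isTempHash` is assumed to identify hashes that have not yet been assigned a permanent value):

```ts
type Hash = string;
type Chunk = {hash: Hash; data: unknown};

class ChunkCachesSketch {
  // Temp chunks: never evicted and not counted against the size limit.
  private readonly _tempChunks = new Map<Hash, Chunk>();
  // Cached source chunks: evicted LRU once their total size exceeds the limit.
  private readonly _sourceChunks = new Map<Hash, Chunk>();
  private _sourceChunksSize = 0;

  constructor(
    private readonly _isTempHash: (h: Hash) => boolean,
    private readonly _sizeOfChunk: (c: Chunk) => number,
  ) {}

  putChunk(chunk: Chunk): void {
    if (this._isTempHash(chunk.hash)) {
      // Not yet persisted: keep it until it becomes unreachable from a head.
      this._tempChunks.set(chunk.hash, chunk);
    } else {
      // Treat it as a cached copy of source-store data.
      this._sourceChunks.set(chunk.hash, chunk);
      this._sourceChunksSize += this._sizeOfChunk(chunk);
      // LRU eviction against sourceCacheSizeLimit would run here.
    }
  }

  // Temp chunks are dropped once no head can reach them.
  collectUnreachableTempChunks(reachableFromHeads: Set<Hash>): void {
    for (const hash of this._tempChunks.keys()) {
      if (!reachableFromHeads.has(hash)) {
        this._tempChunks.delete(hash);
      }
    }
  }
}
```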
Intended use:
See `persist`, implemented in 7769f09. This process gathers all temp chunks from this store, computes real hashes for them, and writes them to the perdag. It then replaces, in this dag, the temp chunks written to the source with chunks that have permanent hashes, and updates heads to reference those permanent hashes instead of the temp hashes. As a result, the temp chunks are deleted from this store and the chunks with permanent hashes end up in this store's LRU cache of source chunks.
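The real implementation lives in 7769f09; the sketch below is only a rough outline of that flow, with made-up interface names to show the three steps:

```ts
type Hash = string;
type Chunk = {hash: Hash; data: unknown};

interface MemDagSketch {
  gatherTempChunks(): Promise<Map<Hash, Chunk>>;
  // Rewrites chunks, refs and heads so temp hashes are replaced by permanent ones.
  replaceTempHashes(tempToReal: Map<Hash, Hash>): Promise<void>;
}

interface PerDagSketch {
  putChunk(chunk: Chunk): Promise<void>;
}

async function persistSketch(
  memdag: MemDagSketch,
  perdag: PerDagSketch,
  computeRealHash: (chunk: Chunk) => Hash,
): Promise<void> {
  // 1. Gather the chunks that exist only in memory (temp hashes).
  const tempChunks = await memdag.gatherTempChunks();

  // 2. Compute permanent hashes and write the chunks to the perdag.
  const tempToReal = new Map<Hash, Hash>();
  for (const [tempHash, chunk] of tempChunks) {
    const realHash = computeRealHash(chunk);
    tempToReal.set(tempHash, realHash);
    await perdag.putChunk({...chunk, hash: realHash});
  }

  // 3. Rewrite the memdag so its chunks and heads use the permanent hashes.
  //    The temp chunks become unreachable and are deleted; the rewritten
  //    chunks end up in the source-chunk LRU cache.
  await memdag.replaceTempHashes(tempToReal);
}
```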
Performance

On our existing performance benchmarks, this outperforms the existing mem dag store (`dag.StoreImpl` on top of `kv.MemStore`). The current benchmarks really only test the performance of the temp-hash cache, though, since they don't use persist at all.

I believe this outperforms the existing mem dag store because the temp-hash cache is just a straightforward `Map<Hash, Chunk>`, and is thus a bit simpler than `dag.StoreImpl` on top of `kv.MemStore`, which uses 3 keys per chunk. A follow-up is to add benchmarks that exercise persist and lazy loading.
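To illustrate the difference (the key names below are assumptions for illustration only, not the actual `dag.StoreImpl` key format):

```ts
type Hash = string;
type Chunk = {hash: Hash; data: unknown};

// This PR's temp-hash cache: one Map entry per chunk.
const tempChunks = new Map<Hash, Chunk>();
function putTemp(chunk: Chunk): void {
  tempChunks.set(chunk.hash, chunk);
}

// dag.StoreImpl over kv.MemStore: roughly three kv entries per chunk,
// so every chunk write or read touches multiple keys.
const kv = new Map<string, unknown>();
function putChunkKV(chunk: Chunk, refs: Hash[], refCount: number): void {
  kv.set(`c/${chunk.hash}/d`, chunk.data); // chunk data
  kv.set(`c/${chunk.hash}/m`, refs);       // chunk refs (meta)
  kv.set(`c/${chunk.hash}/r`, refCount);   // ref count
}
```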
Part of #671