This is a refactor of the previous cache chunking pull request. Previously, the performance impact of chunking was too high to justify using it, even under its intended conditions (Redis or another over-the-wire cache, where data size matters most). Now:
Chunked cache reads and writes are now handled concurrently and use shared memory to improve performance, rather than running sequentially or passing all data through channels
The package now includes benchmarks for QueryCache and WriteCache, demonstrating that the performance impact is minimal
Running the same benchmarks on main yields lower numbers for DeltaProxyCache, but this change also moves time series marshaling into the cache operations where applicable, rather than performing it afterward.
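To illustrate the concurrent, shared-memory approach described above: each chunk can be fetched in its own goroutine that writes directly into a pre-sized shared slice, so no channel hop or reordering pass is needed. This is only a minimal sketch; `chunkKeys`, `readChunksConcurrently`, and the key format are hypothetical names, not the package's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// chunkKeys derives per-chunk cache keys from a base key.
// (Hypothetical helper; the real key scheme may differ.)
func chunkKeys(base string, n int) []string {
	keys := make([]string, n)
	for i := range keys {
		keys[i] = fmt.Sprintf("%s.%d", base, i)
	}
	return keys
}

// readChunksConcurrently fetches every chunk in its own goroutine and writes
// the result into a pre-sized shared slice. Because each goroutine owns
// exactly one slot, the writes are race-free without locks or channels.
func readChunksConcurrently(fetch func(string) []byte, keys []string) []byte {
	parts := make([][]byte, len(keys)) // shared memory, one slot per chunk
	var wg sync.WaitGroup
	for i, k := range keys {
		wg.Add(1)
		go func(i int, k string) {
			defer wg.Done()
			parts[i] = fetch(k)
		}(i, k)
	}
	wg.Wait()
	// Reassemble in key order once all goroutines have finished.
	var out []byte
	for _, p := range parts {
		out = append(out, p...)
	}
	return out
}

func main() {
	// Toy backing store standing in for Redis or another remote cache.
	store := map[string][]byte{
		"ts.0": []byte("he"),
		"ts.1": []byte("ll"),
		"ts.2": []byte("o"),
	}
	data := readChunksConcurrently(func(k string) []byte { return store[k] }, chunkKeys("ts", 3))
	fmt.Println(string(data))
}
```

The per-slot ownership is what distinguishes this from the channel-based version: results never move between goroutines, they land in place.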
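For reference, cache benchmarks of the kind mentioned above follow Go's standard `testing.B` pattern. The sketch below uses `testing.Benchmark` so it runs standalone, with a toy in-memory cache standing in for the real client; the `Store`/`Retrieve` names are assumptions and do not reproduce the package's actual QueryCache/WriteCache signatures.

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

// toyCache stands in for the real cache client (Redis, BoltDB, etc.).
type toyCache struct {
	mu sync.Mutex
	m  map[string][]byte
}

func (c *toyCache) Store(k string, v []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[k] = v
}

func (c *toyCache) Retrieve(k string) []byte {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.m[k]
}

func main() {
	c := &toyCache{m: map[string][]byte{}}
	payload := make([]byte, 1024)

	// Benchmark writes: b.N is chosen automatically by the harness.
	write := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			c.Store("k", payload)
		}
	})
	// Benchmark reads against the key written above.
	read := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = c.Retrieve("k")
		}
	})
	fmt.Printf("write: %s\nread: %s\n", write, read)
}
```

In the real package these would live in `_test.go` files and run under `go test -bench`, which is how the main-vs-branch comparison above would be reproduced.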