bbigras opened this issue 4 years ago
I have many use cases where this would come in handy. It is a great way of tackling bursts of requests to the same resource...so I'll bump this, just in case :)
The latest version (0.26.1) adds a `sync_writes` option to the `#[cached]` macro to support this - see https://github.com/jaemk/cached/commit/fb88d7f8bbb32f1cf35f91ad0a1dd5357dfd4725

```rust
#[cached(size = 100, option = true, sync_writes = true)]
fn do_stuff(a: String) -> Option<usize> {
    // some complicated stuff
}
```
It looks like `sync_writes` will lock the entire cache. It would be better if 2 operations (with different arguments) were able to run simultaneously.
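To illustrate the concern, here is a minimal std-only sketch (not the crate's actual implementation, names are hypothetical) of a `sync_writes`-style cache: one `Mutex` guards the whole map and is held for the full duration of the computation, so a slow miss on one key blocks a miss on a completely unrelated key:

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical stand-in for a cache-wide write lock: the Mutex guard is
// held across the expensive computation, so concurrent calls serialize
// even when their keys differ.
fn cached_call(cache: &Mutex<HashMap<String, usize>>, key: &str) -> usize {
    let mut guard = cache.lock().unwrap();
    if let Some(&v) = guard.get(key) {
        return v; // cache hit: no recomputation
    }
    thread::sleep(Duration::from_millis(100)); // simulate expensive work
    let v = key.len();
    guard.insert(key.to_string(), v);
    v
}

fn main() {
    let cache = Mutex::new(HashMap::new());
    let start = Instant::now();
    thread::scope(|s| {
        s.spawn(|| cached_call(&cache, "a"));
        s.spawn(|| cached_call(&cache, "b"));
    });
    // Two misses on distinct keys run back to back (~200ms total)
    // because they contend on the single cache-wide lock.
    println!("two distinct keys took {:?}", start.elapsed());
    assert!(start.elapsed() >= Duration::from_millis(200));
}
```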
Sure, but that would only be compatible with the simple unbounded and timed caches, since any LRU/size enforcement requires exclusive access to the entire cache for both reads and writes. If, say, a `concurrent_keys` option were added, the macro could have a specialized expansion that uses a specialized (and un-synchronized) cache type where no size can be enforced, with an additional layer of indirection: each cache entry gets its own layer of synchronization (wrapped in its own `Mutex`) so that entries can be written to concurrently. That's not what this issue was requesting though 🙂
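The per-entry indirection described above can be sketched in plain std Rust (hypothetical names, not a proposed API): the outer lock is held only long enough to find or create the key's `Arc<Mutex<...>>` slot, and the expensive computation then runs under that entry's own lock, so different keys no longer block each other:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Each entry is its own lock around an optional value; the outer Mutex
// only protects the map of entries, never the computation itself.
type Entry = Arc<Mutex<Option<usize>>>;

fn get_or_compute(
    cache: &Mutex<HashMap<String, Entry>>,
    key: &str,
    compute: impl FnOnce() -> usize,
) -> usize {
    // Short critical section: look up or create this key's entry.
    let entry = {
        let mut map = cache.lock().unwrap();
        map.entry(key.to_string()).or_default().clone()
    };
    // Long work happens under the per-key lock only; other keys proceed.
    let mut slot = entry.lock().unwrap();
    *slot.get_or_insert_with(compute)
}
```

Note the trade-off the maintainer points out: because the map only ever grows, no LRU or size bound can be enforced this way.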
I've written a workaround based on the code posted on #81, returning a boxed future from the cached function.
I'm guessing it's out of scope of this crate but I'm asking just in case.
I read *The Benefits of Microcaching with NGINX* and it seems using `proxy_cache_lock` has some benefit. And I really like the idea of using `#[cached]` with functions.
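For context, the nginx technique boils down to a small config fragment; this is a hedged sketch using real nginx directives, but the cache path, zone name, and upstream are placeholders, not taken from the article:

```nginx
# Microcache responses for 1s; paths and names are illustrative only.
proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m;

server {
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;  # very short TTL ("microcaching")
        proxy_cache_lock on;       # only one request per key populates a miss;
                                   # the rest wait for that entry, like sync_writes
        proxy_pass http://app_backend;
    }
}
```

`proxy_cache_lock on` is the nginx analogue of what this issue asks for: request coalescing per cache key, rather than one lock over the whole cache.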