jaemk / cached

Rust cache structures and easy function memoization
MIT License

Could there be a "locked" version? #62

Open bbigras opened 4 years ago

bbigras commented 4 years ago

The function cache is not locked for the duration of the function's execution, so concurrent calls of a long-running function with the same arguments, made while the cache is still empty, will each execute the body fully and each overwrite the memoized value as they complete. This mirrors the behavior of Python's functools.lru_cache.
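A minimal repro of what I mean, assuming the cached crate's #[cached] macro (slow_double is just an illustrative name):

use std::thread;
use std::time::Duration;

use cached::proc_macro::cached;

#[cached]
fn slow_double(n: u64) -> u64 {
    // stand-in for a long-running computation
    println!("computing slow_double({})", n);
    thread::sleep(Duration::from_secs(1));
    n * 2
}

fn main() {
    // Both threads hit the still-empty cache before either finishes, so the
    // body runs twice; the second result overwrites the first in the cache.
    let a = thread::spawn(|| slow_double(42));
    let b = thread::spawn(|| slow_double(42));
    assert_eq!(a.join().unwrap(), 84);
    assert_eq!(b.join().unwrap(), 84);
}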

I'm guessing it's out of scope of this crate but I'm asking just in case.

I read The Benefits of Microcaching with NGINX, and its use of proxy_cache_lock (which lets only one request populate a given cache entry while the others wait for it) seems relevant here.

And I really like the idea of using #[cached] with functions.

macthestack commented 3 years ago

I have many use cases where this would come in handy. It is a great way of tackling bursts of requests to the same resource...so I'll bump this, just in case :)

jaemk commented 3 years ago

The latest version (0.26.1) adds a sync_writes option to the #[cached] macro to support this; see https://github.com/jaemk/cached/commit/fb88d7f8bbb32f1cf35f91ad0a1dd5357dfd4725

use cached::proc_macro::cached;

#[cached(size = 100, option = true, sync_writes = true)]
fn do_stuff(a: String) -> Option<usize> {
    // some complicated stuff; placeholder result so the example compiles
    Some(a.len())
}

hf29h8sh321 commented 2 years ago

It looks like sync_writes locks the entire cache. It would be better if two operations (with different arguments) could run simultaneously.

jaemk commented 2 years ago

Sure, but that would only be compatible with simple unbounded and timed caches, since any LRU/size enforcement requires exclusive access to the entire cache for both reads and writes. If, say, a concurrent_keys option were added, the macro could expand to a specialized (and un-synchronized) cache type: no size limit could be enforced, and an additional layer of indirection would wrap each cache entry in its own Mutex so that individual entries could be written to concurrently. That's not what this issue was requesting though 🙂
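To make the layout concrete, here is a minimal sketch of the per-entry synchronization described above, assuming an unbounded HashMap; the type and method names are illustrative, not part of the crate:

use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Mutex, RwLock};

// The outer map is locked only briefly to find or create an entry slot;
// each slot carries its own Mutex, so different keys can be computed
// concurrently. Note that no size limit can be enforced with this layout.
struct ConcurrentKeyCache<K, V> {
    map: RwLock<HashMap<K, Arc<Mutex<Option<V>>>>>,
}

impl<K: Hash + Eq, V: Clone> ConcurrentKeyCache<K, V> {
    fn new() -> Self {
        Self { map: RwLock::new(HashMap::new()) }
    }

    fn get_or_compute(&self, key: K, compute: impl FnOnce() -> V) -> V {
        // Briefly take the outer lock to find or create this key's slot.
        let slot = {
            let mut map = self.map.write().unwrap();
            map.entry(key)
                .or_insert_with(|| Arc::new(Mutex::new(None)))
                .clone()
        };
        // Lock only this entry while computing; other keys stay available.
        let mut guard = slot.lock().unwrap();
        match &*guard {
            Some(v) => v.clone(),
            None => {
                let v = compute();
                *guard = Some(v.clone());
                v
            }
        }
    }
}

With this layout, two calls with different keys only contend on the brief outer-map access, while two calls with the same key serialize on that key's own Mutex.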

hf29h8sh321 commented 2 years ago

I've written a workaround based on the code posted in #81, returning a boxed future from the cached function.
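A minimal sketch of that style of workaround (the exact code from #81 isn't reproduced here, and the names are illustrative): cache a cloneable Shared future instead of the value, so concurrent callers with the same argument await a single in-flight computation.

use std::future::Future;
use std::pin::Pin;

use cached::proc_macro::cached;
use futures::future::{FutureExt, Shared};

// A cloneable, shareable handle to one in-flight computation.
type SharedFut = Shared<Pin<Box<dyn Future<Output = usize> + Send>>>;

#[cached(sync_writes = true)]
fn do_stuff(a: String) -> SharedFut {
    async move {
        // some complicated stuff; placeholder so the example compiles
        a.len()
    }
    .boxed()
    .shared()
}

Callers then await the result with do_stuff("x".to_string()).await. The cached function body only constructs the future, which is cheap, so holding the cache lock for it (via sync_writes) is fine; the long-running work is driven by whichever caller polls the shared future first, and the others receive a clone of its output.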