Closed · SoerenSilkjaer closed this issue 2 years ago
This is an interesting thought, but I'm not sure how you imagine this feature working: the caller would have to "announce" that it is computing the value, and then others could wait on that future. That is definitely possible, but it does not feel like an essential component of this cache implementation. However, I can imagine its benefits.
Unfortunately, this lib is not my priority at the moment, but I'm open to new pull requests. Are you willing to implement it?
@KarelCemus My company has built its own solution to this problem that wraps this library; it would definitely make sense to port that solution into this library. I will talk to my colleagues about it and get back to you.
Closing for inactivity. If you have spare time to provide this implementation, feel free to reopen this ticket.
In an ideal concurrent cache, if multiple concurrent processes get the same value at the same time, the value is computed only once, and all of the other processes receive the computed value as soon as it is available.
The same applies when one process is writing to the cache: all other processes that request the value during the write should receive the computed value from the first process as soon as it is available.
Without this feature, a burst of concurrent requests (request bombing) can, in the worst case, cause N writes to the cache and N reads from it.
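For illustration, here is a minimal sketch of the "announce and wait on a future" idea in Scala. `CoalescingCache` and `loadAndStore` are hypothetical names, not part of this library's API; the sketch assumes the existing miss path (read the cache, compute on a miss, write back) is wrapped as a single `K => Future[V]`.

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.{ExecutionContext, Future}

/** Hypothetical wrapper that deduplicates concurrent lookups: every caller
  * asking for the same key while a computation is in flight shares one
  * Future instead of triggering its own computation and cache write.
  */
class CoalescingCache[K, V](loadAndStore: K => Future[V])
                           (implicit ec: ExecutionContext) {

  // Computations currently in flight, keyed by cache key.
  private val inFlight = new ConcurrentHashMap[K, Future[V]]()

  def get(key: K): Future[V] = {
    // computeIfAbsent ensures at most one computation is started per key;
    // every concurrent caller receives the same Future. The mapping
    // function only *starts* the async work, so it returns quickly.
    val future = inFlight.computeIfAbsent(key, k => loadAndStore(k))
    // Drop the entry once resolved so a later miss computes afresh.
    // remove(key, future) only removes if the mapping is still this Future,
    // so the call is safe even when several callers attempt it.
    future.onComplete(_ => inFlight.remove(key, future))
    future
  }
}
```

With something like this in place, N concurrent misses on the same key trigger one computation and one cache write; the remaining N - 1 callers simply await the shared `Future`.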