Closed. npezolano closed this issue 1 year ago.
https://github.com/tkem/cachetools/pull/277/commits/7677e5076578b6eb4a3c1a7caf89a79809035d8b appears to fix the issues I'm facing above, but it causes a small performance impact.
@npezolano: the title of this issue "not working as expected with locks" is something that I would have to admit is probably true, sorry. However, it is AFAIK the same behavior the standard library's @lru_cache decorator shows, and any PRs trying to fix this have been rejected due to added complexity and runtime overhead. See #224 for a somewhat thorough discussion. So, sorry, unless someone comes up with some truly ingenious idea, this will be left open.
On a final note: Yes, it's an issue, probably an issue that should be at least stated in the docs. However, if this is a real issue for your individual use case, it's probably also an issue in your back-end service, i.e. this would arise even without caching (probably more so), and maybe should be handled there...
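To illustrate the tradeoff discussed above, here is a sketch of the user-side workaround (names like `calc` and `compute_lock` are hypothetical, and `functools.lru_cache` stands in for the cachetools decorator): putting an explicit lock around the *whole* cached call means at most one thread computes a missing value while the rest wait and then hit the cache, at the cost of serializing every call. This is roughly the runtime overhead that made the in-library fixes unattractive.

```python
import threading
from functools import lru_cache

# Hypothetical workaround sketch: serialize the computation itself,
# not just cache access, with an outer lock.
compute_lock = threading.Lock()
call_count = 0

@lru_cache(maxsize=None)
def _calc(n):
    global call_count
    call_count += 1  # safe: only ever reached while compute_lock is held
    return n * n     # stand-in for a slow backend call

def calc(n):
    # Every call, hit or miss, pays for this lock; that is the tradeoff.
    with compute_lock:
        return _calc(n)

threads = [threading.Thread(target=calc, args=(5,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(call_count)  # -> 1: the expensive computation ran exactly once
```

Whether this is acceptable depends on the workload; a per-key lock would restore some parallelism but adds exactly the complexity the rejected PRs introduced.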
I'm sure removing thread safety everywhere in the documentation will fix the issue @tkem
The below example runs, however it seems like it's not working as expected. In the example, `calc` will get called multiple times for the same `n`, when it should be locked by a thread and then cached. I tried using both `Lock` and `RLock` and get the same result. The cache does seem to be working: if I rerun `res = obj.multithreaded_run()` below, everything will be cached. The problem looks to be that if the function to be cached takes up to a few seconds to run and is required by multiple threads, it won't lock properly. The problem gets worse if you use more threads. This is a follow-up on the previous suggestion in #274.

See screenshot for example: each `n` should only print once:
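The behaviour described above can be reproduced with the standard library alone (a minimal sketch with hypothetical names; cachetools' `@cached(lock=...)` behaves the same way, since the lock only guards cache *access*, not the computation). A barrier forces all threads into the function before any result is stored, so every thread misses the cache:

```python
import threading
from functools import lru_cache

NUM_THREADS = 4
# Barrier keeps all threads inside calc() at once, simulating a slow function.
barrier = threading.Barrier(NUM_THREADS)
call_count = 0
count_lock = threading.Lock()

@lru_cache(maxsize=None)
def calc(n):
    global call_count
    with count_lock:
        call_count += 1
    barrier.wait()  # stand-in for a computation that takes a few seconds
    return n * n

threads = [threading.Thread(target=calc, args=(5,)) for _ in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread missed the cache before any result was stored,
# so the "cached" function ran once per thread.
print(call_count)  # -> 4
print(calc(5))     # -> 25, now served from the cache
```

The cache only stores a result *after* the wrapped function returns, so concurrent callers that arrive during the computation all see a miss and each runs the function themselves.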