For a long-running cache with a large number of unique requests, the number of cache keys in memory could start to add up. With the current hash function (sha256), that's roughly 1MB per 6K unique requests (~113B for the hex digest key, ~56B for the lock object).
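For reference, a rough sketch of where those numbers come from on CPython (64-bit builds; exact sizes are implementation details and may vary by version, and this ignores dict bucket overhead):

```python
import hashlib
import sys
import threading

# Size of one cache-key entry: a sha256 hex digest string plus a lock object.
digest = hashlib.sha256(b"GET https://example.com/").hexdigest()
hex_size = sys.getsizeof(digest)             # 64-char ASCII str: ~113 bytes on CPython
lock_size = sys.getsizeof(threading.Lock())  # ~56 bytes on CPython
per_entry = hex_size + lock_size

# ~169 bytes per entry -> roughly 6K entries per MB
print(per_entry, 1_000_000 // per_entry)
```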
Possible solutions would include:
- Adding a fixed TTL to each lock and running cleanup after every request
- Something like lru-dict, but ideally without adding another dependency
- A wrapper function + `functools.lru_cache` might be sufficient
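The `lru_cache` option could look something like the sketch below. The function name and `maxsize` are illustrative, not an actual API:

```python
import threading
from functools import lru_cache


@lru_cache(maxsize=1024)
def get_request_lock(cache_key: str) -> threading.Lock:
    """Return one lock per unique cache key; least recently used
    entries are evicted once maxsize is exceeded, bounding memory.
    """
    return threading.Lock()


# Usage: serialize access to a single cache entry
lock = get_request_lock("d2a84f4b...")  # sha256 hex digest of the request
with lock:
    ...  # read/write the cached response here
```

One caveat with eviction-based approaches: if a key is evicted while its lock is still held, the next caller gets a fresh lock for the same key, so `maxsize` would need to stay large relative to the number of concurrent requests.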
Follow-up from #227.