Hello, I'm using the LRU cache module to store a large amount of data in memory. I've set `maxSize` to 500MB, and indeed I can see (by reading `calculatedSize`) that the cache never breaches that limit. However, after a while I get an OOM, with the JavaScript max heap size reached. It seems like the module only marks items for deletion, but for some reason the GC doesn't collect them. Have you observed something like this?
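For reference, the setup is essentially this (a trimmed sketch assuming the v7-style `lru-cache` export; the `sizeCalculation` callback here stands in for my real size function):

```js
const LRU = require('lru-cache')

// ~500MB cap on the summed size of cached values; no `max` (entry count)
// and no `ttl` are set.
const cache = new LRU({
  maxSize: 500 * 1024 * 1024,
  sizeCalculation: (value) => value.byteLength, // values are Buffers in my case
})
```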
Do you have a reproduction case you can share?
I have a background job that gets triggered every once in a while; it loads data from the DB and stores it in the LRU cache. Depending on how much data has changed, more data accumulates in the cache, and at some point I see `calculatedSize` reach the limit I set. But when I profile the Node.js process, memory does not appear to be getting freed.
In order to reproduce, you'll need to set `maxSize` and keep pushing data in (my objects are ~16KB). I didn't set `max` at all, and there's no TTL. Run it for a while and profile the Node.js process; that's the simplest scenario I have.
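A minimal loop along those lines (a sketch, not my real job; the key names, value sizes, and interval are made up):

```js
const LRU = require('lru-cache')

const cache = new LRU({
  maxSize: 500 * 1024 * 1024,           // size cap only: no `max`, no `ttl`
  sizeCalculation: (v) => v.byteLength,
})

let i = 0
setInterval(() => {
  // Keep pushing ~16KB values; eviction should keep memory roughly flat.
  for (let n = 0; n < 1000; n++) {
    cache.set(`key-${i++}`, Buffer.alloc(16 * 1024))
  }
  const { rss } = process.memoryUsage()
  // `calculatedSize` plateaus at the cap, but rss keeps climbing.
  console.log(`calculatedSize=${cache.calculatedSize} rssMB=${(rss / 1e6).toFixed(1)}`)
}, 1000)
```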
Aha, yes, not setting a max is definitely the key component here. I can reproduce it.
In the meantime, you can work around this by setting a `max` value. There appears to be a bug in how it removes from the storage when it's using a plain unbounded `Array` in place of pre-allocated `ArrayBuffer` objects.
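That is, something like this (the `max` figure is only an example, sized for ~16KB entries under a 500MB cap):

```js
const LRU = require('lru-cache')

const cache = new LRU({
  max: 32768,                           // example bound: 500MB / 16KB entries
  maxSize: 500 * 1024 * 1024,
  sizeCalculation: (v) => v.byteLength,
})
```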
I'll set a max value in the meantime.
Published all backport patches and deprecated versions without this patch. Using `maxSize` is really unsafe at any speed with this issue. It's not as likely to be a problem if there's a `max` set, and not as big of a problem, but still a problem any time we prune based on `calculatedSize`.
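For anyone wondering what this failure class looks like, here is an illustration (not the library's actual code): with a plain unbounded `Array` as backing storage, "removing" an entry by recycling its index without clearing the slot keeps the evicted value reachable, so the GC can never reclaim it even though `calculatedSize` went down.

```js
// Illustration only: not lru-cache's implementation.
class LeakyIndexStore {
  constructor() {
    this.values = [] // plain unbounded Array as backing storage
    this.free = []   // recycled slot indexes
  }
  add(value) {
    const i = this.free.length ? this.free.pop() : this.values.length
    this.values[i] = value
    return i
  }
  remove(i) {
    this.free.push(i)
    // BUG: without `this.values[i] = undefined`, the "removed" value
    // is still referenced by the array and can never be collected.
  }
}
```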