Kaushal28 opened 1 year ago
You need to specify HardMaxCacheSize
in order to keep memory limited. Without it the cache will only grow.
https://github.com/allegro/bigcache/blob/caa34156bfeab3aa547e18c21292736bbd2ae2c2/config.go#L25-L32
https://github.com/allegro/bigcache/blob/caa34156bfeab3aa547e18c21292736bbd2ae2c2/config.go#L57-L67
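For reference, a minimal configuration sketch (assuming the v2 API; the 256 MB limit is just an example value, not a recommendation) would be:

```go
package main

import (
	"log"
	"time"

	"github.com/allegro/bigcache/v2"
)

func main() {
	config := bigcache.DefaultConfig(10 * time.Minute)
	// HardMaxCacheSize is expressed in MB. Once the underlying byte
	// buffers reach this size, the oldest entries are evicted instead
	// of the buffers growing further.
	config.HardMaxCacheSize = 256

	cache, err := bigcache.NewBigCache(config)
	if err != nil {
		log.Fatal(err)
	}
	defer cache.Close()
}
```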
@janisz, but why will the cache only grow even if the number of key/value pairs is constant the whole time and the content (both keys and values) that I set again after removing is exactly the same? I understand that the library itself may need some additional space for internal handling (the overhead mentioned in the comments on HardMaxCacheSize
above), but shouldn't it remain almost constant if the key/values are the same over time? Why am I seeing GBs of fluctuation as I do more and more Set() and Delete() calls?
I also tried the same script above with various other Go in-memory caching libraries, like go-cache and ttl-cache, and even native Go maps. None of them show this much fluctuation, and the total memory footprint is much higher in bigcache as well. So I was wondering if there is any specific reason for that?
I could find many other similar open issues describing almost the same problem:
https://github.com/allegro/bigcache/issues/311 https://github.com/allegro/bigcache/issues/353 https://github.com/allegro/bigcache/issues/309 https://github.com/allegro/bigcache/issues/109
From all of these, it seems that overwriting an existing key with the same value, or deleting a key, does not actually free the existing value; new content keeps being appended to the cache, and this is intentional. So we must set a hard limit on the cache size via HardMaxCacheSize. Is that right? If what I mentioned is correct and that's intentional, may I know the exact reason behind this design choice? Thanks.
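As I understand it, the design trades space for GC friendliness: entries are serialized into a few large byte slices indexed by a pointer-free map, so the garbage collector has almost nothing to traverse, and a delete can only drop the index entry, leaving the stale bytes in place until eviction reaches them. A simplified sketch of that pattern (this is illustrative only, not bigcache's actual code):

```go
package main

import "fmt"

// miniCache illustrates an append-only byte queue with a hash index,
// the pattern bigcache is built around: the GC only sees one big
// []byte and a small map, so marking is cheap, but Delete merely
// drops the index entry and the stale bytes stay in the buffer.
type miniCache struct {
	buf   []byte         // serialized entries, only ever appended to
	index map[string]int // key -> offset of its latest entry in buf
}

func newMiniCache() *miniCache {
	return &miniCache{index: make(map[string]int)}
}

func (c *miniCache) Set(key string, val []byte) {
	// Append a fresh copy even if the key already exists: the old
	// bytes are orphaned, not overwritten in place.
	c.index[key] = len(c.buf)
	c.buf = append(c.buf, val...)
}

func (c *miniCache) Delete(key string) {
	// Only the index entry goes away; buf keeps the dead bytes.
	delete(c.index, key)
}

func main() {
	c := newMiniCache()
	for i := 0; i < 5; i++ {
		c.Set("k", []byte("same 10-byte value!"))
		c.Delete("k")
	}
	// live keys: 0, buffer bytes: 95 (five stale copies of a 19-byte value)
	fmt.Println("live keys:", len(c.index), "buffer bytes:", len(c.buf))
}
```

This is why repeated Delete/Set rounds on the same keys can still grow memory: each Set appends, and the space behind deleted or overwritten entries is only reclaimed by eviction, not by Delete itself.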
@janisz
I initially set 10k keys and then, multiple times, I cleared half of them and set them again. In total the number of keys is constant (10k) the whole time, and all the key names and values are exactly the same. I just Delete() and Set() the same keys multiple times on the same bigcache instance. When I did this 5 times, the memory profile (pprof) showed a total of ~396MB allocated at the end. But when I did the same 25 times, it showed ~2.26GB still allocated. Since I am clearing and then re-setting the key/value pairs, shouldn't the memory utilization be the same regardless of how many times I do it, as long as the key/value content and the number of key/values are exactly the same?
Why does memory utilization increase as I do more Delete and Set calls? Here is the script that I used, for better understanding and reproducibility:
Note that I am using github.com/allegro/bigcache/v2 v2.2.5 and this profiling is from http://localhost:6060/debug/pprof/heap.
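(The original script isn't reproduced above; a rough sketch of the described workload, assuming the v2 API, illustrative entry sizes, and pprof wired up on port 6060, might look like the following.)

```go
package main

import (
	"fmt"
	"net/http"
	_ "net/http/pprof" // exposes /debug/pprof/heap on the default mux
	"time"

	"github.com/allegro/bigcache/v2"
)

func main() {
	go http.ListenAndServe("localhost:6060", nil)

	cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

	value := make([]byte, 1024) // 1KB per entry, illustrative size
	rounds := 25                // compare the heap profile with rounds = 5

	// Seed 10k keys.
	for i := 0; i < 10000; i++ {
		cache.Set(fmt.Sprintf("key-%d", i), value)
	}
	for r := 0; r < rounds; r++ {
		// Clear half the keys, then set the exact same keys/values again.
		for i := 0; i < 5000; i++ {
			cache.Delete(fmt.Sprintf("key-%d", i))
		}
		for i := 0; i < 5000; i++ {
			cache.Set(fmt.Sprintf("key-%d", i), value)
		}
	}

	fmt.Println("done; inspect http://localhost:6060/debug/pprof/heap")
	select {} // keep the process alive so the profile can be fetched
}
```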
Here is the mem profile result when I did it 25 times:
Here is the mem profile result when I did it 5 times:
Notice the difference in memory utilization above. Is that a sign of a memory leak? Please help.
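If it helps narrow this down, the two snapshots can be diffed directly with pprof's `-base` flag (assuming each heap profile was saved to a file after its run):

```shell
# Save a heap snapshot after 5 rounds, then another after 25 rounds:
curl -s http://localhost:6060/debug/pprof/heap > heap5.pb.gz
curl -s http://localhost:6060/debug/pprof/heap > heap25.pb.gz

# Show only the allocations that differ between the two profiles:
go tool pprof -top -base heap5.pb.gz heap25.pb.gz
```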