Sorry for the delay. I took a look at this yesterday, and I agree with the conclusion.
What I think is happening is that the underlying map can grow beyond the specified max size. Because the cleaner runs in a background goroutine, the key=>value map can grow quite a bit larger than the maximum size: it's possible to insert faster than the cache can enforce the limit. And since Go maps don't release memory when entries are deleted, any spike in size sticks around. Bad key distribution can make this worse, since one bucket might grow larger than the others.
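To make the "any spike in size sticks around" part concrete, here is a small, self-contained sketch (not taken from ccache; sizes are arbitrary) showing that a Go map keeps the memory it claimed during an insert burst even after the extra entries are deleted:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// A map that overshoots its intended cap of 1M entries during a burst.
	m := make(map[int][128]byte)
	for i := 0; i < 2_000_000; i++ {
		m[i] = [128]byte{}
	}

	var atPeak runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&atPeak)

	// Prune back down to the cap, as a background cleaner eventually would.
	for i := 1_000_000; i < 2_000_000; i++ {
		delete(m, i)
	}

	var afterPrune runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&afterPrune)

	// The buckets allocated for the peak are still owned by the map, so
	// heap-in-use stays close to the peak even after half the entries are gone.
	fmt.Printf("entries left: %d, heap at peak: %d MiB, after prune: %d MiB\n",
		len(m), atPeak.HeapInuse>>20, afterPrune.HeapInuse>>20)
}
```

Running something like this, the heap-in-use figure after pruning stays close to the peak value, which matches the stickiness described above.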
I'm not positive this is the only issue though, since it doesn't seem to plateau (though the leak does seem to slow down, so maybe it eventually does).
ccache is pretty old. It would be interesting to revisit the algorithm and improve the performance, the memory footprint, and these kinds of issues. But there are a lot of newer and better alternatives available nowadays, so I'm not sure it's really worth it.
Thanks for looking into this and sharing your insights! Do you have any recommendations for alternative cache implementations that could serve well as a layered cache in place of CCache? I'd be interested in exploring options with a strong focus on memory efficiency and performance.
I don't have any recommendation for a cache that allows specifying a group key, sorry.
If you are interested in a memory-efficient cache in Go, let me recommend my https://github.com/phuslu/lru
It's the most minimal and memory-efficient implementation in Go.
PS: there is a Nim-lang port at https://github.com/status-im/nim-minilru, and it explains the reasoning behind the design.
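For reference, a minimal usage sketch; the constructor and method names (NewTTLCache, Set, Get) are my assumptions about the typical shape of a generic TTL-LRU API, so check the repository's README for the exact signatures:

```go
package main

import (
	"fmt"
	"time"

	"github.com/phuslu/lru"
)

func main() {
	// Assumed API: a fixed-capacity, TTL-aware generic cache.
	cache := lru.NewTTLCache[string, int](1_000_000)

	cache.Set("answer", 42, time.Minute)

	if v, ok := cache.Get("answer"); ok {
		fmt.Println("hit:", v)
	}
}
```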
During memory-usage profiling tests of both CCache V2 and V3, I observed memory usage growing beyond expected levels, even with the cache size capped at 1 million elements. This behavior does not occur with Ristretto, where memory usage remains stable once the cache size limit is reached.
The increasing memory usage suggests a potential memory leak in CCache. Below are the test results for both CCache versions and Ristretto for comparison.
Steps to Reproduce (a rough sketch of the test harness follows the list below):
CCache V2:
CCache V3:
Ristretto:
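The original snippets for each library are not reproduced here. As a hedged sketch of the kind of harness involved, assuming the CCache V3 generic API (Configure[T]().MaxSize, Set, ItemCount) and arbitrary key/value sizes rather than the exact test, it might look like:

```go
package main

import (
	"fmt"
	"runtime"
	"time"

	"github.com/karlseguin/ccache/v3"
)

func main() {
	// Cap the cache at 1M items, then keep inserting unique keys well past
	// that limit while periodically sampling heap usage.
	cache := ccache.New(ccache.Configure[[]byte]().MaxSize(1_000_000))

	var ms runtime.MemStats
	for i := 0; i < 10_000_000; i++ {
		cache.Set(fmt.Sprintf("key-%d", i), make([]byte, 64), time.Hour)

		if i%1_000_000 == 0 {
			runtime.GC()
			runtime.ReadMemStats(&ms)
			fmt.Printf("inserted=%d cached=%d heapInUse=%d MiB\n",
				i, cache.ItemCount(), ms.HeapInuse>>20)
		}
	}
}
```

If the cause is the sticky-map effect described in the comments above, the cached item count stays near 1 million while heap-in-use keeps climbing.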
While Ristretto’s memory usage remains capped when the cache reaches 1 million items, CCache V2 and V3 show significant memory growth even though the cache length is restricted to 1 million items.
Expected Behavior: Memory usage should stabilize once the cache reaches the set limit of 1 million items, similar to the behavior seen in Ristretto.
Actual Behavior: Memory usage continues to increase in CCache V2 and V3, suggesting a potential memory leak: items dropped from the cache may not be properly cleaned up, resulting in continuous growth. If any further tests or logs are needed, I would be happy to provide them.