Open TeqGin opened 1 year ago
When I import "github.com/karlseguin/ccache/v3" and create a cache with a max size of 3, then add a 4th element, the cache still contains all four elements. Does Set() not run the LRU eviction? Under LRU, the first element should be deleted when the 4th element is added, right? Is there any limit on this cache's minimum size?

It'll evict the data asynchronously. Every cache instance has a "worker" goroutine which does all of the housecleaning. This way, the main get/set path doesn't have to lock the linked list.

To be clear, the cleanup isn't periodic (e.g. every 5 seconds or something); it's triggered by the 4th put, but there's no limit on how large the cache could grow before things are actually evicted.
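A minimal sketch of the behavior described above, against the v3 API (ItemsToPrune(1) is my addition so the worker evicts a single item per pass; the sleep only gives the worker goroutine a chance to run and isn't a real synchronization point):

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache/v3"
)

func main() {
	// MaxSize(3): eviction happens on a background worker goroutine,
	// so the item count can briefly exceed 3 right after a Set.
	cache := ccache.New(ccache.Configure[string]().
		MaxSize(3).
		ItemsToPrune(1)) // evict one item per cleanup pass

	for i := 1; i <= 4; i++ {
		cache.Set(fmt.Sprintf("key-%d", i), "value", time.Minute)
	}

	// Immediately after the 4th Set, all four items may still be present.
	fmt.Println("right after set:", cache.ItemCount())

	// Give the worker a moment to prune the least-recently-used item.
	time.Sleep(100 * time.Millisecond)
	fmt.Println("after worker runs:", cache.ItemCount())
}
```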
Thank you, I got it. But doesn't this design make MaxSize() a little confusing? If I set the max size to n, I really don't want the cache's capacity to exceed n. So in this situation, say the cache's capacity starts at 1000 and someone adds a 1001st element: will the capacity grow to 1500? And after the "worker" goroutine finishes its job, will it shrink back to 1000? In my case I don't want the cache to grow too much; ideally it stays at my expected size.
Using buffered channels to limit contention on the hot path is fundamental to this library.

However, are you able to look at the new setable_buffer branch? It adds a new configuration option, SetableBuffer(int). If you set this to 0, then the max size should be MaxSize + 1.

This could have been achieved by setting the existing PromoteBuffer(int) configuration option to 0, but that could have significantly hurt the performance of Gets. So I created a new channel for sets, which behaves like the existing promotable buffer, but which can have its own separate capacity.

Note that setting SetableBuffer(0) will slow down writes even if the cache has room. Personally, I'd recommend at least a small buffer and decreasing your MaxSize accordingly.
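Going by that description, configuring the branch might look roughly like this (SetableBuffer(int) exists only on the setable_buffer branch, so this is a sketch of the proposed option, not released API):

```go
// Hard cap: with an unbuffered set channel, occupancy tops out at
// MaxSize + 1, but every Set blocks until the worker drains it.
strict := ccache.New(ccache.Configure[string]().
	MaxSize(1000).
	SetableBuffer(0))

// Recommended trade-off: keep a small buffer and shrink MaxSize so the
// worst case (MaxSize plus in-flight buffered sets) stays within budget.
relaxed := ccache.New(ccache.Configure[string]().
	MaxSize(990).
	SetableBuffer(10))

_, _ = strict, relaxed
```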
Amazing work, thank you very much. Yes, I will take a look at the setable_buffer branch.