karlseguin / ccache

A golang LRU Cache for high concurrency
MIT License

garbage collection "unbearable" for millions of entries with different entry sizes #31

Open hiqbn opened 5 years ago

hiqbn commented 5 years ago

The library is good and runs fairly well, but I looked at the code and realised there are roughly 350 bytes of overhead associated with each entry.

maybe you can look into https://github.com/allegro/bigcache

and see if you can reduce the overhead to something less GC-intensive. You run your own gc() inside the cache, which seems to be highly CPU-intensive: with a ~16GB data cache of around 1 million entries, it gets very slow at times with heavy GC processing, I think.

Any way to look into the code and speed it up with GC in mind, and reduce the 350-byte per-entry overhead further?
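For reference, the core idea bigcache uses to stay GC-friendly is to keep values in one contiguous byte slice and index it by key hash, so the Go GC has almost no pointers to scan. A simplified sketch of that layout (names and structure are illustrative, not bigcache's actual code; no eviction or collision handling):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// pointerLiteCache stores every value in a single []byte and keeps a
// map from key hash to offset. A map[uint64]int and one big []byte give
// the GC almost nothing to traverse, regardless of entry count.
type pointerLiteCache struct {
	index map[uint64]int // key hash -> offset into data
	data  []byte         // records appended as [4-byte length | payload]
}

func newPointerLiteCache() *pointerLiteCache {
	return &pointerLiteCache{index: make(map[uint64]int)}
}

func hashKey(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

// Set appends the value to the arena and records its offset.
func (c *pointerLiteCache) Set(key string, value []byte) {
	offset := len(c.data)
	var hdr [4]byte
	binary.LittleEndian.PutUint32(hdr[:], uint32(len(value)))
	c.data = append(c.data, hdr[:]...)
	c.data = append(c.data, value...)
	c.index[hashKey(key)] = offset
}

// Get reads the length header at the stored offset and returns the payload.
func (c *pointerLiteCache) Get(key string) ([]byte, bool) {
	offset, ok := c.index[hashKey(key)]
	if !ok {
		return nil, false
	}
	n := binary.LittleEndian.Uint32(c.data[offset : offset+4])
	start := offset + 4
	return c.data[start : start+int(n)], true
}

func main() {
	c := newPointerLiteCache()
	c.Set("user:1", []byte("alice"))
	if v, ok := c.Get("user:1"); ok {
		fmt.Println(string(v)) // prints "alice"
	}
}
```

The trade-off is that you lose per-entry features (TTLs, typed values, tracking), which is exactly the genericity ccache pays those 350 bytes for.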

hiqbn commented 5 years ago

Otherwise, any suggestion on how I can reduce the GC load by changing my code (which I don't think is the problem), or by patching your code manually?

hiqbn commented 5 years ago

I'm mostly using the LayeredCache feature.

karlseguin commented 5 years ago

Hmmm... Thanks. I'll have to take a look and think about it more. Off the top of my head though, the default itemsToPrune is 500, so a smaller value would obviously make each gc pass faster, but at the cost of the gc being called more often.
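To put rough numbers on that trade-off (a back-of-envelope model, not ccache's actual implementation): if every insert past capacity eventually has to be paid for by an eviction, the batch size only changes how the same total work is split up.

```go
package main

import "fmt"

// pruneCalls estimates how many gc passes are needed when `overflow`
// items are inserted past capacity and each pass evicts `itemsToPrune`
// entries. Illustrative model only; real pause behaviour also depends
// on lock contention and what each eviction has to touch.
func pruneCalls(overflow, itemsToPrune int) int {
	return (overflow + itemsToPrune - 1) / itemsToPrune
}

func main() {
	// For 1,000,000 overflow inserts: a batch of 500 means 2,000 passes;
	// a batch of 50 means 20,000 passes, each ~10x cheaper. Total eviction
	// work is the same, so tuning itemsToPrune mostly trades pause length
	// for pause frequency.
	fmt.Println(pruneCalls(1_000_000, 500)) // 2000
	fmt.Println(pruneCalls(1_000_000, 50))  // 20000
}
```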

As for the memory overhead... I did look at that before. I can look again, but what I concluded last time is that this is the cost of having a very generic cache. The Item struct is pretty fat because of features like tracking/refCounting, delayed promotions, per-item sizing, layered caching and so on. I'm not sure how to solve this without duplicating a ton of code and adding a TrackingCache, a TrackingLayeredCache and every possible permutation...