Matias-Barrios opened this issue 1 year ago
Could you share your config and example payload to help reproduce it?
Sure. Here is my constructor:
func NewBigCache(config *config.Config, datadog dd.ClientInterface) IBigCache {
	bcconfig := bigcache.Config{
		Shards:             1024,
		LifeWindow:         300 * time.Second,
		CleanWindow:        5 * time.Minute,
		MaxEntriesInWindow: 1000 * 10 * 60,
		MaxEntrySize:       500,
		Verbose:            true,
		HardMaxCacheSize:   config.BigCache.MaxMB, // 600MB
		OnRemove:           nil,
		OnRemoveWithReason: nil,
	}
	cache, initErr := bigcache.NewBigCache(bcconfig)
	if initErr != nil {
		log.Fatal().Msgf("error bigcache init: %s", initErr.Error())
	}
	go func() {
		for {
			time.Sleep(5 * time.Second)
			datadog.Histogram("bigcache_keys", float64(cache.Len()), []string{}, 1)
			datadog.Histogram("bigcache_hits", float64(cache.Stats().Hits), []string{}, 1)
			datadog.Histogram("bigcache_misses", float64(cache.Stats().Misses), []string{}, 1)
		}
	}()
	return BigCache{
		cache:   cache,
		enabled: config.BigCache.Enabled, // true
		datadog: datadog,
	}
}
And here is getting a key:
func (b BigCache) GetVendors(ctx context.Context, vendors []string) ([]models.Vendor, []string) {
	span, newCtx := tracer.StartSpanFromContext(ctx, "bigcache.GetVendors", tracer.ResourceName("bigcache"))
	ctx = newCtx
	defer span.Finish()
	if !b.enabled {
		return []models.Vendor{}, vendors
	}
	leftOver := make([]string, 0, len(vendors))
	out := make([]models.Vendor, 0, len(vendors))
	for _, v := range vendors {
		key := b.GetKeyName(v)
		value, err := b.cache.Get(key)
		if err != nil {
			leftOver = append(leftOver, v)
			continue
		}
		tmp := models.Vendor{}
		err = json.Unmarshal(value, &tmp)
		if err != nil {
			log.Ctx(ctx).Error().Msgf("error while unmarshaling key %s from BigCache: %s", key, err.Error())
			leftOver = append(leftOver, v)
			continue
		}
		out = append(out, tmp)
	}
	return out, leftOver
}
OK, this looks like a bug. If I understand correctly, bigcache reallocates the array every ~1 ms, but instead of growing it (which would be expected) it allocates the same size again. What is not clear to me is HardMaxCacheSize:
what exact value is passed here? Maybe it's a typo, and instead of 600MB we have 614400,
which would explain why it does this. If that's the case, then we should check whether the array is really growing and, if not, skip the allocation and not print the confusing message.
if q.capacity == q.maxCapacity {
	return
}
I am passing the number 600 in that field. It was meant to make the cache use 600MB of RAM. Isn't it a plain int of megabytes that I have to provide there?
What is the issue you are having?
I have deployed version 3.0.2, and I thought this issue: https://github.com/allegro/bigcache/issues/117 meant the problem was solved. Instead, I see a constant flow of the messages below in my logs. Can you help me understand how to fix this?
What is BigCache doing that it shouldn't?
It should not log this every few milliseconds:
Environment:
OS (e.g. from /etc/os-release or winver.exe): alpine3.15