Open AlexBlack772 opened 3 years ago
Hello. Thanks for submitting this bug. I slightly modified your example to run pprof on it; the results are below.
Showing nodes accounting for 270.35MB, 99.82% of 270.85MB total
Dropped 7 nodes (cum <= 1.35MB)
Showing top 10 nodes out of 17
flat | flat% | sum% | cum | cum% | function |
---|---|---|---|---|---|
135.54MB | 50.04% | 50.04% | 202.56MB | 74.79% | github.com/allegro/bigcache/v3.initNewShard |
67.02MB | 24.74% | 74.79% | 67.02MB | 24.74% | github.com/allegro/bigcache/v3/queue.NewBytesQueue (inline) |
65.29MB | 24.10% | 98.89% | 65.29MB | 24.10% | github.com/allegro/bigcache/v3.wrapEntry |
2.50MB | 0.92% | 99.82% | 2.50MB | 0.92% | runtime.allocm |
0 | 0% | 99.82% | 65.29MB | 24.10% | github.com/allegro/bigcache/v3.(*BigCache).Set |
0 | 0% | 99.82% | 65.29MB | 24.10% | github.com/allegro/bigcache/v3.(*cacheShard).set |
0 | 0% | 99.82% | 202.56MB | 74.79% | github.com/allegro/bigcache/v3.NewBigCache (inline) |
0 | 0% | 99.82% | 202.56MB | 74.79% | github.com/allegro/bigcache/v3.newBigCache |
0 | 0% | 99.82% | 267.84MB | 98.89% | main.main |
0 | 0% | 99.82% | 267.84MB | 98.89% | runtime.main |
As you can see, the underlying bytes queue is correctly allocated; the additional memory comes from the Set
function, where we wrap each entry, and from the 1024 shards that were created.
I understand that the documentation might be misleading. Would you like to fix it? We could add information that this limit applies only to the bytes queue (not the shards) and that real memory usage depends on the number of shards, the number of entries, and HardMaxCacheSize.
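To make that dependency concrete, here is a back-of-the-envelope estimator. This is only my own rough model, not the library's actual allocation logic: it assumes the bytes queues can grow up to HardMaxCacheSize in total, uses the ~2×(64+32) bits-per-entry index cost discussed in this thread, and invents a fixed 1 KB of bookkeeping per shard for illustration.

```go
package main

import "fmt"

// estimateBytes is a hypothetical sketch of BigCache memory use,
// not the library's internals: queue cap + shard index maps + an
// assumed fixed per-shard overhead.
func estimateBytes(hardMaxCacheSizeMB, shards, entries int) int {
	queue := hardMaxCacheSizeMB << 20    // upper bound across all shards' BytesQueues
	index := entries * 2 * (64 + 32) / 8 // map[uint64]uint32 cost in bytes, map overhead ignored
	perShard := shards * 1024            // assumed 1 KB of fixed bookkeeping per shard
	return queue + index + perShard
}

func main() {
	// 64 MB cap, 1024 shards, 5000 entries, as in the report.
	fmt.Println(estimateBytes(64, 1024, 5000))
}
```

Even this crude model shows that the floor is well above HardMaxCacheSize once the shard count is large.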
Thank you for the explanation and the additional tests.
It seems there is a dependency between cache size and shard count. Could anybody clarify how to predict memory consumption for different shard counts?
Probably the easiest option would be to create some tests for it. Here are some hints about where memory is allocated:
HardMaxCacheSize
- limits the memory used by the BytesQueue

Shards
- every shard consumes additional memory for its map of keys and statistics (map[uint64]uint32); the size of this map equals the number of entries in the cache, roughly 2×(64+32)×n bits plus the overhead of the map itself.

@janisz https://github.com/allegro/bigcache/pull/298 I modified the documentation. Could you review the change?
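As a worked instance of that 2×(64+32)×n bits estimate (ignoring the map's own bucket overhead, which Go does not expose directly), a small calculation:

```go
package main

import "fmt"

// indexBits evaluates the per-entry index estimate from the hint above:
// a map[uint64]uint32 with n entries costs roughly 2*(64+32) bits per
// entry, not counting the map's internal bucket overhead.
func indexBits(n int) int {
	return 2 * (64 + 32) * n
}

func main() {
	n := 1_000_000
	bits := indexBits(n)
	fmt.Printf("%d entries ≈ %d bits ≈ %.1f MB\n", n, bits, float64(bits)/8/(1<<20))
}
```

So a million cached entries cost on the order of tens of megabytes of index alone, independent of HardMaxCacheSize.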
What is the issue you are having? Despite the maximum cache size being limited to 64 megabytes, the program consumes considerably more memory.
I've set HardMaxCacheSize to 64 MB (see the code below), but after the program ran, vmRSS was much higher than 64 MB. In a real setup I've seen the cache grow past 5 GB, and the program was killed by the OOM killer.
| keys count | vmRSS pre run | vmRSS post run |
|---|---|---|
| 1 | 19 890 176 | 20 430 848 |
| 100 | 19 890 176 | 74 498 048 |
| 1000 | 19 886 080 | 385 257 472 |
| 5000 | 19 894 272 | 556 134 400 |
What is BigCache doing that it shouldn't? It consumes far more memory than the configured HardMaxCacheSize.
Minimal, Complete, and Verifiable Example
Environment: go version go1.17 linux/amd64