fastcache - fast thread-safe in-memory cache for a big number of entries in Go

Features

Benchmarks

Fastcache performance is compared with BigCache, standard Go map and sync.Map.

GOMAXPROCS=4 go test github.com/VictoriaMetrics/fastcache -bench='Set|Get' -benchtime=10s
goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/fastcache
BenchmarkBigCacheSet-4              2000      10566656 ns/op       6.20 MB/s     4660369 B/op          6 allocs/op
BenchmarkBigCacheGet-4              2000       6902694 ns/op       9.49 MB/s      684169 B/op     131076 allocs/op
BenchmarkBigCacheSetGet-4           1000      17579118 ns/op       7.46 MB/s     5046744 B/op     131083 allocs/op
BenchmarkCacheSet-4                 5000       3808874 ns/op      17.21 MB/s        1142 B/op          2 allocs/op
BenchmarkCacheGet-4                 5000       3293849 ns/op      19.90 MB/s        1140 B/op          2 allocs/op
BenchmarkCacheSetGet-4              2000       8456061 ns/op      15.50 MB/s        2857 B/op          5 allocs/op
BenchmarkStdMapSet-4                2000      10559382 ns/op       6.21 MB/s      268413 B/op      65537 allocs/op
BenchmarkStdMapGet-4                5000       2687404 ns/op      24.39 MB/s        2558 B/op         13 allocs/op
BenchmarkStdMapSetGet-4              100     154641257 ns/op       0.85 MB/s      387405 B/op      65558 allocs/op
BenchmarkSyncMapSet-4                500      24703219 ns/op       2.65 MB/s     3426543 B/op     262411 allocs/op
BenchmarkSyncMapGet-4               5000       2265892 ns/op      28.92 MB/s        2545 B/op         79 allocs/op
BenchmarkSyncMapSetGet-4            1000      14595535 ns/op       8.98 MB/s     3417190 B/op     262277 allocs/op

The MB/s column here actually means millions of operations per second. As you can see, fastcache is faster than BigCache in all cases. fastcache is also faster than the standard Go map and sync.Map on workloads with inserts.
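For reference, the fastcache API exercised in these benchmarks is just a handful of byte-slice operations. A minimal usage sketch (the 32MB capacity below is an arbitrary example value):

package main

import (
	"fmt"

	"github.com/VictoriaMetrics/fastcache"
)

func main() {
	// Create a cache limited to roughly 32MB (arbitrary example value).
	c := fastcache.New(32 * 1024 * 1024)

	// Set stores a copy of the (key, value) byte slices in the cache.
	c.Set([]byte("key"), []byte("value"))

	// Get appends the cached value to dst (nil here) and returns the result.
	v := c.Get(nil, []byte("key"))
	fmt.Printf("key=%s\n", v)

	// Reset removes all entries from the cache.
	c.Reset()
}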

Limitations

Architecture details

The cache uses ideas from BigCache:

64KB-sized chunks reduce memory fragmentation and the total memory usage compared to a single big chunk per bucket. Chunks are allocated off-heap if possible. This reduces total memory usage because GC collects unused memory more frequently without the need for GOGC tweaking.
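A minimal sketch of the off-heap allocation idea on Linux, using an anonymous mmap. This is only an illustration of memory the Go garbage collector never scans, not the library's actual allocator:

package main

import (
	"fmt"
	"syscall"
)

const chunkSize = 64 * 1024 // 64KB chunks, as described above

// offHeapChunk allocates a chunk outside the Go heap via an anonymous mmap,
// so the garbage collector does not scan or account for it.
func offHeapChunk() ([]byte, error) {
	return syscall.Mmap(-1, 0, chunkSize,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
}

func main() {
	chunk, err := offHeapChunk()
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(chunk)

	// Encoded (key, value) entries would be written into the chunk here.
	copy(chunk, []byte("encoded entries go here"))
	fmt.Println(len(chunk)) // 65536
}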

Users

FAQ

What is the difference between fastcache and other similar caches like BigCache or FreeCache?

Why doesn't fastcache support cache expiration?

Because we don't need cache expiration in VictoriaMetrics. Cached entries inside VictoriaMetrics never expire. They are automatically evicted on cache size overflow.

It is easy to implement cache expiration on top of fastcache by caching values with marshaled deadlines and verifying deadlines after reading these values from the cache.
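A minimal sketch of that approach, where an 8-byte deadline is prepended to each value. The setWithTTL and getWithTTL helpers are hypothetical names, not part of the fastcache API:

package ttlcache

import (
	"encoding/binary"
	"time"

	"github.com/VictoriaMetrics/fastcache"
)

// setWithTTL prepends an expiration deadline (unix nanoseconds)
// to the value before storing it in the cache.
func setWithTTL(c *fastcache.Cache, k, v []byte, ttl time.Duration) {
	deadline := time.Now().Add(ttl).UnixNano()
	buf := make([]byte, 8+len(v))
	binary.LittleEndian.PutUint64(buf[:8], uint64(deadline))
	copy(buf[8:], v)
	c.Set(k, buf)
}

// getWithTTL reads the entry back and treats it as missing once the
// marshaled deadline has passed.
func getWithTTL(c *fastcache.Cache, k []byte) ([]byte, bool) {
	buf := c.Get(nil, k)
	if len(buf) < 8 {
		return nil, false
	}
	deadline := int64(binary.LittleEndian.Uint64(buf[:8]))
	if time.Now().UnixNano() > deadline {
		return nil, false
	}
	return buf[8:], true
}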

Why doesn't fastcache support advanced features such as thundering herd protection or callbacks on entry eviction?

Because these features would complicate the code and make it slower. The fastcache source code is simple - just copy it and implement the feature you want on top of it.