dgryski / go-tinylfu

TinyLFU cache admission policy
MIT License

cache line optimized CountMin4 #6

Open ben-manes opened 2 years ago

ben-manes commented 2 years ago

The original frequency sketch in Caffeine would distribute the counters uniformly across the table. This had the benefit of minimizing hash collisions, which would otherwise increase the error rate of the popularity estimates. However, it also meant that each counter was at an unpredictable location and very likely to cause a hardware cache miss. This was somewhat mitigated by the maintenance performing a batch of work, so temporal locality for popular items may have helped.
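To illustrate the cost described above, here is a minimal sketch of uniform counter placement: each of a key's four 4-bit counters is rehashed independently anywhere in the table, so the four reads typically land on four different cache lines. The multiplier constants and function name are illustrative assumptions, not the actual Caffeine or go-tinylfu code.

```go
package main

import "fmt"

// counterIndexes picks four independent, uniformly distributed slots for
// a key's counters. Each rehash uses a different odd multiplier (assumed
// constants, for illustration only), so the resulting indexes are spread
// across the whole table and usually hit four distinct cache lines.
func counterIndexes(hash uint64, tableLen int) [4]int {
	var idx [4]int
	for i := 0; i < 4; i++ {
		h := (hash + uint64(i)*0x9e3779b97f4a7c15) * 0xc6a4a7935bd1e995
		idx[i] = int(h % uint64(tableLen))
	}
	return idx
}

func main() {
	// The four slots for one key are scattered across the table.
	fmt.Println(counterIndexes(12345, 1024))
}
```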

The improved sketch uniformly selects a 64-byte block in which all of an item's counters reside. This matches the cache line size, so only one memory access is needed. The counters are then selected from 16-byte segments of that block using a secondary hash. In my benchmarks this improves the frequency estimation and increment throughput by up to 2.5x. The eviction throughput is 17% faster, but the effect on get/put is harder to observe since the maintenance is non-blocking (observable on a low-core CI, less so elsewhere).
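A minimal Go sketch of this block-based layout, modeled on Caffeine's approach: the table is treated as 64-byte blocks of eight uint64 words, one hash picks the block, and a secondary hash places each of the four 4-bit counters in a distinct 16-byte segment (two words) of that block. The type, method names, and bit layout here are illustrative assumptions, not the actual go-tinylfu or Caffeine code.

```go
package main

import "fmt"

// blockSketch stores 4-bit counters in 64-byte blocks so that all four
// counters for a key share one cache line.
type blockSketch struct {
	table     []uint64 // each run of 8 words is one 64-byte block
	blockMask uint64   // number of blocks - 1 (power of two)
}

func newBlockSketch(blocks int) *blockSketch {
	// blocks must be a power of two for the mask to work.
	return &blockSketch{
		table:     make([]uint64, blocks*8),
		blockMask: uint64(blocks - 1),
	}
}

// slots derives the four counter locations: blockHash selects the block,
// and each byte of counterHash places counter i within segment i (two
// uint64 words, i.e. 16 bytes / 32 nibbles).
func (s *blockSketch) slots(blockHash, counterHash uint64) (idx [4]int, shift [4]uint) {
	block := int(blockHash&s.blockMask) << 3
	for i := 0; i < 4; i++ {
		h := counterHash >> (uint(i) * 8)
		offset := int(h & 1)          // which word of the 2-word segment
		nibble := uint((h >> 1) & 15) // which 4-bit counter in the word
		idx[i] = block + (i << 1) + offset
		shift[i] = nibble << 2
	}
	return
}

// increment bumps all four counters, saturating at 15.
func (s *blockSketch) increment(blockHash, counterHash uint64) {
	idx, shift := s.slots(blockHash, counterHash)
	for i := 0; i < 4; i++ {
		if (s.table[idx[i]]>>shift[i])&0xf < 15 {
			s.table[idx[i]] += 1 << shift[i]
		}
	}
}

// estimate returns the minimum of the four counters.
func (s *blockSketch) estimate(blockHash, counterHash uint64) uint64 {
	idx, shift := s.slots(blockHash, counterHash)
	min := uint64(15)
	for i := 0; i < 4; i++ {
		if c := (s.table[idx[i]] >> shift[i]) & 0xf; c < min {
			min = c
		}
	}
	return min
}

func main() {
	s := newBlockSketch(64)
	for i := 0; i < 3; i++ {
		s.increment(0xdeadbeef, 0xcafebabe)
	}
	fmt.Println(s.estimate(0xdeadbeef, 0xcafebabe)) // 3
}
```

Because segment i only spans words block+2i and block+2i+1, all four reads and writes stay inside one 64-byte block, which is what makes a single cache line fetch sufficient.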

Java does not yet support SIMD instructions outside of a preview API, but those could provide an additional speedup. A .NET developer ported an earlier prototype (https://github.com/bitfaster/BitFaster.Caching/pull/271) and the operation times were cut in half.

This is just for fun, so feel free to close if not interested. It is unlikely that users will observe a speedup, but it was a fun optimization that I forgot to use.