## Purpose

The results of the #7 benchmark for the `WorkerPool` and `Cache` implementations showed a significant performance improvement when using [Otter](https://maypok86.github.io/otter/) over a standard LRU cache implementation. This PR gives users the option of using either the Mutex or the Otter cache implementation.
## Implementation
- Removed the `WorkerPool` implementation, as it showed the worst performance.
- Introduced `CacheManager`, which takes a similar role to the `WorkerPool` and provides an abstraction point for possible future management of cache types (a possible shape is sketched below).
- Renamed `LRUCacheCollector` to `CacheCollector`.
- Fixed some linting issues.
- `algorithms.go` functions now lock a rate limit before modifying the `CacheItem`. This avoids race conditions created when using a lock-free cache like Otter (see the locking sketch below).
- Moved cache expiration out of the cache and into `algorithms.go`. This reduces the garbage collection burden by no longer dropping expired items from the cache. Now, if an item is expired, it remains in the cache until a normal cache sweep clears it or it is accessed again; if it is accessed again, the existing item is updated and given a new expiration time (see the lazy-expiration sketch below).
- Introduced the `rateContext` struct, which encapsulates all the state that must pass between several functions in `algorithms.go`.
- The major functions in `algorithms.go` now call themselves recursively in order to retry when a race condition occurs. Race conditions can occur when using lock-free data structures like Otter; when one happens, we simply retry the method by calling it recursively. This is a common pattern, often used by Prometheus metrics (see the retry sketch below).
- Switched benchmarks to use `b.RunParallel()` when performing concurrent benchmarks (see the benchmark sketch below).
- Added `TestHighContentionFromStore()` to trigger race conditions in `algorithms.go`, which also increases code coverage (a similar test is sketched below).
- Removed the direct dependence upon Prometheus from Otter and `LRUCache` (fixes a flapping test).
- Added `GUBER_CACHE_PROVIDER`, which defaults to `otter` (see the provider sketch below).
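To illustrate the abstraction point, here is a minimal sketch of what a provider-agnostic cache interface could look like. The `Cache` method set, `NewCacheFromProvider`, the `default-lru` name, and the mutex-map stub are assumptions made for this description, not the PR's exact API; the `otter` case is stubbed rather than wrapping the real library.

```go
package gubernator

import (
	"fmt"
	"sync"
)

// CacheItem is a minimal stand-in for the real item type; in this PR
// the item carries its own mutex so algorithms.go can lock a rate
// limit before mutating it.
type CacheItem struct {
	mutex    sync.Mutex // guards Value and ExpireAt
	Key      string
	Value    any
	ExpireAt int64 // unix milliseconds
}

// Cache is an illustrative provider interface; the PR's actual method
// set may differ. Add returns false when an item for the key already
// exists, which lets callers detect insert races.
type Cache interface {
	Add(item *CacheItem) bool
	GetItem(key string) (*CacheItem, bool)
	Remove(key string)
}

// NewCacheFromProvider is a hypothetical constructor illustrating the
// abstraction point: callers depend on Cache, never on a concrete type.
func NewCacheFromProvider(provider string, size int) (Cache, error) {
	switch provider {
	case "otter":
		// Would wrap maypok86/otter; stubbed so the sketch runs.
		return newMutexCache(size), nil
	case "default-lru": // illustrative name for the mutex LRU provider
		return newMutexCache(size), nil
	default:
		return nil, fmt.Errorf("unknown cache provider: %q", provider)
	}
}

// mutexCache is a trivial mutex-guarded map standing in for the real
// implementations so these sketches stay self-contained.
type mutexCache struct {
	mu    sync.Mutex
	items map[string]*CacheItem
}

func newMutexCache(size int) *mutexCache {
	return &mutexCache{items: make(map[string]*CacheItem, size)}
}

func (c *mutexCache) Add(item *CacheItem) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.items[item.Key]; ok {
		return false
	}
	c.items[item.Key] = item
	return true
}

func (c *mutexCache) GetItem(key string) (*CacheItem, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	item, ok := c.items[key]
	return item, ok
}

func (c *mutexCache) Remove(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.items, key)
}
```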
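The locking rule, reusing the `CacheItem` stub above; `consumeHits` and `tokenBucketState` are hypothetical names, not the PR's identifiers.

```go
package gubernator

// tokenBucketState is an illustrative stand-in for the bucket state
// stored in CacheItem.Value.
type tokenBucketState struct {
	Remaining int64
}

// consumeHits shows the rule adopted in algorithms.go: take the item's
// own lock before touching its state. A lock-free cache like Otter can
// hand the same *CacheItem to concurrent goroutines, so an unguarded
// read-modify-write here would be a data race.
func consumeHits(item *CacheItem, hits int64) {
	item.mutex.Lock()
	defer item.mutex.Unlock()
	bucket, ok := item.Value.(*tokenBucketState)
	if !ok {
		bucket = &tokenBucketState{}
		item.Value = bucket
	}
	bucket.Remaining -= hits
}
```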
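A sketch of the lazy expiration, again using the stub types; `getOrRefresh` and `resetBucketState` are illustrative, not the PR's function names.

```go
package gubernator

import "time"

// getOrRefresh resolves an item, treating expiration as a lazy,
// access-time concern. An expired item is reset in place and given a
// new expiration instead of being deleted and reallocated, which
// reduces GC pressure.
func getOrRefresh(c Cache, key string, duration int64) *CacheItem {
	now := time.Now().UnixMilli()
	item, ok := c.GetItem(key)
	if !ok {
		item = &CacheItem{Key: key, ExpireAt: now + duration}
		if !c.Add(item) {
			// Lost an insert race; use the winner's item.
			item, _ = c.GetItem(key)
		}
		return item
	}
	item.mutex.Lock()
	defer item.mutex.Unlock()
	if item.ExpireAt <= now {
		// Expired: reuse the existing allocation with fresh state
		// rather than dropping it from the cache.
		resetBucketState(item)
		item.ExpireAt = now + duration
	}
	return item
}

// resetBucketState would reinitialize the bucket counters (not shown).
func resetBucketState(item *CacheItem) { item.Value = nil }
```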
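A sketch of `rateContext` and the recursive retry, with illustrative fields and stand-in request/response types; it shows the shape of the pattern, not the PR's exact logic.

```go
package gubernator

// rateContext bundles the state handed between the functions in
// algorithms.go; the exact fields in the PR may differ.
type rateContext struct {
	Request   *RateLimitRequest
	CacheItem *CacheItem
	Cache     Cache
}

// Minimal stand-ins for the request/response types.
type RateLimitRequest struct {
	Key  string
	Hits int64
}
type RateLimitResponse struct {
	Remaining int64
}

// tokenBucket retries itself recursively when it loses a race on the
// lock-free cache, the same shape as the CAS retry loops used by the
// Prometheus client metrics.
func tokenBucket(ctx rateContext) (*RateLimitResponse, error) {
	if ctx.CacheItem == nil {
		item, ok := ctx.Cache.GetItem(ctx.Request.Key)
		if !ok {
			item = &CacheItem{Key: ctx.Request.Key}
			if !ctx.Cache.Add(item) {
				// Another goroutine inserted this key between our
				// lookup and our insert: retry from the top.
				return tokenBucket(ctx)
			}
		}
		ctx.CacheItem = item
	}

	ctx.CacheItem.mutex.Lock()
	// The item can be evicted or replaced between lookup and lock; if
	// our copy is stale, unlock and retry with a fresh read.
	if current, ok := ctx.Cache.GetItem(ctx.Request.Key); !ok || current != ctx.CacheItem {
		ctx.CacheItem.mutex.Unlock()
		ctx.CacheItem = nil
		return tokenBucket(ctx)
	}
	defer ctx.CacheItem.mutex.Unlock()

	// ... token bucket accounting against ctx.CacheItem goes here ...
	return &RateLimitResponse{}, nil
}
```

The recursion depth is bounded by how often the same key is concurrently replaced, so under normal load the retry terminates after at most a few attempts.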
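The `b.RunParallel()` benchmark style, using the stub cache above; the benchmark name and workload are invented for illustration. `RunParallel` runs the body from GOMAXPROCS goroutines, so the cache is measured under genuine contention rather than a serial loop.

```go
package gubernator

import (
	"math/rand"
	"strconv"
	"testing"
)

// BenchmarkCacheConcurrent exercises the cache from multiple
// goroutines at once via b.RunParallel.
func BenchmarkCacheConcurrent(b *testing.B) {
	cache, err := NewCacheFromProvider("otter", 10_000)
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			// Mixed hit/miss workload over a small key space.
			key := strconv.Itoa(rand.Intn(1_000))
			if _, ok := cache.GetItem(key); !ok {
				cache.Add(&CacheItem{Key: key})
			}
		}
	})
}
```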
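A contention test in the same spirit as `TestHighContentionFromStore()`; the name and workload here are illustrative. Hammering a single key from many goroutines is what drives the insert-race and stale-item retry paths.

```go
package gubernator

import (
	"sync"
	"testing"
)

// TestHighContention points many goroutines at the same key so the
// race-handling code paths actually execute (run with -race).
func TestHighContention(t *testing.T) {
	cache, err := NewCacheFromProvider("otter", 1_000)
	if err != nil {
		t.Fatal(err)
	}
	var wg sync.WaitGroup
	for g := 0; g < 32; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 10_000; i++ {
				if _, ok := cache.GetItem("same-key"); !ok {
					cache.Add(&CacheItem{Key: "same-key"})
				}
			}
		}()
	}
	wg.Wait()
}
```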
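Finally, a sketch of reading `GUBER_CACHE_PROVIDER` with its `otter` default; how the real configuration plumbs this value through may differ.

```go
package gubernator

import "os"

// cacheProviderFromEnv reads GUBER_CACHE_PROVIDER, defaulting to
// "otter" when the variable is unset.
func cacheProviderFromEnv() string {
	if p := os.Getenv("GUBER_CACHE_PROVIDER"); p != "" {
		return p
	}
	return "otter"
}
```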