Closed — liangyuanpeng closed this issue 3 years ago
What do you need LRU specifically for? If you want to bound the cache then we offer that.
We offer an eviction policy but do not promise that it will remove in LRU order. Since our goal is to maximize the hit rate, the policy will evaluate your workload and tune itself. That might be LRU, LFU, or even MRU. It might change over the lifetime of the application, e.g. user requests during the day and batch jobs at night. We allow for high-level configuration, like the maximum size or expiration time, but don't make promises about how the underlying algorithms function.
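For illustration, a minimal sketch of that high-level configuration (the size and expiration values are placeholders, not recommendations):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

public class BoundedCacheExample {
  public static void main(String[] args) {
    // Bound the cache by size and age; which entry gets evicted is decided by
    // Caffeine's adaptive policy, not by a fixed LRU/LFU ordering.
    Cache<String, String> cache = Caffeine.newBuilder()
        .maximumSize(10_000)                     // high-level bound on the number of entries
        .expireAfterWrite(Duration.ofMinutes(5)) // optional time-based expiration
        .build();

    cache.put("key", "value");
    System.out.println(cache.getIfPresent("key"));
  }
}
```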
Thank you for such a quick reply.
It can be understood simply as a log system for viewing the latest logs, such as the last 20 entries. It would be great if Caffeine could provide an option to choose LRU; otherwise I may need to use Guava and Caffeine at the same time.
That might be FIFO rather than LRU, depending on the behavior you need. LRU will retain the most recently accessed entries, whereas FIFO retains entries by insertion order.
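If "the last 20 logs" only requires insertion order, a plain bounded deque may already be enough. A rough sketch (the class name, method, and limit of 20 are made up for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RecentLogs {
  private static final int LIMIT = 20; // "last 20 logs" from the description above
  private final Deque<String> logs = new ArrayDeque<>();

  // FIFO: drop the oldest entry by insertion order, regardless of reads.
  public synchronized void add(String line) {
    if (logs.size() == LIMIT) {
      logs.removeFirst();
    }
    logs.addLast(line);
  }
}
```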
When concurrency comes into play, the order of operations can become non-deterministic. That is often not desirable for business logic, which might depend on the ordering for correctness. For a cache, being close to LRU is okay because the external contract is how many entries are retained, and the order is not exposed. For example, Guava partitions itself into multiple LRU maps (4 by default) and does not offer iteration order. LinkedHashMap explicitly does offer access/insertion order in its contract and iterators, but is therefore not concurrent. You might have opposing requirements.
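For comparison, a minimal single-threaded LRU sketch built on LinkedHashMap's documented access order, wrapped in a synchronized view (the capacity of 20 is just an example):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruLogs {
  // accessOrder=true gives LRU iteration order; false would give insertion (FIFO) order.
  // LinkedHashMap itself is not thread-safe, so it is wrapped in a synchronized view here.
  private final Map<String, String> lru = Collections.synchronizedMap(
      new LinkedHashMap<String, String>(16, 0.75f, /* accessOrder */ true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
          return size() > 20; // evict the least recently used entry once past 20
        }
      });
}
```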
Unfortunately exposing LRU or similar is not the goal of this library, so that won't be offered. Our contract is to provide concurrent access to a bounded map, and we optimize our predictions to minimize the number of misses. The algorithms are not exposed in any API contracts and only described in design documents, to show why our implementation is good. But for your business logic, I am unsure if a cache will be the right metaphor because even if an implementation fits, the intent might be very different.
You might be interested in the older ConcurrentLinkedHashMap project, which is a concurrent LRU, but again I'm not certain a cache is what you want if you need to rely on LRU semantics.
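From memory, its builder looks roughly like the sketch below; check the project's README for the exact API before relying on it:

```java
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import java.util.concurrent.ConcurrentMap;

public class ClhmExample {
  public static void main(String[] args) {
    // A concurrent map bounded to 20 entries, evicted in roughly LRU order.
    ConcurrentMap<String, String> map =
        new ConcurrentLinkedHashMap.Builder<String, String>()
            .maximumWeightedCapacity(20)
            .build();

    map.put("a", "1");
    System.out.println(map.get("a"));
  }
}
```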
Hey Ben,
First off, thanks for the excellent cache. It's still proving wonderful in our use.
At some point I just want to use an LRU cache, and I can't achieve that with Caffeine alone, right?
Thanks for any help!