Closed delaneyj closed 9 years ago
We haven't implemented any cache recycling strategy yet. Would you please briefly describe your use case so we can evaluate possible solutions?
Well, using a local store like you would with a Redis cache, but without the network roundtrips or the reliance on keeping everything in memory. I'm considering using cached-request in conjunction with EventStore's HTTP interface. Because its immutable storage allows indefinite caching, I was thinking it might make sense to have limits on disk usage for my GraphQL-based Node.js frontends (where cached-request would be used).
Redis has a pretty good writeup on how they implement their LRU. My guess for cached-request is that it would be a modified version of the TTL support you already have in place: store the cached result and set a timestamp, and when the cache grows above the configured size limit, remove the cache keys with the oldest timestamps until it is back under the disk limit. I'm not sure whether just touching the cached files would work as a sliding window, or whether keeping the timestamps in a sorted set in memory makes more sense.
I'm just starting the process, so I haven't hit any limits yet, but having some constraints for production will be a necessity; I'm pretty sure I could saturate the disks of the controller instances.
I sort of understand what you need. However, implementing a good disk-based LRU that is persistent across sessions/processes isn't easy. We don't have plans to implement this in the near future.
One of our plans for the next major version is to make cached-request use stores rather than caching on disk by default. We will notify you when this new version comes out, so maybe you could implement your own LRU store and make cached-request work on top of it.
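The pluggable-store API mentioned above has not been published, so the `get`/`set` shape below is purely an assumption for illustration. It shows how a user-supplied LRU store could bound entry count, using the insertion order of a JavaScript `Map` to track recency.

```javascript
// Hypothetical LRU store a user could plug into a store-based
// cached-request. Caps the number of entries; the least recently
// used entry is evicted first. The interface is an assumption.
class LRUStore {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this key as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // First key in insertion order is the least recently used.
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

A disk-backed variant would persist the recency order (e.g. as timestamps alongside the entries) so it survives process restarts, which is the hard part the maintainer alludes to.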
Is there any way to set this up with an LRU to limit the disk usage?