[Closed] lb1mg closed this issue 1 year ago
That's the intended behavior for now. Currently, cache expiration just means the response is "stale"; it isn't automatically deleted, just overwritten when a fresh response is retrieved. Stale responses can be explicitly removed with `session.cache.delete_expired_responses()`.
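To illustrate that "stale, not deleted" model, here is a minimal self-contained sketch using a plain dict. The class and method names (`SimpleCache`, `delete_expired`) are hypothetical stand-ins, not the library's actual API:

```python
import time

class SimpleCache:
    """Toy cache where expiration marks entries stale but never deletes them."""

    def __init__(self):
        self._store = {}  # url -> (body, expires_at)

    def set(self, url, body, ttl):
        # Setting a URL again simply overwrites any stale entry
        self._store[url] = (body, time.monotonic() + ttl)

    def get(self, url):
        entry = self._store.get(url)
        if entry is None:
            return None
        body, expires_at = entry
        if time.monotonic() >= expires_at:
            # Expired: treated as a cache miss, but the entry stays stored
            return None
        return body

    def delete_expired(self):
        # Explicit cleanup, analogous to delete_expired_responses()
        now = time.monotonic()
        self._store = {u: e for u, e in self._store.items() if e[1] > now}

cache = SimpleCache()
cache.set("https://example.com", "payload", ttl=0.01)
time.sleep(0.02)
assert cache.get("https://example.com") is None       # stale: not returned...
assert "https://example.com" in cache._store          # ...but still stored
cache.delete_expired()
assert "https://example.com" not in cache._store      # removed only explicitly
```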
What you're probably looking for is backend-specific TTL integration (the `EXPIRE` command for Redis, a TTL index for MongoDB, etc.). requests-cache has this, for example, but it hasn't been implemented for this library.
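The key difference from staleness is that backend TTL actually removes the key. A rough in-memory sketch of those semantics (all names here are illustrative, not a real backend API):

```python
import heapq
import time

class TTLBackend:
    """Dict-like store that evicts keys once their TTL passes,
    mimicking backend-level expiration like Redis's EXPIRE."""

    def __init__(self):
        self._data = {}    # key -> (value, expires_at)
        self._expiry = []  # min-heap of (expires_at, key)

    def set(self, key, value, ttl):
        expires_at = time.monotonic() + ttl
        self._data[key] = (value, expires_at)
        heapq.heappush(self._expiry, (expires_at, key))

    def _purge(self):
        now = time.monotonic()
        while self._expiry and self._expiry[0][0] <= now:
            _, key = heapq.heappop(self._expiry)
            entry = self._data.get(key)
            # Only delete if the key wasn't re-set with a later TTL
            if entry is not None and entry[1] <= now:
                del self._data[key]

    def get(self, key):
        self._purge()
        entry = self._data.get(key)
        return entry[0] if entry else None

    def keys(self):
        self._purge()
        return list(self._data)
```

With a real Redis backend, the write side reduces to a single call such as `client.set(key, value, ex=ttl)` in redis-py, and the server handles eviction itself.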
Thanks for the clarification!
Flagging a cached URL as "stale" at expiration is great, but `session.cache.get_urls()` should either:
1) return only unexpired cached URLs,
2) or provide an option for it,
since getting all URLs (mixed with stale ones) isn't very helpful. Maybe something like `session.cache.get_urls(include_stale=False)`.
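The proposed flag amounts to filtering entries by their expiration timestamp. A small sketch of that behavior, where `CachedEntry` and the standalone `get_urls()` are hypothetical stand-ins for the library's internals:

```python
import time
from dataclasses import dataclass

@dataclass
class CachedEntry:
    url: str
    expires_at: float  # absolute monotonic timestamp

    @property
    def is_expired(self):
        return time.monotonic() >= self.expires_at

def get_urls(entries, include_stale=True):
    """Return cached URLs, optionally filtering out stale ones."""
    return [e.url for e in entries if include_stale or not e.is_expired]

now = time.monotonic()
entries = [
    CachedEntry("https://fresh.example", now + 60),
    CachedEntry("https://stale.example", now - 60),
]
assert get_urls(entries) == ["https://fresh.example", "https://stale.example"]
assert get_urls(entries, include_stale=False) == ["https://fresh.example"]
```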
Well, for TTL I guess it's okay, since it can be set manually (at least in the case of Redis).
Yeah, I agree. The way I handled this in requests-cache was adding a `filter()` method for filtering responses, and then using the same parameters to filter URLs. The usage is pretty much the same as in your example.
I might not have time to implement this right now, but I added #166 for this.
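The general pattern being described is one shared filter over responses, with URL listing reusing its parameters. A rough sketch under assumed parameter names (`valid`/`expired`); these are not necessarily requests-cache's exact signatures:

```python
import time
from dataclasses import dataclass

@dataclass
class Response:
    url: str
    expires_at: float

def filter_responses(responses, valid=True, expired=True):
    """Yield responses matching the requested freshness categories."""
    now = time.monotonic()
    for r in responses:
        is_expired = now >= r.expires_at
        if (expired and is_expired) or (valid and not is_expired):
            yield r

def get_urls(responses, **kwargs):
    # URL listing reuses the same filter parameters
    return [r.url for r in filter_responses(responses, **kwargs)]

now = time.monotonic()
rs = [Response("a", now + 10), Response("b", now - 10)]
assert get_urls(rs) == ["a", "b"]            # everything
assert get_urls(rs, expired=False) == ["a"]  # only fresh
assert get_urls(rs, valid=False) == ["b"]    # only stale
```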
Another thing worth noting is that the relationship between cached response expiration (aka "staleness") and backend expiration (aka "TTL") becomes a little fuzzier if conditional requests (#79) and `Cache-Control: stale-if-error` (#39) are implemented, since those introduce conditions where stale responses can be reused.
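The fuzziness comes from stale responses still having value: under `stale-if-error`, a stale entry can be served when revalidation fails, so a backend TTL that deletes it would lose the fallback. A minimal sketch of that decision logic, with all names illustrative:

```python
def fetch_with_stale_if_error(fetch, cached_body, is_stale, stale_if_error=True):
    """Return a response body, reusing a stale cached one if fetching fails."""
    if cached_body is not None and not is_stale:
        return cached_body  # fresh cache hit
    try:
        return fetch()  # revalidate / refetch from the origin
    except ConnectionError:
        if cached_body is not None and stale_if_error:
            return cached_body  # serve the stale response as a fallback
        raise

def failing_fetch():
    raise ConnectionError("origin unreachable")

# A stale entry is still useful when the origin is down:
assert fetch_with_stale_if_error(failing_fetch, "old body", is_stale=True) == "old body"
```

If the backend's TTL had already evicted "old body", this fallback would be impossible, which is why the two expiration mechanisms can't be treated as interchangeable.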
Expiring keys works in practice, but the keys still remain in Redis, and maybe as a result `session.cache.get_urls()` returns all URLs from way back, even those that have expired. Kindly correct me if I'm doing something wrong. Thanks!