Closed ubernostrum closed 1 year ago
So, we actually discussed this previously, but to be honest, I can't recall what the verdict was. @Goldziher?
In any case, there should be a way. I remember raising the point that, from a semantic perspective, `cache=True` should cache indefinitely, but that would then make the default expiration require some sort of sentinel value.
Personally I don't particularly care for `-1`, as it's not very explicit. We could, however, add a sentinel value like `NO_EXPIRATION`, which would achieve the same effect.
Alternatively, and as a workaround, you could just pass something like `timedelta(days=365).total_seconds()` to cache for a year (multiply as your heart desires).
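As a sketch, the workaround boils down to computing a very long expiration in seconds (the decorator usage is illustrative and assumes the `cache` argument accepts an integer number of seconds):

```python
from datetime import timedelta

# One year, expressed in seconds, to use as the route's cache duration.
ONE_YEAR_SECONDS = int(timedelta(days=365).total_seconds())

# Illustrative route usage (requires the starlite package; shown as a
# comment since this snippet is a standalone sketch):
#
#   @get("/report", cache=ONE_YEAR_SECONDS)
#   def report() -> dict: ...
```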
Sentinel sounds good
Related issue: #1301
Thanks for the quick turnaround. I assume this will ship in 2.0?
Summary
Currently, the low-level cache access permits setting a value which will be cached indefinitely, by passing `expiration=None` to the cache's `set()` method. However, the `cache` argument to route decorators does not accept `None` (and raises an exception if `cache=None` is passed), and I can find no other mechanism for specifying that the response from a route should be cached indefinitely.
Basic Example
It would be possible to write:
and have the response be cached with no explicit expiration.
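A minimal sketch of what this could look like, assuming a hypothetical `NO_EXPIRATION` sentinel as suggested in the discussion above (the name and the translation logic are assumptions, not Starlite's actual API):

```python
# Hypothetical sentinel; distinct from None (no caching at all) and
# from integer second counts (finite expiration).
NO_EXPIRATION = object()

def backend_expiration(cache):
    """Translate the decorator's `cache` value into the argument the
    low-level backend `set()` call would receive (sketch only)."""
    if cache is NO_EXPIRATION:
        return None  # backend treats expiration=None as "never expire"
    return cache     # pass finite durations (seconds) through unchanged

# Illustrative route usage (shown as a comment; requires starlite):
#
#   @get("/expensive-data", cache=NO_EXPIRATION)
#   def expensive_data() -> dict: ...
```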
Drawbacks and Impact
I'm not sure that there would be any negative impact beyond enabling a use case like this; no existing cache functionality would be removed or changed in a backwards incompatible way, as this is purely adding to the cache functionality.
For positive impact, it would be possible to have endpoints that effectively serve out of cache forever.
For a concrete example, I currently have an application which wants to do this due to having a data set that changes rarely and is expensive to query on the fly; I've instrumented its data-update logic to both insert the new data set into the database and also calculate and clear the affected set of cache key(s), after which I'd like the endpoints which serve the data to just query the DB once and then cache "forever" (or until the next time a data update occurs and clears some keys out of the cache).
At the moment I'm simulating this as best I can by doing the caching in my DB query layer. If I could reliably work out how to generate the correct Starlite `Response` objects, I could just pickle them and do a low-level cache `set()` with the appropriate key and with `expiration=None`, but ideally I would just let Starlite itself generate the responses and cache them with indefinite expiration.
Unresolved questions
The main question I can see is what the correct value for the `cache` argument would be. With the low-level cache `set()` method (at least on Redis, which is the cache backend I'm using), a key can be set not to expire by passing `expiration=None`, but passing `cache=None` in a route decorator feels like it should have the semantics of not caching at all, rather than of caching indefinitely. So probably some kind of special sentinel value would be needed which could be passed in -- perhaps something like a dedicated `NO_EXPIRATION` constant.
I believe internally Redis implements a non-expiring key as having an expiration "timestamp" of `-1`, so perhaps either that value, or a named constant holding that value, would work.