Would it be sufficient to list / filter cached entries, select the ones matching some URL / string and flush those?
There is a (yet) undocumented `bnomei.lapse.indexLimit` option. It defaults to `null`, but you can set it to an integer; the cache will then not store more entries than that number (FIFO). Also, using Lapse with caches like APCu enforces a total entry count anyway by imposing a total memory limit.
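For example, in `site/config/config.php` (the 500 is just an arbitrary number):

```php
<?php
// site/config/config.php
return [
    // keep at most 500 entries in the Lapse index (FIFO); default is null
    'bnomei.lapse.indexLimit' => 500,
];
```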
Deleting via groups would mean storing that information somewhere, and I would like to avoid that overhead and keep things as simple and fast as possible. I do like the idea though. Thanks for your feedback.
There is no way of preventing the cache from being flooded if it is set up wrong, right? This is a dumb example, but it should get the point across:
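Something like this (just a sketch using the plugin's `lapse()` helper and Kirby's `Remote` class; the feed URL is made up):

```php
// A key that contains the current timestamp never matches an existing
// entry, so every single request writes a brand-new cache entry.
$data = lapse('feed-' . time(), function () {
    return \Kirby\Http\Remote::get('https://example.com/feed.xml')->content();
});
```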
This would fill the cache with a new entry on each request. And there is not really a way to flush the cache for that specific "domain" once the `$key` is no longer known.
Here is another example that illustrates it better:
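Something along these lines (a rough sketch using the `lapse()` helper and Kirby's `Remote` class; `$feedurl` and the header handling are simplified):

```php
$feedurl = 'https://example.com/feed.xml';

// cheap HEAD request to find out when the feed last changed
$head = \Kirby\Http\Remote::get($feedurl, ['method' => 'HEAD']);
$modifiedDate = $head->headers()['Last-Modified'] ?? '';

// one cache entry per feed *and* per modification date: every time the
// feed changes, a new key is created and the old entries stay around
$items = lapse(md5($feedurl . $modifiedDate), function () use ($feedurl) {
    return \Kirby\Http\Remote::get($feedurl)->content(); // expensive download
});
```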
Now if I want to clear the cache, I have to delete and rebuild the complete cache.
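At the moment the only blunt tool I see is something like this (assuming the plugin registers its cache as `bnomei.lapse`):

```php
// drops every single Lapse entry, not just the ones for one feed
kirby()->cache('bnomei.lapse')->flush();
```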
I've seen caching mechanisms that implement cache groups in order to have granular control over what to flush. In my example, I could have a cache group keyed by `$feedurl` and single entries for `content-length` or `modifiedDate` or whatever, and then be able to say: flush all cache entries of group `$feedurl`.
Here is some pseudocode:
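Purely hypothetical, of course; neither the extra group argument nor `Lapse::flushGroup()` exist in the plugin today:

```php
// hypothetical: tag the entry with a group when writing it ...
$items = lapse(md5($feedurl . $modifiedDate), function () use ($feedurl) {
    return \Kirby\Http\Remote::get($feedurl)->content();
}, null, $feedurl); // expires = null, group = $feedurl

// ... and later flush the whole group without knowing the single keys
\Bnomei\Lapse::flushGroup($feedurl);
```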
Would it make sense to implement it that way?