linus-amg opened this issue 9 years ago
Hmm, the idea is good, but one precaution has to be noted: the Redis TTL feature applies to a key. That means if the key belongs to a zset, the whole zset will be removed. As far as I know, Redis doesn't support a TTL on individual values.
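To illustrate with the node_redis client (the key and member names here are made up): a TTL attaches to the key as a whole, so the entire sorted set disappears when it fires.

```js
var redis = require('redis');
var client = redis.createClient();

// Put two members into a sorted set and set a TTL on the *key*.
client.zadd('entries:index', 1, 'entry:1');
client.zadd('entries:index', 2, 'entry:2');
client.expire('entries:index', 259200); // 72 hours

// When the TTL fires, the whole 'entries:index' zset is gone,
// including 'entry:2'; there is no way to expire only 'entry:1'.
```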
You are right. The idea was per item though, not per value, so would that work?
I want to save a new, let's say, log entry, but I want it to disappear after 72 hours, so I would do something like this:
```coffee
model = nohm.model 'entries',
  properties:
    key:
      type: 'string'
      index: true
      ttl: 259200
```

or

```js
user = new model({ ttl: 259200 })
```

or

```js
user = new model()
user.ttl(259200).save()
```

or

```js
user = new model()
user.save({ ttl: 259200 })
```
AFAICT there is no way to do this without either breaking a lot of other functionality or doing some semi-complex "manual" cleanup and still risking breakage in some cases.
The problem is that there is no way to get notified of expirations or subscribe to them. So when an item expires, its indices and relations will still be there.
As a workaround you can of course set a property to a timestamp, index that property, and then manually check every x ms/s/m/h (or whatever) and remove everything whose timestamp is below now. But depending on your requirements that might be an awful solution.
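A rough sketch of that polling workaround (the model name, the `expiresAt` property, and the sweep interval are all made up for illustration, and the exact find/load signatures may differ between nohm versions):

```js
var nohm = require('nohm').Nohm;

// Hypothetical model with an indexed expiry timestamp.
var Entry = nohm.model('Entry', {
  properties: {
    message:   { type: 'string' },
    expiresAt: { type: 'integer', index: true, defaultValue: 0 }
  }
});

// Sweep: find everything whose expiry timestamp is in the past and remove it.
function cleanupExpired() {
  Entry.find({ expiresAt: { min: 0, max: Date.now() } }, function (err, ids) {
    if (err) return;
    ids.forEach(function (id) {
      var item = new Entry();
      item.load(id, function (loadErr) {
        if (!loadErr) item.remove();
      });
    });
  });
}

setInterval(cleanupExpired, 60 * 1000); // check once a minute
```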
It may be worth quoting this new feature: since Redis 2.8 there is a notification system with which you can subscribe to any key's events: http://redis.io/topics/notifications
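For reference, a minimal node_redis sketch of subscribing to expiration events; it assumes notify-keyspace-events includes at least `Ex` (set here via CONFIG SET, but it can also go into redis.conf) and that the application uses database 0:

```js
var redis = require('redis');
var subscriber = redis.createClient();

// Keyspace notifications are off by default; 'Ex' enables keyevent
// notifications for expired keys.
subscriber.config('SET', 'notify-keyspace-events', 'Ex');

// Every expired key in db 0 is published on this channel.
subscriber.psubscribe('__keyevent@0__:expired');
subscriber.on('pmessage', function (pattern, channel, expiredKey) {
  // here one could clean up the indices/relations of the expired item
  console.log('expired:', expiredKey);
});
```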
I actually didn't know about that, thanks.
Doesn't really solve the problem though: If there is no node process running to listen to that event*, there is no one to act on it. And since these events are fire and forget, there is no good way to handle this.
* say, it just crashed and is rebooting now - or it's not even designed to be a running process - just something that is run sometimes and exits after a couple of seconds.
Yeah, I was thinking about this problem too. Here is a draft of a possible implementation:
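(The draft itself is not reproduced in this thread. Purely as an illustration of the direction being discussed, an expiry-event listener combined with a repair pass after downtime, a hedged sketch might look like this, with all helper names made up:)

```js
var redis = require('redis');

// 1) While a process is running: react to expiry events immediately.
var subscriber = redis.createClient();
subscriber.config('SET', 'notify-keyspace-events', 'Ex');
subscriber.psubscribe('__keyevent@0__:expired');
subscriber.on('pmessage', function (pattern, channel, expiredKey) {
  cleanUpLeftovers(expiredKey);
});

// 2) On startup: repair whatever expired while no listener was running,
//    e.g. by scanning an indexed expiry timestamp as in the polling sketch.
repairAfterDowntime();

function cleanUpLeftovers(expiredKey) {
  // remove the stale index and relation entries belonging to expiredKey
}

function repairAfterDowntime() {
  // find items whose expiry timestamp is already in the past and remove them
}
```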
That still leaves at least one more problem: if multiple clients with nohm try to "repair-by-delete" after downtime, they might run into differences of "opinion" about which objects still exist and which do not. Edge cases, sure... but there are probably more.
The ideal solution in my opinion would be to wait for the apparently planned feature[1] of having Lua scripts listen to TTL events, and then handle all of that in a Lua script on Redis itself.
[1] last sentence of the first paragraph of http://redis.io/topics/notifications
+1
Would be cool to pass a ttl value in the schema of the model, and maybe have some refresh method somewhere; don't know how yet.