marcoCasamento / Hangfire.Redis.StackExchange

Hangfire Redis storage based on the original (and now unsupported) Hangfire.Redis, but using the lovely StackExchange.Redis client

Successful jobs are not cleaned from storage #75

Closed xumix closed 6 years ago

xumix commented 6 years ago

I have 900,000 Succeeded jobs. The list is limited to 10,000, but the actual storage is not cleaned. This prevents me from using GUI tools for Redis, since the number of keys and the amount of data are so large.

marcoCasamento commented 6 years ago

The list limit doesn't govern the number of jobs that you keep in storage; it simply sets how many jobs are visible from the dashboard. Jobs are cleaned up by expiring them, and the default retention time is 1 day. You can refer to https://discuss.hangfire.io/t/how-to-configure-the-retention-time-of-job/34 in order to change it.
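For reference, the approach described in that thread is a state filter that overrides the expiration timeout. A minimal sketch (the attribute name and the 7-day value are just placeholders; adjust to your own retention needs):

```csharp
using System;
using Hangfire;
using Hangfire.Common;
using Hangfire.States;
using Hangfire.Storage;

// Sketch of the filter described in the linked thread: overrides the
// default 1-day retention for jobs that reach a final state.
public class ProlongExpirationTimeAttribute : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Keep finished jobs (and their Redis keys) for 7 days instead of 1.
        context.JobExpirationTimeout = TimeSpan.FromDays(7);
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        context.JobExpirationTimeout = TimeSpan.FromDays(7);
    }
}

// Register it globally at startup:
// GlobalJobFilters.Filters.Add(new ProlongExpirationTimeAttribute());
```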

I actually keep jobs for 7 days, and that means close to 5 million keys in Redis. Yep, this makes tools like RedisDesktop almost useless; you may want to take a look at https://github.com/RedisLabsModules/RediSearch

xumix commented 6 years ago

Well, the hangfire:*:succeeded list is being cleaned, but not the job keys themselves.

marcoCasamento commented 6 years ago

Exactly, this is the expected behaviour. Use JobExpirationTimeout to clean up jobs.

xumix commented 6 years ago

@marcoCasamento thanks for the fast answer! But I have the default expiration time set, which means 1 day.

marcoCasamento commented 6 years ago

It's a TimeSpan; set it to whatever makes sense for you. I recommend not going lower than 1 hour, since that would clash with the invisibility timeout. Anything more than 1 hour is fine.

But... do you really want to clean up jobs this fast?

marcoCasamento commented 6 years ago

One more thing: the JobExpirationTimeout is set on job creation, so the jobs that you already have in storage wouldn't be affected. You can use the EXPIRE or DEL command along with SCAN in a Lua script to expire your hangfire:job:* keys (or whatever your namespace is).
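As an alternative to a Lua script, a one-off cleanup sketch using the StackExchange.Redis client this storage already depends on might look like this (the endpoint, prefix, and TTL are assumptions; `server.Keys` iterates with SCAN under the hood, so it won't block the server the way KEYS would):

```csharp
using System;
using StackExchange.Redis;

// One-off cleanup sketch: put a TTL on pre-existing hangfire:job:* keys
// that were created before the expiration timeout was configured.
var muxer = ConnectionMultiplexer.Connect("localhost:6379"); // assumed endpoint
var server = muxer.GetServer("localhost", 6379);
var db = muxer.GetDatabase();

// GetServer(...).Keys(...) uses SCAN under the hood.
foreach (var key in server.Keys(pattern: "hangfire:job:*", pageSize: 1000))
{
    // Only touch keys that have no TTL yet; leave already-expiring keys alone.
    if (db.KeyTimeToLive(key) == null)
    {
        db.KeyExpire(key, TimeSpan.FromDays(1)); // match your retention window
    }
}
```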

xumix commented 6 years ago

@marcoCasamento It is OK for me to clean successful jobs after 1 day, but it looks like they are not expired at all. The number of jobs (and job keys in Redis) constantly increases over time, which indicates no retention is happening. I can see that you get the succeeded list, parse it, and then perform retention, but if the list is limited to 10,000 (via config), that means that if I have 100,000 jobs executed daily, 90,000 of them will never be cleaned up.

[screenshots: Redis (4.0.6) key counts, 2018-03-22]

marcoCasamento commented 6 years ago

No, I see a couple of misconceptions here: the succeeded list is only a dashboard view, and cleanup is not done by parsing it. Every job key gets a Redis TTL when it reaches a final state, so Redis itself removes the keys regardless of the list size limit.

xumix commented 6 years ago

@marcoCasamento Thanks, I see now! You actually manage expiration via Redis TTLs; that is great 👍
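To see this in action, you can check the TTL on any individual job key with StackExchange.Redis (the key name below is hypothetical; adjust the prefix to your configuration):

```csharp
using System;
using StackExchange.Redis;

// Quick check: a Succeeded job's hash should report a finite TTL.
var muxer = ConnectionMultiplexer.Connect("localhost:6379"); // assumed endpoint
var db = muxer.GetDatabase();

// "hangfire:job:12345" is a hypothetical key; use one from your own instance.
TimeSpan? ttl = db.KeyTimeToLive("hangfire:job:12345");
Console.WriteLine(ttl.HasValue
    ? $"Expires in {ttl.Value}"        // retention is working
    : "No TTL set (persistent key)");  // job not yet in a final state
```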

chrismcv commented 4 years ago

@marcoCasamento - on a similar note - is it intentional that failed/retrying jobs don't have a TTL? Is there a way these get cleaned up, or is it a bug?

marcoCasamento commented 4 years ago

@chrismcv yes, it's intentional. In the Hangfire author's view, only jobs that reach a "final state" can be cleaned up, and by default only "Succeeded" and "Deleted" are considered final, so no TTL is put on jobs in other states. I believe you can customize the states and perhaps treat "Failed" jobs as final; as far as I know, you can customize the state machine and even add further states to it. That, however, is well beyond the scope of this repo, so I suggest you file a question on the Hangfire repo.
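For completeness, one commonly cited workaround (not confirmed by this repo's author) is an IElectStateFilter that promotes failed jobs to the final Deleted state, so they pick up a TTL like any other finished job. A sketch, assuming retries have already been exhausted by the time Failed is elected:

```csharp
using Hangfire.Common;
using Hangfire.States;

// Sketch: treat Failed as final by replacing it with Deleted, which does
// get a TTL. Filter ordering matters: registered after the default
// AutomaticRetry filter, this only fires once retry attempts are exhausted.
public class DeleteFailedJobsAttribute : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        if (context.CandidateState is FailedState)
        {
            context.CandidateState = new DeletedState();
        }
    }
}

// Register it globally at startup:
// GlobalJobFilters.Filters.Add(new DeleteFailedJobsAttribute());
```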

FixRM commented 1 year ago

@xumix, hi! Can you please share the solution? I'm pretty new to Redis, so I would be grateful for direct guidance or documentation links.

xumix commented 1 year ago

@FixRM There is no solution, because the problem does not exist: job keys expire via the Redis TTL mechanism. Use a proper JobExpirationTimeout and it will work as expected.