We have some old jobs that have been bouncing around our staging environment for a while, and we've retried them repeatedly in large batches of failed jobs. As a result, they've accumulated over 17K events in their history. This causes memory bloat -- the history for one such job is using up ~2 MB of Redis memory.
While you can argue that we shouldn't keep retrying the job (and you'd be right), it would also be nice if Qless protected against the history becoming too bloated. What do you think about this?
- Have a `max-retained-history-events` config setting that defaults to something reasonable (say, 50 or 100).
- When adding to the history, truncate as needed to stay under that limit. Ideally it would always keep the first event plus the last (max - 1) events.
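To make the proposed truncation rule concrete, here is a minimal sketch in Ruby (the method name and signature are hypothetical, not part of Qless's actual API):

```ruby
# Hypothetical sketch of the proposed truncation rule.
# Keeps the first event plus the most recent (max_events - 1) events,
# so the job's origin is preserved while the middle is dropped.
def truncate_history(events, max_events)
  return events if events.length <= max_events
  [events.first] + events.last(max_events - 1)
end
```

With a 17K-event history and a limit of 50, this would retain the original creation event and the 49 most recent events, dropping everything in between.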