suonlight closed this 4 years ago
The one problem I see with this approach is: if a LockError occurs on the N-th job of jobs.outgoing inside the loop, the retry will re-enqueue even the jobs that were correctly enqueued on the previous run, and we might get duplicate executions of downstream jobs.
Another approach would be to move the body of the loop's block into a separate job (ScheduleOutgoingJob) and move the RedisMutex block into it. That way, when it fails, ActiveJob/Sidekiq can retry that job on its own a number of times.
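To make the idea concrete, here is a minimal sketch of how such a per-job worker could look. Everything here is hypothetical: `ScheduleOutgoingJob`, `FakeMutex`, and the plain array `queue` stand in for the real worker class, RedisMutex, and the Sidekiq queue; this is an illustration of the retry isolation, not gush's actual implementation.

```ruby
# FakeMutex stands in for RedisMutex#with_lock in this sketch.
class FakeMutex
  def with_lock(_key)
    yield
  end
end

# Hypothetical per-job worker: each instance handles exactly one
# outgoing job, so if the lock fails here, Sidekiq's retry would
# re-run only this job, not the whole jobs.outgoing loop.
class ScheduleOutgoingJob
  def initialize(mutex, queue)
    @mutex = mutex
    @queue = queue
  end

  def perform(job_name)
    @mutex.with_lock(job_name) do
      @queue << job_name # stand-in for enqueueing one downstream job
    end
  end
end
```

The key point is the one-job-per-worker granularity: a LockError on job N leaves jobs 1..N-1 untouched, because their enqueues already happened in their own, already-completed workers.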
If the outgoing jobs were correctly enqueued, then the retried job just wastes time looping over the outgoing jobs without doing anything, because we have the ready_to_start? check.
The ScheduleOutgoingJob approach looks promising; I will try it in another PR.
Good point, so your solution might be enough as it is :)
Let's see if this works for people, if not - we can revisit with a separate ScheduleOutgoingJob
I'll try to release it today, if time allows, thanks for your first and very important contribution :fireworks:
Summary
This commit tries to fix the RedisMutex::LockError issue reported at https://github.com/chaps-io/gush/issues/57. Whenever the error occurs, it simply enqueues another worker. At the beginning of that worker, if the job has already succeeded, we simply call enqueue_outgoing_jobs again.
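The retry-and-skip behavior described above can be sketched as follows. This is a self-contained simulation, not the PR's code: `Worker`, `LockError`, and the recursive `perform` call stand in for the real gush worker, RedisMutex's error, and re-enqueueing via Sidekiq, and the `@enqueued.include?` guard plays the role of the ready_to_start? check that makes retries safe against duplicates.

```ruby
# Stand-in for RedisMutex::LockError in this sketch.
class LockError < StandardError; end

class Worker
  attr_reader :enqueued

  def initialize(lock)
    @lock = lock     # object responding to #acquire, may raise LockError
    @enqueued = []
  end

  # On LockError, run again instead of failing the whole batch.
  # (The real fix would enqueue another Sidekiq worker here, not recurse.)
  def perform(jobs)
    enqueue_outgoing_jobs(jobs)
  rescue LockError
    perform(jobs)
  end

  def enqueue_outgoing_jobs(jobs)
    jobs.each do |job|
      # Guard in the spirit of ready_to_start?: skip jobs that were
      # already enqueued on a previous attempt, avoiding duplicates.
      next if @enqueued.include?(job)

      @lock.acquire
      @enqueued << job
    end
  end
end
```

On a lock failure mid-loop, the rerun skips the already-enqueued jobs and only processes the remainder, which is why each downstream job ends up enqueued exactly once.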