Closed — jarthod closed this issue 9 years ago
There's nothing documented. But I can imagine a scenario where the job itself is wrapped in a handler that detects failures and either delays for a specified period of time or re-queues the same job. The challenge with re-enqueueing is preventing an infinite loop of retrying jobs.
Wrapping around the failure gives you control over the number of retries, in addition to the delay between each one.
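A minimal sketch of that wrapper in plain Ruby — the class name, `MAX_RETRIES`, `RETRY_DELAY`, and the `do_work` hook are illustrative, not part of sucker_punch's API:

```ruby
# Retry wrapper around a job's perform body: a transient failure is
# retried up to MAX_RETRIES times with a fixed delay, then re-raised.
class RetryableJob
  MAX_RETRIES = 3
  RETRY_DELAY = 0.1 # seconds between attempts

  def perform(*args)
    attempts = 0
    begin
      attempts += 1
      do_work(*args) # subclasses implement the actual work here
    rescue StandardError
      raise if attempts >= MAX_RETRIES # give up: avoids an infinite retry loop
      sleep(RETRY_DELAY)
      retry
    end
  end
end
```

Because the loop lives inside `perform`, the retries happen on the same worker thread, which keeps the retry count local to the job instance.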
Hope this helps!
Ok that's what I thought, thanks !
Hi,
I can't find any native way to retry jobs on error, and I couldn't find any article or GitHub issue about it either. Is there a best practice for this (retrying a failed job, with an optional delay)? Should I reschedule another job and keep a retry count as a parameter, or is it better to loop inside the worker?
Just to be clear: I fully understand the unreliability of sucker_punch in this case. I just have non-critical jobs that may need to be retried for a short period of time (network calls).
Thanks for your help!
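For reference, the re-enqueue-with-a-retry-count idea from the question can be sketched like this in plain Ruby. `enqueue_in` is a hypothetical stand-in that runs the job immediately; on sucker_punch 2.x you would call `perform_in(delay, ...)` there instead. All other names are illustrative:

```ruby
# Re-enqueue approach: the retry count travels as a job argument, and a
# failed attempt schedules a fresh job with the count incremented.
class NetworkCallJob
  MAX_RETRIES = 3
  RETRY_DELAY = 5 # seconds before the next attempt

  # Hypothetical stand-in scheduler: runs immediately for demonstration.
  # With sucker_punch 2.x: NetworkCallJob.perform_in(delay, url, retries)
  def self.enqueue_in(_delay, *args)
    new.perform(*args)
  end

  def perform(url, retries = 0)
    fetch(url)
  rescue StandardError
    # Cap the retries so a permanently failing call can't loop forever.
    raise if retries >= MAX_RETRIES
    self.class.enqueue_in(RETRY_DELAY, url, retries + 1)
  end

  private

  def fetch(url)
    # real network call goes here
  end
end
```

The upside over looping inside the worker is that each attempt is a separate job, so a long delay doesn't tie up a worker thread; the downside is that the retry state only lives in the job arguments.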