Greetings @codingjoe ,
while using the package I noticed that failing processing tasks (in my case the file had been deleted, so there was nothing to process) are retried several times. This is because Dramatiq (the queue I am using) defaults to retrying a failing task up to 20 times: https://dramatiq.io/guide.html#message-retries (with a maximum backoff of 7 days, which can keep the task running for quite a long time).
Celery takes a different approach and does not auto-retry unless you explicitly enable it: https://docs.celeryq.dev/en/stable/userguide/tasks.html#automatic-retry-for-known-exceptions
My main concern is not the inconsistency in behaviour between the two backends (which might be difficult to keep aligned across different tools), but rather the suboptimal default when using Dramatiq here.
I would expect Dramatiq not to retry such errors at all (or at most 3 times, for example).
Do you see this as a valid concern? How would you suggest solving it?
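For what it's worth, here is a rough sketch of how I imagine the retry behaviour could be capped using Dramatiq's actor options (`max_retries` and `throws` come from the Dramatiq docs; `process_file` is just a hypothetical actor name for illustration):

```python
import dramatiq

# Option 1: cap the number of retries for this actor.
@dramatiq.actor(max_retries=3)
def process_file(path):
    with open(path) as f:  # raises FileNotFoundError if the file was deleted
        ...

# Option 2: declare the exception as "expected" via `throws`,
# so Dramatiq logs it but does not retry the message at all.
@dramatiq.actor(throws=FileNotFoundError)
def process_file_no_retry(path):
    with open(path) as f:
        ...
```

Something like the `throws` variant feels closest to what I'd expect for "file is gone, nothing to do" errors, since retrying cannot fix them.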
Cheers, Rust