alexcastano closed this issue 6 years ago
Oh hey @alexcastano! It seems like a reasonable feature request! My questions are: at what level should this be configurable (application, worker, queue)? And how would it be represented in the stored job JSON, etc.?

Hi @edgurgel,
To the first question, I would say application and worker level. In my case, application-level because I want all jobs to retry faster in general, and worker-level for special cases; I don't use queues at all. However, you are the owner and you know better, I'm just giving you my point of view :)
I don't know exactly what format is used in Redis, but if there is a metadata field, we can just include:

`backoff_function: [module, function]`

Arguments are not needed because they are always `failed_at` and `retry_count`.
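To make the stored-metadata idea concrete, here is a minimal sketch of how a `backoff_function` field could be resolved at retry time. The `"backoff_function"` key, its `[module, function]` shape, and the module names are the proposal above, not an existing Verk field or API:

```elixir
defmodule Backoff do
  @default_delay 15

  # Job metadata carries [module, function] as strings, per the proposal.
  def retry_at(%{"backoff_function" => [mod, fun]}, failed_at, retry_count) do
    module = String.to_existing_atom("Elixir." <> mod)
    apply(module, String.to_existing_atom(fun), [failed_at, retry_count])
  end

  # No backoff_function stored: fall back to a fixed default delay.
  def retry_at(_job, failed_at, _retry_count) do
    failed_at + @default_delay
  end
end
```

A worker-specific backoff would then just be a named function, e.g. `["MyBackoff", "linear"]` stored in the job.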
Glad to help!
I was wondering if we should just make it so that the worker module defines a callback function `retry_at(failed_at, retry_count)`, and we check whether it's exported when we need to retry:

`:erlang.function_exported(MyWorker, :retry_at, 2)`

falling back to an application-wide configuration, and otherwise to the standard backoff. This way we avoid adding extra information to the stored job.
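The lookup chain being proposed could be sketched like this (names such as `RetryAt.calculate` and the `:retry_at` config key are hypothetical, and the final clause is only a simplified version of the usual Sidekiq-style `retry_count^4` backoff):

```elixir
defmodule RetryAt do
  # Proposed resolution order:
  # 1. worker-level retry_at/2 callback, if the module exports one
  # 2. application-wide {module, function} configured under :verk
  # 3. a standard exponential backoff otherwise
  def calculate(worker_module, failed_at, retry_count) do
    cond do
      function_exported?(worker_module, :retry_at, 2) ->
        worker_module.retry_at(failed_at, retry_count)

      backoff = Application.get_env(:verk, :retry_at) ->
        {mod, fun} = backoff
        apply(mod, fun, [failed_at, retry_count])

      true ->
        # Simplified stand-in for the default exponential backoff.
        failed_at + :math.pow(retry_count, 4) + 15
    end
  end
end
```

Because the decision is made purely from what the worker module exports plus application config, nothing extra needs to be serialized into the job stored in Redis.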
@edgurgel sounds perfect, it makes a lot of sense. I'll send you a pull request when I finish it. Do you have a way to communicate easily if I have any doubts? e.g. Slack or Gitter?
Oh hey! Because of timezones (UTC+12) our communication may be async, but I will be checking Gitter: https://gitter.im/edgurgel/verk
I was reading the code and concluded that `Verk.RetrySet.calculate_retry_at/2` is the function that calculates when a failed job will be retried. Is there a way to customize this function? It does not make sense to retry some jobs after only a few seconds.

If there isn't, maybe it would be useful to implement this feature in a similar way to `max_retry_count`. If it is not very difficult and you think it is a good feature, maybe I can try to implement it.
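For illustration, `max_retry_count` is set through application config today, so a customizable backoff could follow the same pattern. This is only a sketch; the `:retry_at` key is hypothetical and not an existing Verk option:

```elixir
# config/config.exs
config :verk,
  max_retry_count: 25,                    # existing Verk option
  retry_at: {MyApp.Backoff, :retry_at}    # hypothetical new option
```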
Thank you for your time