Closed · juledwar closed this issue 3 years ago
Open question: should AbortException cause this to be called or not?
I think you can get 99% of the way there with the job_failed signal. It is triggered for every failed task, but you can easily inspect the job handed to the callback and dispatch to the proper function.
```python
from spinach import signals

@signals.job_failed.connect
def job_failed_handler(namespace, job, err, **kwargs):
    callback_fn = {
        'task_foo': print,
        'task_bar': print,
    }
    try:
        callback_fn[job.task_name](job, err)
    except KeyError:
        pass
```
Oh, I had completely forgotten about the signals. I wonder if we can add some extra logic to use a job → callback_fn map.
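As a rough illustration of that map idea, the lookup could be factored into a tiny registry. All names here (`register_fail_callback`, `dispatch_failure`) are hypothetical and not part of Spinach's API:

```python
# Hypothetical helper: a registry mapping task names to failure callbacks.
# None of these names come from Spinach itself; this is just an illustration.
_fail_callbacks = {}

def register_fail_callback(task_name, fn):
    """Associate a callback with a task name."""
    _fail_callbacks[task_name] = fn

def dispatch_failure(task_name, job, err):
    """Look up and invoke the callback for a failed job, if one is registered."""
    fn = _fail_callbacks.get(task_name)
    if fn is not None:
        fn(job, err)

# Usage: register a callback, then simulate a failure dispatch.
seen = []
register_fail_callback('task_foo', lambda job, err: seen.append((job, str(err))))
dispatch_failure('task_foo', 'job-1', RuntimeError('boom'))
print(seen)  # [('job-1', 'boom')]
```

A `job_failed` handler would then only need to call `dispatch_failure(job.task_name, job, err)` instead of owning the dict itself.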
I'll close this since signals work. I would rather have a callback on each task to save the extra lookup in user code, but this is fine for now.
The idempotent Task feature (max_retries > 0) is great; however, when the last attempt fails, it can leave the underlying application state in limbo. Code that sets a status to indicate a job is running can update it when the job completes, but not when it fails.
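To make the limbo concrete, here is a minimal toy retry loop (not Spinach's actual implementation) where the status is only ever updated on success:

```python
# Illustrative only: a toy retry loop showing how application state can be
# left in limbo when every attempt fails. This is not Spinach's code.
status = {}

def run_with_retries(job_id, fn, max_retries):
    status[job_id] = 'running'
    for attempt in range(max_retries + 1):
        try:
            fn()
            status[job_id] = 'done'   # only updated on success...
            return
        except Exception:
            continue
    # ...so after the final failed attempt, status stays 'running' forever.

def always_fails():
    raise RuntimeError('boom')

run_with_retries('job-1', always_fails, max_retries=2)
print(status['job-1'])  # 'running' — stuck in limbo
```

A failure callback would give the application a hook at the end of that loop to set the status to something like 'failed'.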
I suggest an extra parameter to spinach.task.Task() called fail_callback (you may suggest a better name!) which is called once the final retry attempt has failed. We may also want to pass optional extra args/kwargs, or we could just pass the original Task args/kwargs (which would be sufficient for my use case). We should probably also pass the last exception that was caught.
Putting this together, the callback function prototype would need to look like:
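Something along these lines, assuming the original task args/kwargs and the final exception are passed through (the parameter names are suggestions, not an agreed API):

```python
# Hedged sketch of the proposed callback prototype; names are suggestions only.
def fail_callback(task, err, *task_args, **task_kwargs):
    """Called once, after the final retry of `task` has failed.

    task:        the spinach.task.Task instance that exhausted its retries
    err:         the last exception that was caught
    task_args /
    task_kwargs: the original arguments the task was scheduled with
    """
    ...
```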
Any exceptions raised in the callback function should be logged and ignored.