During my usage of `saq`, I found that the `after_process` handler is not behaving as I initially expected. For example, here's the code:
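The original snippet is not shown here, so below is a hedged sketch of the kind of setup being described — start a worker, abort a running job, and print its status from `after_process`. It assumes saq's `Queue.from_url`, `Worker(..., after_process=...)`, and `Queue.abort` APIs, requires a running Redis server, and all names, URLs, and durations are illustrative:

```python
import asyncio

from saq import Queue, Worker  # assumed import paths


async def sleeper(ctx):
    # A long-running task so there is time to abort it mid-flight.
    await asyncio.sleep(60)


async def after_process(ctx):
    # Print the job status as observed inside the hook.
    print("status", ctx["job"].status)


async def main():
    queue = Queue.from_url("redis://localhost")  # illustrative URL
    worker = Worker(queue, functions=[sleeper], after_process=after_process)
    worker_task = asyncio.create_task(worker.start())

    job = await queue.enqueue("sleeper")
    await asyncio.sleep(1)  # give the worker time to pick the job up
    await queue.abort(job, "aborted for demonstration")

    await asyncio.sleep(1)  # give after_process time to fire
    worker_task.cancel()


asyncio.run(main())
```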
I expected it to print:

```
status Status.ABORTED
```

But the actual output is:

```
status Status.ACTIVE
```
Okay, if the job is still active, we can try to wait for the job to complete:
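The call in question would be a blocking refresh; a minimal sketch, assuming saq's `Job.refresh` accepts an `until_complete` interval and blocks until the job reaches a terminal status (the parameter semantics are an assumption, and `job` is a saq `Job` from the surrounding example):

```python
# Hypothetical continuation of the example above: wait for the job
# to finish, re-checking every second (interval value illustrative).
await job.refresh(until_complete=1.0)
```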
And... our program will actually get stuck on this call! This is because the coroutine in the `Worker.abort` call has already published the event to Redis, and due to concurrent work, this happened faster than the coroutine inside `Job.refresh` started listening for the events.

This issue affects the usefulness of the `after_process` hook, as it prevents us from reliably knowing when a job has terminated (which is the most important information to have when making decisions in `after_process`).

Note that this already works for the `Status.COMPLETE` and `Status.FAILED` statuses, as those statuses are updated sequentially before `after_process` is called. However, it does not work for the `Status.ABORTED` status.

Would it make sense to guarantee that all `TERMINAL_STATUSES` are set, if they have occurred, before calling the `after_process` handler?
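The race described above can be modeled without saq or Redis at all: Redis pub/sub is fire-and-forget, so a message reaches only subscribers that already exist at publish time. A self-contained asyncio sketch (all names are illustrative, not saq internals) showing that a waiter which subscribes after the publish never sees the event:

```python
import asyncio


class PubSub:
    """Toy fire-and-forget bus: delivers only to current subscribers."""

    def __init__(self):
        self.subscribers = []

    def publish(self, message):
        # No buffering: anyone who subscribes later misses this message.
        for queue in self.subscribers:
            queue.put_nowait(message)

    def subscribe(self):
        queue = asyncio.Queue()
        self.subscribers.append(queue)
        return queue


async def main():
    bus = PubSub()

    # Models Worker.abort: publishes the terminal event immediately.
    bus.publish("aborted")

    # Models Job.refresh: subscribes afterwards and waits for the event.
    queue = bus.subscribe()
    try:
        message = await asyncio.wait_for(queue.get(), timeout=0.1)
        return f"received: {message}"
    except asyncio.TimeoutError:
        return "stuck: event was published before we subscribed"


result = asyncio.run(main())
print(result)
```

Because the publish happens before the subscription exists, the waiter times out here; in the real library, with no timeout, it would hang exactly as described above.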