epicwhale opened this issue 8 months ago
Seems like this is working as intended: arq/connections.py#L148-L150
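For context, a hedged paraphrase of the guard those lines implement (`would_be_rejected` is a name made up here; the key prefixes are the defaults from `arq.constants`):

```python
from arq.constants import job_key_prefix, result_key_prefix

async def would_be_rejected(redis, job_id: str) -> bool:
    # Paraphrase of the guard in arq's enqueue_job, not the exact code:
    # a job ID is refused while either its job key (queued or running)
    # or its result key (finished, result retained) exists in Redis.
    n = await redis.exists(job_key_prefix + job_id, result_key_prefix + job_id)
    return n > 0
```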
I remember getting caught by this, too. I was able to work around it by having it not store results at all, but I don't know if that's an option in your case. Seems like you'll need to clear the previous run's results before enqueueing again.
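A minimal sketch of that cleanup, assuming the default `arq:result:` prefix from `arq.constants` (`the_task` is a placeholder function name):

```python
from arq import create_pool
from arq.connections import RedisSettings
from arq.constants import result_key_prefix  # 'arq:result:' by default

async def enqueue_fresh(job_id: str):
    redis = await create_pool(RedisSettings())
    # Delete the previous run's result so the fixed job ID is usable again.
    await redis.delete(result_key_prefix + job_id)
    # With the result key gone, enqueue_job returns a Job instead of None.
    return await redis.enqueue_job('the_task', _job_id=job_id)
```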
@joshwilson-dbx I ended up with the same workaround, configuring the worker job to not save any results at all.
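Concretely, that can be configured per function with `arq.worker.func`; a sketch (`the_task` is a placeholder):

```python
from arq.worker import func

async def the_task(ctx):
    return 'done'  # this return value would normally be stored as the result

class WorkerSettings:
    # keep_result=0 disables result storage for this function, so the
    # same _job_id can be enqueued again as soon as a run finishes.
    functions = [func(the_task, keep_result=0)]
```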
I guess the issue here, then, is that the docs need to be rephrased from "It guarantees that a job with a particular ID cannot be enqueued again until its execution has finished." to something like "It guarantees that a job with a particular ID cannot be enqueued again while it is still running, or until the result of the job has been cleared."
I encountered the same issue in #430. So yes, it can work without saving the results.
Looks like there's a pull request for this doc update that is waiting to be published. Could someone help publish it?
https://github.com/samuelcolvin/arq/commit/e0cd916988ebed6d01c26a4d3e9128aa2bf22a7d
I encountered a bug when trying to use keep_result=0 in this scenario: if the job has max_tries=1 set and it gets retried, the result still ends up being saved in Redis. This means the job won't get queued again as long as that result is not cleared :-/
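A sketch of the configuration being described, assuming the job requests a retry via arq's `Retry` exception (`flaky_task` is a placeholder):

```python
from arq import Retry
from arq.worker import func

async def flaky_task(ctx):
    # Ask arq to retry the job in 5 seconds.
    raise Retry(defer=5)

class WorkerSettings:
    # With max_tries=1 the retry request makes the job fail permanently,
    # and the failure result reportedly gets written to Redis despite
    # keep_result=0 -- which then blocks re-enqueueing that job ID.
    functions = [func(flaky_task, keep_result=0, max_tries=1)]
```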
The docs for _job_id state: "It guarantees that a job with a particular ID cannot be enqueued again until its execution has finished."
But I get a None response from enqueue_job() for the same job_id even after the execution of the job is complete. (The output of the job execution is visible both in the worker output and in a result key in Redis; if I delete the Redis key, I am able to enqueue_job again.)
Is this working as intended?
To reproduce:
demo.py
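Something like this minimal script (task name and sleep duration are illustrative) shows the behaviour; run the worker alongside it with `arq demo.WorkerSettings`:

```python
import asyncio

from arq import create_pool
from arq.connections import RedisSettings

async def the_task(ctx):
    return 'done'

class WorkerSettings:
    functions = [the_task]

async def main():
    redis = await create_pool(RedisSettings())
    # First enqueue returns a Job instance.
    print(await redis.enqueue_job('the_task', _job_id='my-fixed-id'))
    # Give the worker time to finish the job.
    await asyncio.sleep(5)
    # Second enqueue with the same ID returns None even though the job
    # has finished, because arq:result:my-fixed-id still exists in Redis.
    print(await redis.enqueue_job('the_task', _job_id='my-fixed-id'))

if __name__ == '__main__':
    asyncio.run(main())
```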
Console:
Worker output: