spetoolio opened this issue 2 years ago
did you solve this?
Unfortunately not. Are you experiencing it as well?
Dropping the TTL on the jobs to something very small helped a bit, but didn't solve it. We just clear the duplicates out of the queue every so often... it sucks.
No, just evaluating tech solutions. Thank you.
To be honest, I've decided to push as much queue management as possible out to the clients and avoid most backend async tasks. It's much easier to ensure end-to-end completion of a task without data loss, with recoverability, back pressure, accurate status, etc.
I have a recurring job that runs a "check" on a certain object. I enqueue a job with a specific job ID, built from a job key and the object's UUID, such as `check_object_{object_uuid}`. Then, when the recurring task runs, I make sure a job with that ID isn't already in the queue; only if it's absent do I want to queue up a job.
Here's the code, with different variable/function names:
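For illustration, a minimal in-memory sketch of this check-then-enqueue pattern (the names and the fake queue are hypothetical stand-ins, not the original code or any real queue library's API):

```python
import uuid

def job_id_for(object_uuid: str) -> str:
    # Deterministic ID: at most one pending "check" job per object.
    return f"check_object_{object_uuid}"

class FakeQueue:
    """In-memory stand-in for a job queue keyed by job ID (not a real broker)."""
    def __init__(self):
        self.jobs = {}  # job_id -> status

    def enqueue_check(self, object_uuid: str) -> bool:
        job_id = job_id_for(object_uuid)
        if self.jobs.get(job_id) == "queued":
            return False  # already pending; skip the duplicate
        self.jobs[job_id] = "queued"
        return True

q = FakeQueue()
obj = str(uuid.uuid4())
print(q.enqueue_check(obj))  # True: first enqueue goes through
print(q.enqueue_check(obj))  # False: duplicate is skipped
```

With an in-memory dict the check and the insert are effectively atomic; against a real broker, the check and the enqueue are two separate round trips, which leaves a race window if the recurring task can overlap with itself.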
However, for a reason I cannot determine, duplicates of the task end up being queued, and then duplicates of the duplicates, growing exponentially. I'll see jobs with status `finished` and `queued`, both in the queue, with the same ID. It seems like a bug that multiple jobs with the same ID can be queued at all. But it also seems that when I run this function:
It seems that all jobs in the queue, finished queue, or failed queue get re-queued if they have the same ID. Any advice on how I can manage this?
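The symptom described, a `finished` job and a `queued` job sharing an ID, suggests guarding on job *status* rather than bare presence, and purging stale records before re-enqueueing so an old entry can't be revived alongside the new one. A minimal sketch of such a guard (plain Python; the status names mirror common queued/started/finished/failed states, but this is not tied to any particular library):

```python
# Statuses treated as "in flight"; a job in one of these must not be duplicated.
ACTIVE = {"queued", "started"}

def should_enqueue(statuses: dict, job_id: str) -> bool:
    """Return True if it is safe to enqueue a job with this ID.

    Skip if an *active* job with the ID exists; delete stale
    finished/failed records first, so an old entry with the same ID
    cannot be picked up again alongside the new job."""
    status = statuses.get(job_id)
    if status in ACTIVE:
        return False
    if status in {"finished", "failed"}:
        del statuses[job_id]  # purge the stale record before re-enqueueing
    return True

statuses = {"check_object_abc": "finished"}
print(should_enqueue(statuses, "check_object_abc"))  # True; stale record purged
print(statuses)  # {} -> the finished record was removed
```

The design choice here is that "finished" and "failed" are terminal: they should never block a fresh run, but leaving them around is exactly what lets two records share one ID.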