matteeyah opened this issue 8 months ago
@matteeyah, yes! I've got this on my mind as well! In the end, I'm not sure there's a version of https://github.com/basecamp/solid_queue/issues/105 that I like or that's useful for what we need, so I thought, in the meantime, I'd add another parameter to limits_concurrency to discard instead of blocking. Something like:

on_conflict: :discard

By default, it'd be:

on_conflict: :block

which would be the current behaviour.
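For illustration, a minimal sketch of how that could look on a job class. The to: and key: arguments are Solid Queue's existing limits_concurrency options; on_conflict is the proposed addition here, not a shipped API:

```ruby
class DeliverWebhookJob < ApplicationJob
  # Existing behaviour: at most one job per account runs at a time,
  # and duplicates block until the running job finishes.
  # Proposed: on_conflict: :discard would drop conflicting jobs instead
  # of making them wait (the default, :block, keeps today's behaviour).
  limits_concurrency to: 1, key: ->(account) { account }, on_conflict: :discard

  def perform(account)
    # ...
  end
end
```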
That's exactly what I had in mind!
For reference: we're currently using https://github.com/veeqo/activejob-uniqueness. It serves a slightly different purpose, but it also checks for duplicate jobs. It also has an on_conflict option: it supports logging a conflict, raising an error on a conflict, and passing a proc to handle the conflict manually.
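For context, a rough sketch of that gem's API, with strategy and option names as I understand them from the activejob-uniqueness README (treat the details as approximate):

```ruby
class ProcessPaymentJob < ActiveJob::Base
  # Lock from enqueue time and again while executing;
  # on_conflict picks the duplicate-handling behaviour:
  # :log, :raise, or a proc that receives the conflicting job.
  unique :until_and_while_executing, on_conflict: :log

  # Handling the conflict manually with a proc would look like:
  #   unique :until_and_while_executing, on_conflict: ->(job) { ... }

  def perform(payment_id)
    # ...
  end
end
```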
Another vote for following the activejob-uniqueness conventions. I would note that of the strategies offered by the gem, until_and_while_executing is the one we have found most useful.
Is there any more word on this? Or any openness to others trying to contribute?
Hey @nhorton, sorry for the delay! I'm back at this now and would like to get this feature ready for version 1.0.
> of the strategies offered by the gem, until_and_while_executing is the one we have found most useful.
I think that would be the only strategy supported because of the way concurrency controls work. Right now jobs don't unblock other jobs or release the semaphore until they complete, and for simplicity, it'd be much easier to keep it that way.
> Or any openness to others trying to contribute?
Definitely! 🙌
@rosa Is there an ETA for 1.0?
@bilby91, my hope is by the end of August, but you know, there are always unexpected things 😅 😅
@rosa Awesome! I look forward to testing this feature.
In the end I won't be able to get this one in for v1.0, but I hope to get it there shortly after. Sorry!
Summary
The concurrency controls that currently exist allow blocking execution of "duplicate" jobs, i.e. jobs with the same arguments. These jobs wait, then execute after the currently running job finishes. There's no way to discard duplicate jobs and prevent them from running at all.
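To make that concrete, a small sketch using the current limits_concurrency API (the job and key are made up for illustration):

```ruby
class SyncAccountJob < ApplicationJob
  # At most one SyncAccountJob per account_id runs at a time.
  limits_concurrency to: 1, key: ->(account_id) { account_id }

  def perform(account_id)
    # ...
  end
end

# A duplicate enqueued while one is running is blocked, not discarded:
SyncAccountJob.perform_later(42) # runs
SyncAccountJob.perform_later(42) # waits, then runs after the first finishes
```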
Proposal
Add a way to discard jobs that get blocked by the concurrency controls.