winhamwr opened 8 years ago
Just curious, how will you implement the locking strategy without causing the project to require a specific backend? I partially copied a task mixin that implements locking for Django tasks to achieve a similar purpose: https://github.com/PolicyStat/jobtastic/issues/57#issuecomment-249306075
@thenewguy thanks to #63, we now have a pluggable cache backend. The goal is for anyone using memcached or redis to have out of the box support for the locking strategy. Others might need to write a different cache backend, though.
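The essential requirement the locking strategy places on a pluggable cache backend is an atomic `add` that fails when the key already exists, which both memcached and redis can provide. A minimal sketch of that idea (the names `acquire_lock`, `release_lock`, and `InMemoryCache` are hypothetical illustrations, not jobtastic's actual API, and a real backend would be memcached or redis rather than a dict):

```python
import time


class InMemoryCache:
    """Stand-in for a memcached/redis-style backend (illustration only).

    The locking strategy only needs an atomic add(key, value, timeout)
    that fails if the key already exists and hasn't expired.
    """

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def add(self, key, value, timeout):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.time():
            return False  # key already held, so the add fails
        self._store[key] = (value, time.time() + timeout)
        return True

    def delete(self, key):
        self._store.pop(key, None)


def acquire_lock(cache, task_signature, timeout=300):
    """Try to take the per-task lock; True means we hold it."""
    return cache.add('lock-%s' % task_signature, 'locked', timeout)


def release_lock(cache, task_signature):
    cache.delete('lock-%s' % task_signature)
```

Because `add` is atomic on memcached and redis, two workers racing on the same task signature can't both acquire the lock; the timeout keeps a crashed worker from holding it forever.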
Issue https://github.com/PolicyStat/jobtastic/issues/83 is helpful here
Sometimes, we might want multiple very-similar tasks to run (we can't just drop them or use the result from the first task), but not at the same time. Jobtastic can't currently help with this type of synchronization.
Support types of jobs that we want to prevent from running at the same time, while still running the other jobs later.
Simultaneous Execution Prevention

A `simultaneous_execution_prevention_timeout` option that defaults to 0 (off). If `herd_avoidance` is >0 (active) or `cache_duration` is >=0 (active), we should raise an exception if someone tries to also set `simultaneous_execution_prevention_timeout` to >0 (active). They won't play nice together, and it was almost certainly someone misunderstanding the docs.

Caveats to users based on countdown/eta/delay
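The incompatibility check could look something like this (a hypothetical sketch; the function name and exact defaults are assumptions, though the "off" defaults of 0 and -1 mirror the active conditions described above):

```python
def validate_task_options(simultaneous_execution_prevention_timeout=0,
                          herd_avoidance=0,
                          cache_duration=-1):
    """Reject option combinations that won't play nice together.

    herd_avoidance > 0 or cache_duration >= 0 means those features
    are active; combining either with simultaneous execution
    prevention should fail loudly instead of silently misbehaving.
    """
    prevention_active = simultaneous_execution_prevention_timeout > 0
    herd_active = herd_avoidance > 0
    caching_active = cache_duration >= 0
    if prevention_active and (herd_active or caching_active):
        raise ValueError(
            "simultaneous_execution_prevention_timeout can't be "
            "combined with herd_avoidance or cache_duration")
```

Raising at configuration time turns a subtle runtime misbehavior into an immediate, explainable error.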
This kind of thing can get you into a deadlock state with your queues. Because of the way worker `prefetch_count`, retry, and delay/eta/countdown interact, your retry call with a delay could block an entire pool of workers.

Let's say you have one worker pool with a concurrency of 3 and a prefetch_multiplier of 4. Then you queue up 13 jobs with simultaneous execution prevention turned on that all match via `significant_kwargs`. The first one to hit a worker will start running, and then the next 12 will get retried with a delay. Those will then immediately hang out in your worker pool. Since the pool only has 12 "slots" for tasks (3 concurrency times 4 prefetch_multiplier), and since the delay/eta/countdown happens at the worker pool level, the other 2 workers in your pool will have nothing to do. Even though you might be queuing up other jobs that could be run by those 2 workers, they can't get to them, because the pool has already pulled its max amount of jobs.

Could we mitigate this?
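The arithmetic of the scenario above, spelled out (a toy sketch; `prefetch_slots` is just an illustration of Celery's `concurrency * prefetch_multiplier` reservation model, not an actual Celery API):

```python
def prefetch_slots(concurrency, prefetch_multiplier):
    """Tasks a worker pool reserves at once under Celery's model."""
    return concurrency * prefetch_multiplier


# One pool, concurrency 3, prefetch_multiplier 4.
slots = prefetch_slots(concurrency=3, prefetch_multiplier=4)

# 13 similar jobs queued; only the first lock holder actually runs.
queued = 13
running = 1
delayed_retries = queued - running

# Every reserved slot is consumed by a retry waiting on its
# countdown, so the pool can't prefetch any unrelated work.
pool_starved = delayed_retries >= slots
```

Because the countdown waits inside the worker pool rather than on the broker, those 12 retries occupy all 12 reserved slots while doing nothing, which is exactly the starvation the caveat warns about.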