Hey there, I'm evaluating this library and loving it so far, great job! I've run into a question I can't clearly answer based on the docs so was hoping someone here could help.
In my current setup I'm initializing a single PgBoss instance and configuring it to process jobs from multiple queues - one call to .work() for each queue name. It's my understanding that each of these "workers" will poll from their respective queues and process jobs independently of each other. So far, exactly what I want.
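For context, the setup looks roughly like this (connection string and queue names are placeholders, the exact handler signature may differ depending on the pg-boss version, and this all runs inside an async function):

```js
const PgBoss = require('pg-boss');

const boss = new PgBoss('postgres://user:pass@host/mydb'); // placeholder connection string
await boss.start();

// One .work() registration per queue; each one polls and processes independently.
for (const queue of ['queue-a', 'queue-b', 'queue-c']) {
  await boss.work(queue, async (job) => {
    // queue-specific handling goes here
    console.log(`processing a job from ${queue}`);
  });
}
```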
What I want to do next is constrain the max number of active jobs being processed across all the queues/workers. For example, if I'm processing jobs from 20 different queues, I could have a situation where jobs exist in every queue and my single PgBoss instance will then run 20 different handlers concurrently, potentially overwhelming the host's resources, with all the issues that come with that.
I'd love to have the simplicity of one instance on one machine responsible for multiple queues, but able to throttle across queues so that only X number of different jobs can run concurrently.
Ideal setup (sketch below):

- Initialize PgBoss with a max global concurrency of 3
- Start work for 5 different queues: A, B, C, D and E
- 1 job exists on each queue: jobA, jobB, jobC, jobD, jobE
- The PgBoss instance polls the DB and sees these 5 jobs
- Normally it would fetch all 5 jobs and run all 5 concurrently on their respective handlers
- The global concurrency setting overrides this behavior and only fetches jobA, jobB and jobC (or any other combination, as long as it's 3 jobs max)
- On the next poll interval, jobA and jobB have been processed and jobC is still running
- Two new jobs are fetched to fill the remaining global capacity of 3 (from any queue other than C's)
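In code, I'm imagining something along these lines (globalConcurrency is a made-up option purely to illustrate the behavior I'm describing, not something I found in the pg-boss API):

```js
// Hypothetical sketch only: globalConcurrency is not a real pg-boss option.
const PgBoss = require('pg-boss');

const boss = new PgBoss({
  connectionString: 'postgres://user:pass@host/mydb',
  globalConcurrency: 3 // imaginary option: at most 3 active jobs across ALL queues
});
await boss.start();

for (const queue of ['A', 'B', 'C', 'D', 'E']) {
  await boss.work(queue, async (job) => {
    // no more than 3 of these handlers would ever be running at once,
    // regardless of which queues the jobs were fetched from
  });
}
```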
Is there an existing config option to achieve this? Hopefully this is clear, but if not let me know!
If the max concurrency you need is low, you could just use a single worker against 1 global queue, then store your "queue" as a property on the data payload.
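A rough sketch of that pattern (the queue name, the type property and the handlers are placeholders; depending on your pg-boss version the handler may receive an array of jobs instead of a single one, and you may need to create the queue explicitly first):

```js
const PgBoss = require('pg-boss');

const boss = new PgBoss('postgres://user:pass@host/mydb');
await boss.start();

// Handlers keyed by the logical "queue", which now lives in the job data.
const handlers = {
  emails: async (data) => { /* send the email */ },
  reports: async (data) => { /* build the report */ }
};

// Publishing side: everything goes into one physical queue.
await boss.send('all-work', { type: 'emails', to: 'someone@example.com' });

// Single worker, so at most one job runs at a time across all logical "queues".
await boss.work('all-work', async (job) => {
  const { type, ...data } = job.data;
  await handlers[type](data);
});
```

If you later need more than one job in flight, you could look at the concurrency/batch options that `work()` accepts in your pg-boss version, or register the same worker more than once.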