taskforcesh / bullmq

BullMQ - Message Queue and Batch processing for NodeJS and Python based on Redis
https://bullmq.io
MIT License

Rate limiting on queues does not seem to be working #139

Open syearsley opened 4 years ago

syearsley commented 4 years ago

Hi folks, loving this project so far, it offers everything I need! Good work!

I am however experiencing an issue with setting the rate limit for a queue.

I need to set the rate limit for a queue to ensure that, regardless of the number of workers, the throughput is controlled. We are using BullMQ as a remote job invoker and want to ensure that we don't overload our customers' services with request spikes.

Unfortunately setting the property seems to have no effect.

// An extreme rate limiting example of 1 item each 10 secs
const workQueue = new Queue("work:client");
workQueue.limiter = { duration: 10000, max: 1 };

Am I doing something wrong or is it not yet fully supported? I did notice that the QueueOptions does not have the limiter setting (unlike Bull).

Many thanks,

Scott

jesseokeya commented 4 years ago

Having this issue also

jesseokeya commented 4 years ago

Just figured it out. You need to add a QueueScheduler to your queue.

jesseokeya commented 4 years ago

reference -> https://github.com/taskforcesh/bullmq/issues/68

syearsley commented 4 years ago

@jesseokeya I cannot get this to work for me at all. Here is a clearer example of what I am testing:

const { Queue, QueueScheduler, Worker } = require("bullmq");

const queueName = "limit:work";

const worker = new Worker(queueName, async (job) => {
  console.log(job.data.name);
});

const scheduler = new QueueScheduler(queueName);

const queue = new Queue(queueName);
queue.limiter = {
  duration: 10000, 
  max: 3
};

for (let index = 0; index < 100; index++) {
  const jobName = "Job: " + index;
  queue.add(jobName, { name: jobName });
}

I am expecting that 3 items would be processed by the worker every 10 seconds, but I am seeing all of them being processed immediately.

Am I doing something wrong?

manast commented 4 years ago

it should process at most 3 jobs every 10 seconds, so if you add a fourth job it will not process it until 10 seconds have passed.
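For illustration, here is a minimal in-memory sketch of that fixed-window behaviour. This is not BullMQ's implementation (BullMQ keeps the counter in Redis so the limit holds across processes); the names here are made up for the example:

```javascript
// Illustration only: a fixed-window limiter mimicking the semantics of
// { max: 3, duration: 10000 }. Up to `max` jobs may run per window;
// further jobs must wait until the window expires.
function makeLimiter(max, duration) {
  let windowStart = 0;
  let count = 0;
  // Returns true if a job may run at time `now` (ms since start),
  // false if it must wait for the current window to end.
  return function tryAcquire(now) {
    if (now - windowStart >= duration) {
      windowStart = now; // a new window begins
      count = 0;
    }
    if (count < max) {
      count++;
      return true;
    }
    return false;
  };
}

const tryAcquire = makeLimiter(3, 10000);
console.log(tryAcquire(0));     // job 1: allowed
console.log(tryAcquire(1));     // job 2: allowed
console.log(tryAcquire(2));     // job 3: allowed
console.log(tryAcquire(3));     // job 4: refused until the window ends
console.log(tryAcquire(10000)); // allowed again in the next window
```

So adding a fourth job inside the window does not fail it; it simply stays queued until the window rolls over.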

syearsley commented 4 years ago

Thanks @manast, but that is not what I am seeing when running the code example I added - it processes all 100 jobs immediately.

manast commented 4 years ago

oh sorry, I see now that you are not passing the options correctly, try this:

const queue = new Queue(queueName, { limiter: {
  duration: 10000, 
  max: 3
}});

syearsley commented 4 years ago

Thanks for your response @manast, but unfortunately that doesn't compile, as the QueueOptions class does not have a 'limiter' property. This is why I was setting it directly on the Queue's limiter property.

I have looked through the code base and can only see logic and tests covering Worker rate limiting. As I said in my email, @manast, this is a key feature that I need for my project.

manast commented 4 years ago

sorry, I missed that the option is not for the Queue class, but for the Worker:

    const worker = new Worker(queueName, async job => {}, {
      limiter: {
        max: 3,
        duration: 10000,
      },
    });

syearsley commented 4 years ago

Ah OK, thanks. So I take it then that rate limiting on queues is not yet available in BullMQ as it is in Bull.

That is a shame, as I'll need to implement my own solution; this is a key function that I need.

Do you think it is something that will be implemented?

Thanks again.

manast commented 4 years ago

In Bull 3 it is also only implemented on the worker; the worker is the one that takes the rate limiting into consideration. Queue is just the class that adds jobs to the queue. In Bull 3 there is only one class, "Queue", that can act both as a client and as a worker, but the rate limiting is still done by the worker.
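To make that design concrete, here is a small simulation (hypothetical names, with a plain object standing in for the counter BullMQ keeps in Redis) showing why a limiter configured on every worker still caps the combined throughput:

```javascript
// Simulation only: `shared` stands in for the limiter state BullMQ
// stores in Redis. Because every worker checks the same shared state,
// the combined throughput stays at `max` jobs per window no matter
// how many workers there are.
const shared = { max: 3, duration: 10000, windowStart: 0, count: 0 };

function acquireSlot(state, now) {
  if (now - state.windowStart >= state.duration) {
    state.windowStart = now; // a new window begins
    state.count = 0;
  }
  if (state.count < state.max) {
    state.count++;
    return true; // this worker may process a job
  }
  return false; // rate limited: the job stays in the queue
}

// Five workers all try to pick up a job at the same instant;
// only three slots exist in the window, so two workers are refused.
const results = [1, 2, 3, 4, 5].map(() => acquireSlot(shared, 0));
console.log(results); // [ true, true, true, false, false ]
```

Scaling from five workers to fifty changes nothing in this model: the shared counter, not the worker count, decides how many jobs run per window.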

syearsley commented 4 years ago

@manast Ahh, I finally understand! I have achieved what I require by giving each Worker instance the same limiter settings (in my example, 3 items every 10 seconds, as above). Now, no matter how many Worker instances I have, they only process the specified number of items between them. Perfect.

From an API user-experience point of view, though, I found this very confusing, and I would suggest it may need a bit of a review. I can see how the design has evolved from Bull 3, where the Queue and Worker have now been split, which I agree is a totally sensible idea. But the rate-limiting concept being applied to a Worker is confusing and perhaps not an obvious upgrade for existing users.

Bull 3's docs state:

It is possible to create queues that limit the number of jobs processed in a unit of time. The limiter is defined per queue, independently of the number of workers, so you can scale horizontally and still limiting the rate of processing easily:

... but now the rate limiting is defined on a Worker

From my perspective it feels more natural that the setting should still be applied to the Queue instance and ideally internally understood by any registered Worker.

I hope my feedback is of some value and I sincerely appreciate your help.

Many thanks,

Scott