taskforcesh / bullmq-pro-support

Support repository for BullMQ Pro edition.

Group Rate Limiting seems to drain/ignore all posterior jobs #39

Closed SirBernardPhilip closed 1 year ago

SirBernardPhilip commented 1 year ago

I already explained this in comments on issue #1, but I'll repeat it here. I am using the group rate limit functionality to stop processing jobs for a certain group for a period of time and then continue with all the jobs that were previously queued. However, instead of continuing with the execution of the remaining jobs, it only retries the original job that caused the rate limiting and stops there. For screen captures of the state Redis ends up in, you can refer to my original comment.

I have created a small repository that shows what is happening. I stripped a lot of code from the original, so if something looks like a silly way to implement the queues, it's because this is an oversimplification of our actual architecture.
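The gist of the repro is this pattern (a condensed sketch, not the exact repo code: the queue name, connection settings, and the `shouldRateLimit` trigger are made up for illustration, while `rateLimitGroup` and `RateLimitError` are the manual group rate-limit API from the BullMQ Pro docs):

```typescript
import { QueuePro, WorkerPro } from '@taskforce.sh/bullmq-pro';

const connection = { host: 'localhost', port: 6379 };

// Hypothetical trigger: rate limit subtask2 once, then let it through.
let alreadyLimited = false;
function shouldRateLimit(subtaskId: string): boolean {
  if (subtaskId === 'subtask2' && !alreadyLimited) {
    alreadyLimited = true;
    return true;
  }
  return false;
}

const worker = new WorkerPro(
  'subtasks', // illustrative queue name
  async (job) => {
    console.log('Running job with data:', job.data);
    if (shouldRateLimit(job.data.subtaskId)) {
      console.log('Rate limiting job with data:', job.data);
      // Block this job's group for 5 seconds...
      await worker.rateLimitGroup(job, 5000);
      // ...and throw the special error so the job is put back in the
      // queue as rate limited instead of being marked as failed.
      throw WorkerPro.RateLimitError();
    }
    console.log('Finished job with data:', job.data);
  },
  { connection },
);

const queue = new QueuePro('subtasks', { connection });
console.log('QueueBroker created, adding 3 subtasks for user1');
for (const subtaskId of ['subtask1', 'subtask2', 'subtask3']) {
  await queue.add(
    'subtask',
    { subtaskId, userPrivateId: 'user1' },
    { group: { id: 'user1' } }, // all three jobs belong to group "user1"
  );
}
```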

When running the code, the output is as follows:

```
QueueBroker created, adding 3 subtasks for user1
Running job with data: { subtaskId: 'subtask1', userPrivateId: 'user1' }
Finished job with data: { subtaskId: 'subtask1', userPrivateId: 'user1' }
Running job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
Rate limiting job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
Running job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
Finished job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
```

The expected output would be:

```
QueueBroker created, adding 3 subtasks for user1
Running job with data: { subtaskId: 'subtask1', userPrivateId: 'user1' }
Finished job with data: { subtaskId: 'subtask1', userPrivateId: 'user1' }
Running job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
Rate limiting job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
Running job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
Finished job with data: { subtaskId: 'subtask2', userPrivateId: 'user1' }
Running job with data: { subtaskId: 'subtask3', userPrivateId: 'user1' }
Finished job with data: { subtaskId: 'subtask3', userPrivateId: 'user1' }
```

manast commented 1 year ago

Thanks for submitting this issue, and sorry for the delay. I have examined the code, and I wonder if you could possibly create a test case without any external dependencies? It is difficult to debug the case with the extra code, and it is not unthinkable that the issue is caused by one of the external dependencies.

SirBernardPhilip commented 1 year ago

I have updated the repository to use the singleton pattern through static classes instead of dependency injection. The only other dependency in the project is now ioredis, and it is used solely to prune the databases after running the examples, so it should not affect the execution of the main program. With the updated code the issue still persists.

manast commented 1 year ago

Thanks for the updated repo. I am able to reproduce the issue now; I will keep you updated when I know more.

manast commented 1 year ago

The issue is fixed; we are just running some tests, and a release will be made later today. The problem was that when using the manual group rate limit, the count of jobs in active status was not updated correctly, so the group stayed in the maxed status forever.
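To illustrate the mechanics (a simplified model of the accounting described above, not the actual BullMQ Pro internals):

```typescript
// A group is considered "maxed" while its active count has reached
// its concurrency limit; maxed groups are skipped when picking jobs.
interface GroupState {
  active: number;      // jobs from this group currently in active status
  concurrency: number; // max active jobs allowed for the group
}

function isMaxed(group: GroupState): boolean {
  return group.active >= group.concurrency;
}

// When a manually rate-limited job leaves active status, the group's
// active count must be decremented. The bug was effectively a missing
// decrement at this step, so isMaxed() kept returning true and the
// remaining jobs in the group were never picked up.
function onJobRateLimited(group: GroupState): void {
  group.active -= 1; // the missing bookkeeping step
}
```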

SirBernardPhilip commented 1 year ago

Perfect! Thank you very much

manast commented 1 year ago

The fix is not live yet; we had a small issue with the release and are working on it now.

manast commented 1 year ago

The fix is in v5.1.13; please upgrade to get it.