vapor / queues

A queue system for Vapor.
MIT License

Queue only runs 0 or 1 task per refresh interval per worker #124

Open danpalmer opened 1 year ago

danpalmer commented 1 year ago

Describe the bug

The queues package is only able to run up to 1 task per refresh interval, per worker. For example, a system configured with 1 worker and a 10 second refresh interval, running tasks that take 1 second, cannot be utilised beyond 10%, and cannot run more than 1 task every 10 seconds.

To Reproduce

In a project with queues and a task configured...

  1. Set the number of workers to 1
  2. Set the refresh interval to 10 seconds
  3. Enqueue 10 tasks.

Expected behaviour: All 10 tasks complete one after the other with effectively zero gap between them.

Actual behaviour: The first task completes, and the queue waits until 10 seconds have elapsed before running the next.
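To make the numbers concrete, here is the back-of-the-envelope arithmetic for the scenario above (the variable names are mine for illustration, not anything from the Queues API):

```swift
// Scenario from the report: 1 worker, 10 s refresh interval,
// 1 s tasks, 10 tasks enqueued. Names are illustrative only.
let refreshInterval = 10.0  // seconds between polls
let taskDuration = 1.0      // seconds each task takes to run
let taskCount = 10.0

// Observed: at most one task runs per refresh tick.
let observedTotal = taskCount * refreshInterval  // 100 seconds
// Expected: tasks run back-to-back, polling only when idle.
let expectedTotal = taskCount * taskDuration     // 10 seconds
// Worker utilisation under the observed behaviour.
let utilisation = taskDuration / refreshInterval // 0.1, i.e. 10%

print(observedTotal, expectedTotal, utilisation)
```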

Environment

queues 1.12.1
vapor-queues-fluent-driver 3.0.0-beta1

Additional context

I'm not sure whether this is worth filing on the fluent driver repo as well. I think there are two possible scenarios here:

  1. queues intends that the driver's pop method blocks until a job is available, likely for use with BRPOPLPUSH/etc. on Redis. That would make this a bug in vapor-queues-fluent-driver, or in the queues documentation.
  2. (and/or) This is unintentional behaviour in queues, and it should instead keep popping after each completed job until no work remains.

While the intended semantics of refreshInterval aren't documented (as far as I can tell), I think the most reasonable behaviour, and the one most users of other queueing systems would expect, is for it to be the period to wait when there is no available work. That way, tuning this value only affects how often storage is polled and does not limit the utilisation of queue workers.
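For what it's worth, here is a minimal sketch of the worker-loop semantics I'm describing, where the refresh interval is purely an idle backoff. None of these names (`Job`, `runOneCycle`, the closures) come from the Queues package; this is just an assumed shape:

```swift
struct Job { let id: Int }

// Hypothetical worker cycle: drain all available jobs back-to-back,
// then sleep for the refresh interval only once the queue is empty.
func runOneCycle(
    refreshInterval: Double,
    pop: () -> Job?,
    run: (Job) -> Void,
    sleep: (Double) -> Void
) {
    while let job = pop() {    // keep popping while work is available
        run(job)
    }
    sleep(refreshInterval)     // refreshInterval only applies when idle
}

// Tiny in-memory demo: all 10 queued jobs run in a single cycle.
var queue = (1...10).map(Job.init)
var ran: [Int] = []
var slept = 0.0
runOneCycle(
    refreshInterval: 10,
    pop: { queue.isEmpty ? nil : queue.removeFirst() },
    run: { ran.append($0.id) },
    sleep: { slept = $0 }
)
print(ran.count, slept)
```

Under these semantics, the 10-task reproduction above would finish in roughly 10 seconds of work plus a single idle wait, instead of 100 seconds.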

danpalmer commented 1 year ago

@0xTim hey, what do you think of this? Is this a bug in the queues package, or a bug in the fluent driver due to a mismatch in expectations? Or am I just using it wrong?

danpalmer commented 4 months ago

@0xTim @gwynne hey, just a quick ping as it has been a while. I've looked at this in more detail and I'm fairly sure this is a major performance issue with this package, at least when used with the Fluent driver, and fixing it looks like it might require significant work in the main Queues package.

I could be wrong, and would love to be corrected here as it would save a lot of work!

danpalmer commented 4 months ago

First pass fix in #134.

0xTim commented 3 months ago

@danpalmer sorry for the delay on this! Got buried in GH notifications. I've had a quick look at the PR and it makes sense, but we'll need to update it to avoid breaking the API.