jpbruinsslot opened this issue 3 weeks ago
Long locks (for example, how the plugin cache refresh was implemented before PR #3752) would also probably stall the threads that create jobs.
Another long-running task might be the endpoint that dispenses all the queues to the runners. Do we really need that list in there just to pop a job (any job) from the queue? Could we not periodically refresh the list of organisations from an endpoint that returns just that (and as such is fast), instead of gathering the queues? We know a boefje queue should exist; if the scheduler can't find it, the runner could retry a few times and then remove that organisation from its list.
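A minimal sketch of what that could look like on the runner side, assuming a lightweight organisations endpoint and per-organisation boefje queues (the endpoint paths, queue naming, and retry count here are assumptions for illustration, not the actual scheduler API):

```python
import time

import httpx

QUEUE_MISSING_RETRIES = 3  # assumed retry budget before dropping an organisation


class BoefjeRunner:
    """Sketch: keep a cheap list of organisation ids instead of the full queue dump."""

    def __init__(self, scheduler_url: str, refresh_interval: float = 60.0):
        self.client = httpx.Client(base_url=scheduler_url, timeout=5.0)
        self.refresh_interval = refresh_interval
        self.orgs: dict[str, int] = {}  # org_id -> consecutive "queue not found" count
        self.last_refresh = 0.0

    def refresh_orgs(self) -> None:
        # Hypothetical fast endpoint that returns only organisation ids,
        # instead of serialising every queue.
        org_ids = self.client.get("/organisations/ids").json()
        self.orgs = {org_id: self.orgs.get(org_id, 0) for org_id in org_ids}
        self.last_refresh = time.monotonic()

    def pop_jobs(self) -> None:
        if time.monotonic() - self.last_refresh > self.refresh_interval:
            self.refresh_orgs()

        for org_id in list(self.orgs):
            # We assume a boefje queue exists per organisation; if the scheduler
            # can't find it a few times in a row, drop the organisation locally
            # until the next refresh.
            response = self.client.post(f"/queues/boefje-{org_id}/pop")
            if response.status_code == 404:
                self.orgs[org_id] += 1
                if self.orgs[org_id] >= QUEUE_MISSING_RETRIES:
                    del self.orgs[org_id]
                continue
            self.orgs[org_id] = 0
            job = response.json()
            if job:
                self.handle(job)

    def handle(self, job: dict) -> None:
        ...  # dispatch to the actual boefje execution
```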
> It does not yet explain the queue saturation. Local testing with 300+ organisations did not result in long wait times.
It looks like the boefje runner would try to ingest the list of queues for all 300 organisations and would time out on receiving this list (as the scheduler itself was probably also still very busy fetching all the katalogi), resulting in no queues being present in the runner, at which point it would stop fetching jobs.
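A simplified sketch of that suspected failure mode (not the actual runner code; the endpoint and timeout value are placeholders): one big request for every queue, and a swallowed timeout that leaves the runner with an empty queue set.

```python
import httpx


def fetch_queues(scheduler_url: str) -> list[dict]:
    """Fetch every queue in one request.

    With 300+ organisations the scheduler may still be busy building its
    katalogus caches, so this call times out.
    """
    try:
        response = httpx.get(f"{scheduler_url}/queues", timeout=30)
        response.raise_for_status()
        return response.json()
    except httpx.HTTPError:
        # Swallowing the error here means the runner ends up with no queues
        # at all and simply stops fetching jobs, even though the scheduler
        # keeps pushing items until its queues are saturated.
        return []
```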
> Long locks (for example, how the plugin cache refresh was implemented before PR #3752) would also probably stall the threads that create jobs.
Exactly. That could indicate that the queue was already full, a restart happened, and bootstrapping all the caches then took substantial time.
> Another long-running task might be the endpoint that dispenses all the queues to the runners. Do we really need that list in there just to pop a job (any job) from the queue? Could we not periodically refresh the list of organisations from an endpoint that returns just that (and as such is fast), instead of gathering the queues? We know a boefje queue should exist; if the scheduler can't find it, the runner could retry a few times and then remove that organisation from its list.
We can optimize the endpoint to relay the available queues that a runner can pop jobs from; this is tracked in https://github.com/minvws/nl-kat-coordination/issues/3358. Filtering parameters can also be added to give the task runner a narrower view of what's available.
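As a rough sketch of such filtering parameters, assuming a FastAPI-style route (the route path, parameter names, and the `get_all_queues` accessor are illustrative assumptions, not the scheduler's actual API):

```python
from fastapi import FastAPI, Query

app = FastAPI()


def get_all_queues() -> list[dict]:
    """Placeholder for the scheduler's internal queue listing."""
    return []


@app.get("/queues")
def list_queues(
    # Hypothetical filters so a task runner only receives the queues it
    # actually pops from, instead of every queue in the system.
    queue_type: str | None = Query(default=None, description="e.g. 'boefje' or 'normalizer'"),
    organisation_id: list[str] | None = Query(default=None),
    limit: int = Query(default=100, le=1000),
) -> list[dict]:
    queues = get_all_queues()
    if queue_type is not None:
        queues = [q for q in queues if q["type"] == queue_type]
    if organisation_id:
        wanted = set(organisation_id)
        queues = [q for q in queues if q["organisation_id"] in wanted]
    return queues[:limit]
```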
> > Long locks (for example, how the plugin cache refresh was implemented before PR #3752) would also probably stall the threads that create jobs.
>
> Exactly. That could indicate that the queue was already full, a restart happened, and bootstrapping all the caches then took substantial time.
It could, but if the second problem existed it would mean job-popping would stop, and the queues would fill up regardless of the katalogus locks being slow or broad. Refreshing the katalogus caches continuously, because they take longer to fill, just adds a lot of load to the system, but as far as I can see it does not stop functionality (since the code stops the cache timers while updating); there would at least be a small (default 30s) window of valid plugin caches.
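For reference, a minimal sketch of the cache pattern being discussed, assuming a TTL-based plugin cache with a default 30s lifetime guarded by a lock (names and structure are assumptions, not the actual implementation): if the refresh is slow and the lock is broad, every thread that needs the plugin list, including the ones creating jobs, blocks behind it.

```python
import threading
import time


class PluginCache:
    """Sketch of a TTL-based plugin cache with an assumed 30s lifetime."""

    def __init__(self, fetch_plugins, lifetime: float = 30.0):
        self._fetch_plugins = fetch_plugins  # callable returning the plugin list
        self._lifetime = lifetime
        self._lock = threading.Lock()
        self._plugins: list[dict] = []
        self._expires_at = 0.0

    def get(self) -> list[dict]:
        with self._lock:
            if time.monotonic() >= self._expires_at:
                # While this refresh runs, the expiry timer is effectively
                # paused and every other thread calling get() blocks here;
                # a slow katalogus call therefore stalls job creation.
                self._plugins = self._fetch_plugins()
                self._expires_at = time.monotonic() + self._lifetime
            return self._plugins
```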
Possible sources:
Local investigation:
For point 1, it is possible that having many organisations leads to long wait times when starting up the application, which would leave the scheduler unresponsive and unable to handle requests. However, this does not explain why the queue was at full capacity; for that to happen, the scheduler must already have been running and pushing items onto the queue.
Indeed, long wait times with many organisations and while creating caches are sub-optimal (see https://github.com/minvws/nl-kat-coordination/issues/3357). It does not yet explain the queue saturation, though. Local testing with 300+ organisations did not result in long wait times.
...