minvws / nl-kat-coordination

OpenKAT scans networks, finds vulnerabilities and creates accessible reports. It integrates the most widely used network tools and scanning software into a modular framework, accesses external databases such as shodan, and combines the information from all these sources into clear reports. It also includes lots of cat hair.
https://openkat.nl
European Union Public License 1.2

Investigate scheduler queue saturation #3765

Open jpbruinsslot opened 3 weeks ago

jpbruinsslot commented 3 weeks ago

Possible sources:

  1. Long start-up times: bootstrapping organisations and creating plugin caches for each of them can result in potentially long wait times
  2. Flushing the plugin caches does this for all organisations at once, again resulting in potentially long wait times (see the sketch after this list)
  3. Referencing the plugin caches of organisations that don't have plugins might result in strange behaviour
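
A minimal sketch of the pattern behind source 2, assuming a single lock guarding one shared cache (the names and structure are illustrative, not the actual scheduler code):

```python
import threading
import time

cache_lock = threading.Lock()
plugin_cache: dict[str, dict] = {}  # org_id -> cached plugin data


def fetch_plugins_from_katalogus(org_id: str) -> dict:
    time.sleep(0.1)  # stand-in for an HTTP round-trip per organisation
    return {}


def flush_plugin_caches(org_ids: list[str]) -> None:
    # One lock guards the cache for *all* organisations, so every
    # scheduler thread that needs plugin data blocks until the full
    # rebuild is done. With hundreds of organisations, each needing a
    # katalogus round-trip, that wait can become substantial.
    with cache_lock:
        for org_id in org_ids:
            plugin_cache[org_id] = fetch_plugins_from_katalogus(org_id)
```

While `cache_lock` is held, anything that reads the cache stalls, which is how a flush for all organisations can translate into long wait times.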

Local investigation:

underdarknl commented 3 weeks ago

Long locks (for example, how the plugin cache refresh was implemented before PR #3752) would also probably stall the threads that create jobs.

Another long-running task might be the endpoint that dispenses all the queues to the runners. Do we really need that list there just to pop a job (any job) from the queue? Could we instead periodically refresh the list of organisations from an endpoint that returns just that (and is therefore fast), instead of gathering the queues? We know a boefje queue should exist, and if the scheduler can't find it, we could retry a few times and then remove that organisation from the list in the job runners.
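
Sketched on the runner side, with hypothetical endpoint paths and a made-up `MAX_MISSES` retry policy, that could look like:

```python
import httpx

SCHEDULER = "http://scheduler:8004"  # hypothetical base URL
MAX_MISSES = 3  # drop an organisation after this many missing-queue replies

known_orgs: dict[str, int] = {}  # org_id -> consecutive "queue not found" count


def refresh_org_list() -> None:
    # A cheap endpoint that returns only organisation ids, instead of
    # materialising every queue; the path is illustrative.
    resp = httpx.get(f"{SCHEDULER}/organisations", timeout=5)
    resp.raise_for_status()
    for org_id in resp.json():
        known_orgs.setdefault(org_id, 0)


def pop_job(org_id: str):
    # Assume the boefje queue exists; if the scheduler says it doesn't,
    # count a miss and drop the organisation after a few retries.
    resp = httpx.post(f"{SCHEDULER}/queues/boefje-{org_id}/pop", timeout=5)
    if resp.status_code == 404:
        misses = known_orgs.get(org_id, 0) + 1
        if misses >= MAX_MISSES:
            known_orgs.pop(org_id, None)
        else:
            known_orgs[org_id] = misses
        return None
    resp.raise_for_status()
    known_orgs[org_id] = 0
    return resp.json()
```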

underdarknl commented 3 weeks ago

This does not yet explain the queue saturation: locally creating 300+ organisations did not result in long wait times.

It looks like the boefje runner would try to ingest the list of queues for all 300 organisations and would time out receiving that list (as the scheduler itself was probably also still busy fetching all katalogi), resulting in no queues being present in the runner, and as such it would stop fetching jobs.
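
Simplified, the suspected failure mode is something like this (the URL, timeout value, and function name are illustrative):

```python
import httpx


def load_queues() -> list[str]:
    try:
        # With 300+ organisations, the scheduler is still busy building
        # its katalogus caches, so this call exceeds the client timeout.
        resp = httpx.get("http://scheduler:8004/queues", timeout=10)
        resp.raise_for_status()
        return [queue["id"] for queue in resp.json()]
    except httpx.TimeoutException:
        # The timeout leaves the runner with an empty queue list,
        # after which it stops popping jobs entirely.
        return []
```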

jpbruinsslot commented 3 weeks ago

> Long locks (for example, how the plugin cache refresh was implemented before PR #3752) would also probably stall the threads that create jobs.

Exactly. Which could indicate that the queue was already full, a restart happened, and it would then take substantial time to bootstrap all the caches.

> Another long-running task might be the endpoint that dispenses all the queues to the runners. Do we really need that list there just to pop a job (any job) from the queue? Could we instead periodically refresh the list of organisations from an endpoint that returns just that (and is therefore fast), instead of gathering the queues? We know a boefje queue should exist, and if the scheduler can't find it, we could retry a few times and then remove that organisation from the list in the job runners.

We can optimize the endpoint to relay the available queues that a runner can pop jobs from. This is tracked in https://github.com/minvws/nl-kat-coordination/issues/3358. Filtering parameters can also be added to give the task runner a narrower view of what's available.
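
As an illustration, such a filtered endpoint could look roughly like this (a sketch assuming a FastAPI-style API; the path, parameter names, and `get_all_queue_metadata` helper are hypothetical):

```python
from fastapi import FastAPI, Query

app = FastAPI()


def get_all_queue_metadata() -> list[dict]:
    return []  # stand-in for the scheduler's internal queue registry


@app.get("/queues")
def list_queues(
    queue_type: str | None = Query(None, description="e.g. 'boefje' or 'normalizer'"),
    org_id: list[str] | None = Query(None, description="limit to these organisations"),
) -> list[dict]:
    # Return only queue metadata (id, type, size), filtered server-side,
    # instead of the full queue listing for every organisation.
    queues = get_all_queue_metadata()
    if queue_type:
        queues = [q for q in queues if q["type"] == queue_type]
    if org_id:
        queues = [q for q in queues if q["org_id"] in org_id]
    return queues
```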

underdarknl commented 3 weeks ago

> > Long locks (for example, how the plugin cache refresh was implemented before PR #3752) would also probably stall the threads that create jobs.

> Exactly. Which could indicate that the queue was already full, a restart happened, and it would then take substantial time to bootstrap all the caches.

It could, but if the second problem existed, it would mean job-popping would stop and the queues would fill up regardless of the katalogus locks being slow or broad. Continuously refreshing the katalogus caches because they take longer to fill just adds a lot of load to the system, but it does not stop functionality as far as I can see (since the code stops the cache timers while updating); there would at least be a small (default 30s) window of valid plugin caches.
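
As a toy model of that behaviour (not the actual katalogus client): a TTL cache whose expiry is only reset after a refresh completes always leaves readers a full validity window, even when the refresh itself is slow.

```python
import threading
import time


class PluginCache:
    """Toy model: the TTL clock is effectively paused while a refresh
    runs, so readers always get a cache that is valid for at least the
    full TTL window after the refresh finishes."""

    def __init__(self, ttl: float = 30.0):  # default 30s validity window
        self.ttl = ttl
        self._lock = threading.Lock()
        self._data: dict = {}
        self._expires_at = 0.0

    def get(self, key):
        with self._lock:
            if time.monotonic() >= self._expires_at:
                self._refresh()
            return self._data.get(key)

    def _refresh(self) -> None:
        self._data = self._fetch()  # may be slow under load
        # Expiry is set only *after* the refresh completes, so a slow
        # refresh extends rather than shortens the validity window.
        self._expires_at = time.monotonic() + self.ttl

    def _fetch(self) -> dict:
        return {}  # stand-in for a katalogus round-trip
```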