melezhik / sparky

Sparky is a flexible and minimalist continuous integration server and distribute tasks runner written in Raku.

Worker workload #30

Open thibaultduponchelle opened 3 years ago

thibaultduponchelle commented 3 years ago

Somewhat related to #27 and #28, but I would like a view with a graph to monitor the health of workers. We could also use Kubernetes for this (but Sparky won't always be deployed in k8s...).

melezhik commented 3 years ago

The challenge with that request is that Sparky workers can be dynamic because of FTP. We can't count them just by counting all sparky.yml files across all projects.

It makes sense with k8s though, where we could assume that workers map to Docker pods, and pod statistics are usually available in a k8s cluster ...

melezhik commented 3 years ago

Also, should we count Docker containers on a localhost (non-k8s) setup?

thibaultduponchelle commented 3 years ago

For this issue:

About localhost, I don't know.

melezhik commented 3 years ago

Counting jobs per worker is doable ... in the Sparky database we have a list of builds. For every build we can find the worker it was (or is being) executed on, so we can make a graph out of that ... builds also have a state (successful/failed), so we can measure by state as well. Will it work?
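The idea above can be sketched with a single GROUP BY over the builds list. This is only an illustration: the table and column names (`builds`, `worker`, `state`) are assumptions, not Sparky's actual schema, and an in-memory SQLite database stands in for Sparky's real one.

```python
# Sketch: per-worker build counts grouped by state.
# "builds", "worker", and "state" are hypothetical names, not Sparky's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE builds (id INTEGER PRIMARY KEY, worker TEXT, state TEXT)"
)
conn.executemany(
    "INSERT INTO builds (worker, state) VALUES (?, ?)",
    [
        ("worker-1", "successful"),
        ("worker-1", "failed"),
        ("worker-1", "successful"),
        ("worker-2", "successful"),
    ],
)

# One row per (worker, state) pair -- the data series a workload
# graph in the UI could be drawn from.
rows = conn.execute(
    "SELECT worker, state, COUNT(*) FROM builds "
    "GROUP BY worker, state ORDER BY worker, state"
).fetchall()

for worker, state, n in rows:
    print(f"{worker}: {n} {state}")
# → worker-1: 1 failed
#   worker-1: 2 successful
#   worker-2: 1 successful
```

Because workers are derived from the builds they actually ran, this sidesteps the problem of counting dynamic workers up front: a worker appears in the graph as soon as it has executed at least one build.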

thibaultduponchelle commented 3 years ago

Yes!