thibaultduponchelle opened this issue 3 years ago
The challenge with that request is that Sparky workers could be dynamic because of FTP. We can't count them just by counting all the sparky.yml files across all projects.
It makes sense with k8s though, where we could assume that workers equal Docker pods, and pod statistics are usually available in a k8s cluster ...
Also, should we count Docker containers on a localhost (non-k8s) setup?
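If Sparky does run inside Kubernetes, the pod count could stand in for the worker count. A minimal sketch using the Python Kubernetes client, assuming workers carry a hypothetical `app=sparky-worker` label (the real deployment may use different labels or namespaces):

```python
# Sketch: count Sparky worker pods in a k8s cluster.
# The label selector "app=sparky-worker" is an assumption, not Sparky's actual labelling.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(label_selector="app=sparky-worker")
by_phase = Counter(pod.status.phase for pod in pods.items)

print(f"workers: {len(pods.items)}")  # total pods ~ worker count
print(dict(by_phase))                 # e.g. {'Running': 3, 'Pending': 1}
```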
For this issue:
About localhost, I don't know.
Counting jobs per worker is doable ... in the Sparky database we have a list of builds. For every build we can find the worker it was executed on (or is being executed on), so we can make a graph out of it ... builds also have a state (successful/failed), so we can measure by state as well. Will that work?
Yes !
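A minimal sketch of the aggregation described above, assuming builds can be read as a list of records with `worker` and `state` fields (the field names are hypothetical; the actual Sparky build schema may differ):

```python
# Sketch: aggregate builds per worker and state, so the result can feed a health graph.
# The input format below is illustrative only.
from collections import defaultdict

builds = [
    {"worker": "worker-1", "state": "successful"},
    {"worker": "worker-1", "state": "failed"},
    {"worker": "worker-2", "state": "successful"},
]

stats = defaultdict(lambda: {"successful": 0, "failed": 0})
for build in builds:
    stats[build["worker"]][build["state"]] += 1

for worker, counts in stats.items():
    print(worker, counts)
# worker-1 {'successful': 1, 'failed': 1}
# worker-2 {'successful': 1, 'failed': 0}
```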
Somewhat related to #27 and #28, but I would like a view with a graph to monitor the health of workers. We could also play with Kubernetes for this (but it won't always be deployed in k8s...).