Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST
What happened:
At the moment, the cluster controller checks connectivity between the manager and the target cluster at a regular interval (configurable via the config map) and updates the status accordingly.
This gives us an opportunity to add more monitoring driven by configuration. For example, an administrator may want to monitor a specific set of pods in a known namespace of a target cluster.
Maybe something like (rough sketch; field names are illustrative — note the second monitor needs a separate field for the target deployment name, shown here as `deploymentName`):

```yaml
monitors:
  - name: kube-system-ns-monitor
    type: Pod
    min: 3
    maxAllowedFailureIntervals: 3
    result: warning/error
  - name: some-system-deployment-monitor
    type: Deployment
    deploymentName: manager-server
    maxAllowedFailureIntervals: 3
    result: warning/error
```
What you expected to happen:
Based on the monitoring results, the cluster state should transition between Ready, Warning, and Error.
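To make the expected behavior concrete, here is a minimal sketch of how the controller might derive the cluster state from per-monitor results. All type and field names (`MonitorStatus`, `FailedIntervals`, `OnFailure`, `ClusterState`) are hypothetical and only illustrate the "worst monitor wins" idea; this is not an existing API.

```go
package main

import "fmt"

// Result is the proposed cluster state. Names follow the
// Ready/Warning/Error states mentioned in this issue.
type Result string

const (
	Ready   Result = "Ready"
	Warning Result = "Warning"
	Error   Result = "Error"
)

// MonitorStatus tracks how many consecutive intervals a monitor has
// failed, and what severity a sustained failure should produce.
// This struct is illustrative only.
type MonitorStatus struct {
	Name                       string
	FailedIntervals            int
	MaxAllowedFailureIntervals int
	OnFailure                  Result // Warning or Error, per the monitor's `result` field
}

// ClusterState returns the worst outcome among monitors that have
// exceeded their allowed failure intervals; Ready if none have.
func ClusterState(monitors []MonitorStatus) Result {
	state := Ready
	for _, m := range monitors {
		if m.FailedIntervals <= m.MaxAllowedFailureIntervals {
			continue // still within tolerance, does not affect state
		}
		if m.OnFailure == Error {
			return Error // Error dominates, no need to look further
		}
		state = Warning
	}
	return state
}

func main() {
	monitors := []MonitorStatus{
		{Name: "kube-system-ns-monitor", FailedIntervals: 4, MaxAllowedFailureIntervals: 3, OnFailure: Warning},
		{Name: "some-system-deployment-monitor", FailedIntervals: 1, MaxAllowedFailureIntervals: 3, OnFailure: Error},
	}
	// Only the first monitor has exceeded its limit, so the cluster
	// drops to Warning rather than Error.
	fmt.Println(ClusterState(monitors)) // prints "Warning"
}
```

One design choice worth noting: counting *consecutive* failed intervals (reset on success) avoids flapping the cluster state on a single missed probe.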
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Still need to think about this and see whether it's feasible and worth it.
Environment: