Closed jkomara closed 6 years ago
This issue just popped up for me as well. Not sure what caused it. All pods are running correctly with plenty of resources. I'm disabling the monitor for now.
fixed by #42
released: https://rubygems.org/gems/sensu-plugins-kubernetes/versions/3.0.1
Testing scenario
A master with a single k8s node that has 3 pods deployed. All pods are in "Running" state and check-kube-pods-running.rb passes.
Delete a pod and then immediately run check-kube-pods-running.rb. Receive the following error:
Cause
This happened because, in my local environment, there were not enough resources to schedule the new pod until the old pod was deleted. The check looks for pod.status.conditions[1].status == 'False'. Since the pod was still waiting to be scheduled, pod.status.conditions[1] did not exist, and the check threw this error.
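A minimal sketch of the failure mode, using OpenStruct stand-ins for the pod objects (the pod shapes and method names below are illustrative, not the plugin's actual code): a pod stuck in Pending can have fewer entries in status.conditions than a running pod, so indexing by position raises NoMethodError, while looking the condition up by type and treating a missing condition as a failure avoids the crash.

```ruby
require 'ostruct'

# Hypothetical pods mimicking the structure the check reads.
# A scheduled pod has a populated conditions array; a pod waiting
# to be scheduled may have only the PodScheduled condition.
running_pod = OpenStruct.new(
  status: OpenStruct.new(
    conditions: [
      OpenStruct.new(type: 'PodScheduled', status: 'True'),
      OpenStruct.new(type: 'Ready', status: 'True')
    ]
  )
)

pending_pod = OpenStruct.new(
  status: OpenStruct.new(
    conditions: [OpenStruct.new(type: 'PodScheduled', status: 'False')]
  )
)

# The failing pattern: indexing conditions[1] directly raises
# NoMethodError (undefined method `status' for nil) when the pod
# has not been scheduled yet and that entry is missing.
def failed_unsafe?(pod)
  pod.status.conditions[1].status == 'False'
end

# A nil-safe alternative: find the condition by type instead of by
# position, and treat a missing condition as a failure so an
# unscheduled pod surfaces as critical rather than crashing the check.
def failed_safe?(pod)
  condition = pod.status.conditions.find { |c| c.type == 'Ready' }
  condition.nil? || condition.status == 'False'
end
```

Under this sketch, failed_safe? flags the pending pod as a failure (so the check can raise a critical about the unscheduled pod) instead of raising NoMethodError the way the positional lookup does.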
Expected result
A critical alert declaring that the pod is not scheduled because of insufficient resources. In the real world, if I had a pod that could not be scheduled due to lack of resources until the existing pod was terminated, I would want to know about it.
If you need any more information please let me know.
Thank you.