We use kube2iam in our clusters, and the clusters host a number of critical services. Recently we noticed that the kube2iam container randomly stops listening on port 8181, which leads to issues on the client applications' end.
We are now planning to enable a liveness probe for the kube2iam DaemonSet.
Could you share your thoughts on a liveness probe for kube2iam? Is there any scenario where the liveness probe could trigger a catastrophic reaction at the cluster level? For instance, could `/healthz` on 8181 stop responding under some condition and cause the kube2iam container to be recycled, even though that condition would not have impacted the client applications' IAM auth?
Here is the snippet we are going to apply:
Also, is there a monitor we could put in place to capture 8181 `/healthz` failures, in case we decide not to enable the liveness probe?
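For reference, a minimal liveness probe for the kube2iam DaemonSet might look like the sketch below. This is an illustrative example, not the exact snippet from our rollout: it assumes kube2iam is running with its default `--app-port=8181` and exposes `/healthz` on that port, and the threshold values are placeholders we would tune before applying.

```yaml
# Illustrative liveness probe for the kube2iam container (assumes default port 8181).
# failureThreshold/periodSeconds are placeholder values to tune, so a brief blip
# does not immediately recycle the container on every node at once.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8181
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 3
```

A conservative `failureThreshold` matters here: since kube2iam runs as a DaemonSet and intercepts metadata traffic, an overly aggressive probe that restarts the container cluster-wide on a transient hiccup is exactly the catastrophic scenario the question raises.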