jtblin / kube2iam

kube2iam provides different AWS IAM roles for pods running on Kubernetes
BSD 3-Clause "New" or "Revised" License

any catastrophic reaction if we enable liveness probe? #333

Closed vickeyrihal1 closed 2 years ago

vickeyrihal1 commented 2 years ago

We use kube2iam in our clusters, which host a number of critical services. Recently we noticed that the kube2iam container randomly stops listening on port 8181, leading to issues for client applications.
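
(When this happens, a direct check from the affected node, e.g. `curl -sf http://<node-ip>:8181/healthz`, fails rather than returning 200; 8181 and /healthz are the defaults we run with.)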

We are now planning to enable a liveness probe for the kube2iam DaemonSet.

Could you share your thoughts on running a liveness probe against kube2iam? Is there any scenario where the probe could cause a catastrophic reaction at the cluster level, for instance a condition where 8181/healthz stops responding and triggers recycling of the kube2iam container, even though client applications' IAM auth would not have been impacted?

Here is the snippet we are going to apply:

```yaml
livenessProbe:
  failureThreshold: 30
  httpGet:
    path: /healthz
    port: 8181
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 30
  successThreshold: 1
  timeoutSeconds: 5
```
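
With these values, the kubelet would restart the container only after 30 consecutive probe failures at 30-second intervals, i.e. roughly 15 minutes of sustained /healthz failure, so the intent is that a transient blip on a single node should not trigger a restart.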

Also, is there a monitor we could put in place to capture 8181/healthz failures, in case we don't want to go with a liveness probe?
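
If we went the monitoring route instead, one minimal sketch (not something kube2iam ships; it assumes a Prometheus setup with a blackbox exporter reachable at `blackbox-exporter:9115` and an `http_2xx` module, and the names below are illustrative) would be to probe each node's 8181/healthz externally:

```yaml
# Sketch: probe kube2iam's /healthz on every node via the blackbox exporter.
# Assumes a blackbox exporter at blackbox-exporter:9115 with an http_2xx module.
- job_name: kube2iam-healthz
  metrics_path: /probe
  params:
    module: [http_2xx]
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    # Point the probe at each node's kube2iam hostPort.
    - source_labels: [__address__]
      regex: (.+):\d+
      replacement: http://$1:8181/healthz
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    # Route the actual scrape through the blackbox exporter.
    - target_label: __address__
      replacement: blackbox-exporter:9115
```

An alert on `probe_success{job="kube2iam-healthz"} == 0` sustained for a few minutes would then capture the same condition the liveness probe checks, without restarting anything.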