better-sachin opened this issue 3 years ago
Try adding an init container that sleeps for a while and see if that solves the problem. If the pod starts up too quickly it may cause problems with the role assumption; however, I believe that particular race was fixed a while ago.
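Something along these lines, as a sketch (the image and sleep duration are placeholders to tune):

```yaml
# Hypothetical pod spec fragment: delay the main container's start
# so kube2iam has time to observe the pod and prime the role.
spec:
  initContainers:
    - name: wait-for-kube2iam
      image: busybox:1.36                 # placeholder image
      command: ["sh", "-c", "sleep 10"]   # 10s is a guess; adjust as needed
  containers:
    - name: app
      image: my-app:latest                # placeholder for your application image
```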
If it is not that, see if you can assume the role from a node without kube2iam as a proxy. If you cannot, the problem lies in the role and trust configurations.
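One way to check that directly, assuming the AWS CLI is available on the node (the role ARN is a placeholder):

```sh
# Run on the node itself, bypassing kube2iam, to verify that the
# node's instance profile can assume the target role directly.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/my-pod-role \
  --role-session-name kube2iam-debug-test
```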
Looks like this might be a kube2iam issue with starting too many pods at once: https://github.com/jtblin/kube2iam/issues/136
We are running into an intermittent issue in our Kubernetes pods, which use kube2iam to provide IAM credentials to containers: the assumed role tries to assume itself.
The first thing our pod does is decrypt its secrets using SOPS.
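Concretely, the entrypoint runs something along these lines (the file name is illustrative):

```sh
# SOPS reads the KMS key from the file's metadata and, if that key
# entry specifies a role, calls sts:AssumeRole before decrypting.
sops --decrypt secrets.enc.yaml > /tmp/secrets.yaml
```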
We are getting this error message while decrypting:
We enabled verbose logging in kube2iam and see the following entries related to the pod hitting the error:
We are also seeing this a lot in our kube2iam pods:
Is this log line expected?
- Is this a race condition between kube2iam and SOPS, where SOPS tries to assume a role before kube2iam has fully assumed it?
- Is there a way to set the trust relationship of the role so that it can assume itself (see the sketch below)?
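For the second question, what we have in mind is a self-referencing trust policy roughly like this (the account ID and role name are placeholders; as far as we know, the role has to exist before it can name itself as a principal):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/my-pod-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```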