Describe the bug
Somehow I faced an issue with some EKS clusters where it was working perfectly before:
Kubernetes 1.23.x (3 different minor versions)
custom Ubuntu 22.04 EKS images
To Reproduce
Steps to reproduce the behavior:
Deploy 0.3.2 on the cluster
Pods will be in CrashLoopBackOff with the message "unable to load in-cluster config"
Expected behavior
From what I understand, the service account token file is needed (at least) to create the lease at startup.
There might be some policies (in 1.23, PSPs are still around) that prevent some workloads from having one by default.
I worked on a fix that simply adds automountServiceAccountToken: true to the pods (PR opened).
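For illustration, the fix can be sketched as a pod spec fragment like the one below. The metadata, container name, and image tag are placeholders; the only actual change is the automountServiceAccountToken field, which is a standard Kubernetes pod spec field:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # placeholder name
spec:
  # Explicitly mount the service account token so the client can find
  # /var/run/secrets/kubernetes.io/serviceaccount/token and build the
  # in-cluster config (needed to create the lease at startup), even
  # when a cluster policy disables automounting by default.
  automountServiceAccountToken: true
  containers:
    - name: app              # placeholder
      image: example:0.3.2   # placeholder
```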
Are you a GitHub Sponsor (Yes/No?)
[ ] Yes
[x] No
Screenshots or console output
Operating system and version:
Linux Ubuntu 22.04
List all possible solutions, and your suggested option
Merge the PR?
Additional context