Closed: TBeijen closed this issue 1 year ago
Deleting the specific efs-csi-node pod and then, after it got replaced, killing the statefulset pod that tried to mount the PVC seems to have 'fixed' things.
Not knowing the ins and outs of how the efs-csi-node components interact, it kinda looks like the pod got into a bad state, losing its ability (or the stored state) that lets it determine the region.
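A sketch of that workaround as kubectl commands, with some assumptions: the driver runs in the `kube-system` namespace with the default `app=efs-csi-node` label, and `efs-csi-node-xxxxx` / `my-statefulset-0` are placeholder pod names:

```shell
# Find the efs-csi-node DaemonSet pod on the affected node
# (namespace and label per the default EFS CSI driver manifests; may differ).
kubectl -n kube-system get pods -l app=efs-csi-node -o wide

# Delete that pod; the DaemonSet controller recreates it.
kubectl -n kube-system delete pod efs-csi-node-xxxxx

# Wait for the replacement to become Ready.
kubectl -n kube-system rollout status daemonset/efs-csi-node

# Then delete the statefulset pod stuck on the PVC mount;
# the StatefulSet controller recreates it and the mount is retried.
kubectl delete pod my-statefulset-0
```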
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind bug
What happened?
A PVC provisioned by aws-efs-csi-driver, previously mounting fine, fails to mount after pod restart due to an error seemingly originating from efs-utils:
What you expected to happen?
The PVC to mount fine.
How to reproduce it (as minimally and precisely as possible)?
Not (yet) clear: all other PVCs using the same storageclass, on this and identical clusters, run fine so far. The error seems specific to the volume, since it persists after replacing the pod (the PVC is used by a single statefulset pod).
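To narrow this down, one option is to check whether the node plugin can still resolve the region, by inspecting its logs and querying the instance metadata service from the node. A diagnostic sketch; the pod name is a placeholder, and the `efs-plugin` container name and the IMDSv2 token flow are assumptions based on the default driver manifests and EC2 metadata service:

```shell
# Logs of the efs-csi-node pod on the affected node
# (container name per the default aws-efs-csi-driver manifests).
kubectl -n kube-system logs efs-csi-node-xxxxx -c efs-plugin

# From the node itself: region as reported by instance metadata (IMDSv2).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/region
```

If the metadata query succeeds but the plugin logs still show a region error, that would support the bad-state theory above.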
Anything else we need to know?:
Environment
- Kubernetes version (use kubectl version): v1.22.9-eks-a64ea69
- Recently upgraded efs-csi-driver from 1.3.5 to 1.4.0. The affected PVC was created by 1.4.0, but so were others that work fine.
Errors
Storageclass: