ashwajce opened this issue 5 months ago (status: Open)
@ashwajce could you provide the kubelet logs from the node in question? Have you set any securityContext in the pod? This issue could be related to a slow chown operation if fsGroup is set in the pod's securityContext; one workaround is to set fsGroupChangePolicy: None in the PV.

fsGroupChangePolicy: indicates how the volume's ownership will be changed by the driver; pod securityContext.fsGroupChangePolicy is ignored.

https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md
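For illustration, a minimal sketch of that workaround on a statically provisioned PV, assuming the parameter is passed through csi.volumeAttributes as described in the linked driver-parameters.md. The resource group, storage account, share name, and volumeHandle below are placeholders, not values from this issue:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azurefile-nfs                      # placeholder name
spec:
  capacity:
    storage: 12Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: file.csi.azure.com
    volumeHandle: "<resource-group>#<storage-account>#<share-name>"   # placeholder unique ID
    volumeAttributes:
      protocol: nfs
      resourceGroup: "<resource-group>"       # placeholder
      storageAccount: "<storage-account>"     # placeholder
      shareName: "<share-name>"               # placeholder
      fsGroupChangePolicy: "None"             # skip the recursive ownership change on mount
```

With the driver-level parameter set to None, the per-file ownership pass over the share is skipped at mount time, which is the slow step the comment above points at.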
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
What happened: In an AKS environment, mounting a large NFS file share (12 Ti, about 11.45 million files) into a pod causes significant delays: the pod stays in "ContainerCreating" for about 3 days. This happens after syncing data onto the NFS share from a Linux VM; the share then binds to the PV and PVC correctly, but the pod does not start because an operation on the current PV already exists.
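For context on the fsGroup question in the maintainer comment above: the slow path being described is the recursive ownership change performed at mount time when the pod's securityContext sets fsGroup. A hypothetical pod spec of the shape that triggers it (names, IDs, and image are illustrative, not taken from this report) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-consumer                          # placeholder
spec:
  securityContext:
    fsGroup: 1000                             # requests group ownership handling on mounted volumes
    fsGroupChangePolicy: "OnRootMismatch"     # pod-level mitigation; ignored when the driver parameter is set
  containers:
    - name: app
      image: busybox:1.36                     # placeholder image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-azurefile-nfs          # placeholder claim name
```

With roughly 11.45 million files, a full ownership walk at mount time can plausibly take days, which would match the ContainerCreating delay described here.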
What you expected to happen: Pod starts with the volume attached immediately
How to reproduce it:
1. kubectl apply -f persistent-volumes.yaml --namespace <masked> (a sketch of what the PVC in that file might look like follows below)
2. Run the Helm chart that deploys the pod consuming the PVC
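The actual persistent-volumes.yaml was not shared; a guess at the PVC side of it, for a statically bound Azure Files NFS volume (names and size mirror the PV sketch in the comment above and are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile-nfs          # placeholder; must match the claim the pod references
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""             # empty string forces static binding to a pre-created PV
  volumeName: pv-azurefile-nfs     # placeholder; name of the statically created PV
  resources:
    requests:
      storage: 12Ti
```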
Actual result: kubectl describe pod shows the pod stuck in ContainerCreating with the "operation on the PV already exists" message described above.
Anything else we need to know?: The issue also occurs after we have synced data into this NFS share (rsync from a Linux VM in Azure with the NFS share mounted).
Once the PV has been mounted and no operations are ongoing, the PV and PVC can be removed. On the second run, the PVC is mounted and available immediately. This also holds if we switch to a different cluster.
Environment:
- Kubernetes version (use kubectl version): 1.28.3
- Kernel (e.g. uname -a):