Open ashwajce opened 5 days ago
@ashwajce could you provide the kubelet logs from the node in question? Have you set any securityContext in the pod? This issue could be related to a slow chown operation if fsGroup is set in the pod's securityContext; one workaround is to set fsGroupChangePolicy: None in the PV.
fsGroupChangePolicy: indicates how the volume's ownership will be changed by the driver; the pod's securityContext.fsGroupChangePolicy is ignored.
https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md
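A minimal sketch of the suggested workaround, assuming the parameter is set under `csi.volumeAttributes` as described in the linked driver-parameters doc. The PV name, capacity, and volumeHandle below are placeholders, not values from this issue:

```yaml
# Hypothetical PV sketch for the workaround above; all names are placeholders.
# fsGroupChangePolicy here is the azurefile-csi-driver parameter, which
# (per the comment above) overrides the pod's securityContext.fsGroupChangePolicy.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azurefile-nfs        # placeholder
spec:
  capacity:
    storage: 12Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: file.csi.azure.com
    volumeHandle: unique-volume-id   # placeholder
    volumeAttributes:
      protocol: nfs
      fsGroupChangePolicy: "None"    # skip the recursive ownership change
```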
What happened: In an AKS environment, mounting a large NFS file share (12Ti, 11.45M files) into a pod causes significant delays, with the pod stuck in "ContainerCreating" status for about 3 days. This occurs after syncing data to the NFS share via a Linux VM; after the sync, the NFS share does mount correctly in the PV and PVC, but the pod does not start because an operation on the current PV already exists.
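For reference, the slow recursive ownership change suspected above is triggered whenever the pod spec sets fsGroup. A minimal hypothetical fragment (pod name, image, group ID, and claim name are all placeholders):

```yaml
# Hypothetical pod sketch. With fsGroup set, Kubernetes recursively changes
# ownership/permissions of the volume at mount time, which can take extremely
# long on a share with ~11.45M files.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # placeholder
spec:
  securityContext:
    fsGroup: 1000                # any non-nil fsGroup triggers the recursive chown
  containers:
    - name: app
      image: busybox             # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-azurefile-nfs   # placeholder
```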
What you expected to happen: The pod starts with the volume attached immediately.
How to reproduce it:
1. kubectl apply -f persistent-volumes.yaml --namespace <masked>
2. Run the Helm chart with the pod
Actual result: message from kubectl describe pod:
Anything else we need to know?: The issue also occurs after we have synced data to this NFS share (rsync from a Linux VM in Azure with the NFS share mounted).
Once the PV has been mounted and no operations are ongoing, the PV and PVC can be removed. On a second run the PVC is mounted and available immediately. It remains fine even if we switch clusters.
Environment:
- Kubernetes version (kubectl version): 1.28.3
- Kernel (uname -a):