Helm chart upgrade from 2.1.5 to 3.0.8 fails when dynamically provisioning volumes
/kind bug
What happened?
Updated the helm chart from 2.1.5 to 3.0.8, and dynamic provisioning of volumes now fails. When installing chart 3.0.8 in a fresh environment (EKS 1.30), dynamic provisioning works fine. However, when upgrading the chart in an existing environment (EKS 1.28), where it previously worked fine with chart 2.1.5, I started getting the issue below.
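For reference, a sketch of the upgrade step (the repo alias aws-efs-csi-driver, the release name, and the kube-system namespace are assumptions; the actual values may differ):

helm repo update
# upgrade the existing release in place to chart version 3.0.8
helm upgrade aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --version 3.0.8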
Logs (efs-csi-controller)
E0923 19:37:37.726947 1 driver.go:107] GRPC error: rpc error: code = Internal desc = Failed to fetch Access Points or Describe File System: List Access Points failed: RequestCanceled: request context canceled
caused by: context canceled
E0923 19:39:33.198011 1 driver.go:107] GRPC error: rpc error: code = Internal desc = Failed to fetch Access Points or Describe File System: List Access Points failed: RequestCanceled: request context canceled
caused by: context deadline exceeded
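Since the error indicates the List Access Points call against the EFS API is timing out, a minimal sketch of a direct check using the same credentials the controller assumes (the region is a placeholder):

# run with the IAM role used by the efs-csi-controller service account
aws efs describe-access-points --file-system-id fs-xxxxxxx --region <region>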
Storage class
allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  name: efs-sc-test
mountOptions:
  - tls
parameters:
  directoryPerms: "700"
  ensureUniqueDirectory: "false"
  fileSystemId: fs-xxxxxxx
  gidRangeEnd: "2000"
  gidRangeStart: "1000"
  provisioningMode: efs-ap
  subPathPattern: /${.PVC.name}
  basePath: "/dynamic_provisioning"
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
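To confirm the class live on the cluster after the upgrade matches this manifest (plain kubectl; only the class name is taken from above):

kubectl get storageclass efs-sc-test -o yaml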
PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-test-pvc1
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 5Gi
  storageClassName: efs-sc-test
  volumeName: null
EOF
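To see where provisioning stalls, the PVC events can be inspected with standard kubectl (nothing driver-specific assumed):

# shows provisioning events emitted for this claim
kubectl describe pvc efs-test-pvc1
kubectl get events --field-selector involvedObject.name=efs-test-pvc1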
What you expected to happen?
Dynamic provisioning of the volume and creation of the access point in EFS.
How to reproduce it (as minimally and precisely as possible)?
Install the driver with helm chart 2.1.5 on an existing EKS 1.28 cluster, upgrade the release to chart 3.0.8, then create a PVC using the storage class above; provisioning fails with the controller errors shown in the logs.
Anything else we need to know?:
Environment
Kubernetes version (use kubectl version): EKS 1.28
efs_csi_driver_image_version: "v2.0.7"
efs_livenessprobe_image_version: "v2.13.0-eks-1-30-8"
efs_nodedriverregistrar_image_version: "v2.11.0-eks-1-30-8"
efs_csiprovisioner_image_version: "v5.0.1-eks-1-30-8"
Please also attach debug logs to help us better diagnose
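Debug logs can be collected with something like the following, assuming the chart's default controller labels and container names (app=efs-csi-controller; efs-plugin and csi-provisioner):

# controller driver container logs
kubectl logs -n kube-system -l app=efs-csi-controller -c efs-plugin --timestamps
# external-provisioner sidecar logs
kubectl logs -n kube-system -l app=efs-csi-controller -c csi-provisioner --timestamps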