The IBM Spectrum Scale Container Storage Interface (CSI) project enables container orchestrators, such as Kubernetes and OpenShift, to manage the life-cycle of persistent storage.
When performing a bind mount, CSI should first check whether the GPFS filesystem is mounted on the node, and return an error message if it is not. This issue is currently seen with CRI-O 1.28.1 on a Kubernetes 1.28 configuration.
How to Reproduce?
Install CSI 2.10.0 on k8s 1.28 with CRI-O 1.28.1
[root@rhle79-master ~]# oc get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rhle79-master.fyre.ibm.com Ready control-plane 21h v1.28.4 10.11.42.157 <none> Red Hat Enterprise Linux Server 7.9 (Maipo) 3.10.0-1160.105.1.el7.x86_64 cri-o://1.28.1
rhle79-worker-1.fyre.ibm.com Ready <none> 21h v1.28.4 10.11.43.160 <none> Red Hat Enterprise Linux Server 7.9 (Maipo) 3.10.0-1160.105.1.el7.x86_64 cri-o://1.28.1
rhle79-worker-2.fyre.ibm.com Ready <none> 21h v1.28.4 10.11.44.239 <none> Red Hat Enterprise Linux Server 7.9 (Maipo) 3.10.0-1160.105.1.el7.x86_64 cri-o://1.28.1
[root@rhle79-master ~]#
[root@rhle79-master ~]# oc get pods
NAME READY STATUS RESTARTS AGE
ibm-spectrum-scale-csi-6m8nj 3/3 Running 0 20h
ibm-spectrum-scale-csi-attacher-67ffb9c79d-kl79q 1/1 Running 0 20h
ibm-spectrum-scale-csi-attacher-67ffb9c79d-l4fs5 1/1 Running 0 20h
ibm-spectrum-scale-csi-bfmgq 3/3 Running 0 20h
ibm-spectrum-scale-csi-operator-848b5dfc7-fndxh 1/1 Running 0 20h
ibm-spectrum-scale-csi-provisioner-7fddb5dccb-sc74d 1/1 Running 0 20h
ibm-spectrum-scale-csi-resizer-8b5855c6b-pptw2 1/1 Running 0 20h
ibm-spectrum-scale-csi-snapshotter-567b79585-mctst 1/1 Running 0 20h
[root@rhle79-master ~]# oc get cso
NAME VERSION SUCCESS
ibm-spectrum-scale-csi 2.10.0 True
[root@rhle79-master ~]# oc describe pod | grep quay
Image: quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:57b4ee494ca48342d1ffaf22a166286202b0406b88316e4dcbe87212df6ca8f0
Image: quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:57b4ee494ca48342d1ffaf22a166286202b0406b88316e4dcbe87212df6ca8f0
Image: quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-operator@sha256:e3d2f9fb68b2d7cd1faf84002bb73626da10bed5d81f91945a592d41893e2fda
CSI_DRIVER_IMAGE: quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:57b4ee494ca48342d1ffaf22a166286202b0406b88316e4dcbe87212df6ca8f0
Last login: Thu Dec 7 04:00:00 2023 from 10.11.42.157
[root@rhle79-worker-2 ~]# mount | grep pod
tmpfs on /var/lib/kubelet/pods/016f204e-324a-41df-99ba-050193f3f86e/volumes/kubernetes.io~projected/kube-api-access-mkgx4 type tmpfs (rw,relatime,size=7905996k)
tmpfs on /var/lib/kubelet/pods/a98e57e3-f420-4546-b789-abb21bddcf9c/volumes/kubernetes.io~projected/kube-api-access-mr6q7 type tmpfs (rw,relatime,size=7905996k)
tmpfs on /var/lib/kubelet/pods/f958461e-6a4b-410a-b0d1-08062c27df6e/volumes/kubernetes.io~projected/kube-api-access-8lkdw type tmpfs (rw,relatime,size=174080k)
tmpfs on /var/lib/kubelet/pods/a84b78a4-e763-4651-8bda-4259e86ceebe/volumes/kubernetes.io~projected/kube-api-access-n9xkw type tmpfs (rw,relatime,size=7905996k)
tmpfs on /var/lib/kubelet/pods/d5ad4f93-42b8-4c19-a8f8-61b7a905afd1/volumes/kubernetes.io~projected/kube-api-access-nl6wl type tmpfs (rw,relatime,size=7905996k)
tmpfs on /var/lib/kubelet/pods/a63c3893-39a3-4af9-9e8b-1f9b92cb7801/volumes/kubernetes.io~secret/10232879916520669572-secret type tmpfs (rw,relatime,size=2252800k)
tmpfs on /var/lib/kubelet/pods/a63c3893-39a3-4af9-9e8b-1f9b92cb7801/volumes/kubernetes.io~projected/kube-api-access-jg4l8 type tmpfs (rw,relatime,size=2252800k)
tmpfs on /var/lib/kubelet/pods/574e6a43-15de-4cec-aae8-bcf54bf29610/volumes/kubernetes.io~projected/kube-api-access-vzzg2 type tmpfs (rw,relatime,size=819200k)
tmpfs on /var/lib/kubelet/pods/f81ab19b-5fb1-48bd-bad7-b901be2439cb/volumes/kubernetes.io~projected/kube-api-access-qh5t8 type tmpfs (rw,relatime,size=7905996k)
tmpfs on /var/lib/kubelet/pods/8b12ef14-54ce-4a86-bd72-fe472d41f75c/volumes/kubernetes.io~projected/kube-api-access-8gvfd type tmpfs (rw,relatime,size=7905996k)
[root@rhle79-worker-2 ~]# mount | grep gpfs
fs2 on /ibm/fs2 type gpfs (rw,relatime)
fs1 on /ibm/fs1 type gpfs (rw,relatime)
After upgrading CRI-O to the next version, the issue can no longer be reproduced; the driver also already includes a check that verifies the filesystem is present on the node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale-csi-advance
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "fs1"
  version: "2"
reclaimPolicy: Delete
apiVersion: v1
kind: Pod
metadata:
  name: csi-scale-fsetdemo-pod-2
  labels:
    app: nginx
spec:
  containers:
[root@rhle79-master ~]# oc exec -it csi-scale-fsetdemo-pod-2 -- mount | grep /usr/share/nginx/html/scale
/dev/mapper/rhel-root on /usr/share/nginx/html/scale type xfs (rw,relatime,attr2,inode64,noquota)
[root@rhle79-master ~]# oc get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-scale-fsetdemo-pod-2 1/1 Running 0 17m 10.244.150.135 rhle79-worker-2.fyre.ibm.com
ibm-spectrum-scale-csi-6m8nj 3/3 Running 0 21h 10.11.43.160 rhle79-worker-1.fyre.ibm.com
ibm-spectrum-scale-csi-attacher-67ffb9c79d-kl79q 1/1 Running 0 21h 10.244.175.134 rhle79-worker-1.fyre.ibm.com
ibm-spectrum-scale-csi-attacher-67ffb9c79d-l4fs5 1/1 Running 0 21h 10.244.150.132 rhle79-worker-2.fyre.ibm.com
ibm-spectrum-scale-csi-bfmgq 3/3 Running 0 21h 10.11.44.239 rhle79-worker-2.fyre.ibm.com
ibm-spectrum-scale-csi-operator-848b5dfc7-fndxh 1/1 Running 0 21h 10.244.175.131 rhle79-worker-1.fyre.ibm.com
ibm-spectrum-scale-csi-provisioner-7fddb5dccb-sc74d 1/1 Running 0 21h 10.244.175.132 rhle79-worker-1.fyre.ibm.com
ibm-spectrum-scale-csi-resizer-8b5855c6b-pptw2 1/1 Running 0 21h 10.244.175.133 rhle79-worker-1.fyre.ibm.com
ibm-spectrum-scale-csi-snapshotter-567b79585-mctst 1/1 Running 0 21h 10.244.175.135 rhle79-worker-1.fyre.ibm.com
[root@rhle79-master ~]# oc logs ibm-spectrum-scale-csi-bfmgq | grep pvc-4fa0e250-fedd-4d22-adaa-ad29a7ab4965
I1208 07:35:31.351277 1 nodeserver.go:142] [cc7e35a8-d66c-4948-b2e7-adee5ec66d75] NodePublishVolume - request: &csi.NodePublishVolumeRequest{VolumeId:"1;1;10232879916520669572;A02B0B0A:6571824D;68ac434a-896c-4af0-864a-59acafbef856-ibm-spectrum-scale-csi-driver;pvc-4fa0e250-fedd-4d22-adaa-ad29a7ab4965;/ibm/fs1/68ac434a-896c-4af0-864a-59acafbef856-ibm-spectrum-scale-csi-driver/pvc-4fa0e250-fedd-4d22-adaa-ad29a7ab4965", PublishContext:map[string]string(nil), StagingTargetPath:"", TargetPath:"/var/lib/kubelet/pods/8b12ef14-54ce-4a86-bd72-fe472d41f75c/volumes/kubernetes.io~csi/pvc-4fa0e250-fedd-4d22-adaa-ad29a7ab4965/mount", VolumeCapability:(*csi.VolumeCapability)(0xc00043a8c0), Readonly:false, Secrets:map[string]string(nil), VolumeContext:map[string]string{"csi.storage.k8s.io/ephemeral":"false", "csi.storage.k8s.io/pod.name":"csi-scale-fsetdemo-pod-2", "csi.storage.k8s.io/pod.namespace":"ibm-spectrum-scale-csi-driver", "csi.storage.k8s.io/pod.uid":"8b12ef14-54ce-4a86-bd72-fe472d41f75c", "csi.storage.k8s.io/pv/name":"pvc-4fa0e250-fedd-4d22-adaa-ad29a7ab4965", "csi.storage.k8s.io/pvc/name":"scale-advance-pvc-1", "csi.storage.k8s.io/pvc/namespace":"ibm-spectrum-scale-csi-driver", "csi.storage.k8s.io/serviceAccount.name":"default", "storage.kubernetes.io/csiProvisionerIdentity":"1701945461958-9378-spectrumscale.csi.ibm.com", "version":"2", "volBackendFs":"fs1"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I1208 07:35:31.385297 1 nodeserver.go:255] [cc7e35a8-d66c-4948-b2e7-adee5ec66d75] NodePublishVolume - the target directory [/var/lib/kubelet/pods/8b12ef14-54ce-4a86-bd72-fe472d41f75c/volumes/kubernetes.io~csi/pvc-4fa0e250-fedd-4d22-adaa-ad29a7ab4965/mount] is created successfully
I1208 07:35:31.421762 1 nodeserver.go:286] [cc7e35a8-d66c-4948-b2e7-adee5ec66d75] NodePublishVolume - successfully mounted [/var/lib/kubelet/pods/8b12ef14-54ce-4a86-bd72-fe472d41f75c/volumes/kubernetes.io~csi/pvc-4fa0e250-fedd-4d22-adaa-ad29a7ab4965/mount] using BINDMOUNT
Expected behavior:
The driver logs above show that NodePublishVolume succeeded, but the resulting mount is not GPFS (inside the pod it resolves to the node's xfs root filesystem). In this case CSI should return an error message instead of reporting success.
Logs:
/scale-csi/D.1070 csisnap.tar.gz