IBM / ibm-spectrum-scale-csi

The IBM Spectrum Scale Container Storage Interface (CSI) project enables container orchestrators, such as Kubernetes and OpenShift, to manage the life-cycle of persistent storage.
Apache License 2.0

CSI gpfs mount check is not there for pod #1070

Closed: saurabhwani5 closed this issue 8 months ago

saurabhwani5 commented 10 months ago

Describe the bug

When performing a bind mount, CSI should check whether the gpfs filesystem is mounted before mounting, and should return an error message if it is not. This issue is currently seen on a CRI-O 1.28.1 with Kubernetes 1.28 configuration.
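For context, this condition can be checked manually on a worker node before the driver attempts the bind mount. A minimal sketch, assuming the filesystem is named fs1 as in the reproduction below; an empty result from either command means the gpfs mount is missing:

    # List gpfs-type mounts on the node
    grep -w gpfs /proc/mounts

    # Spectrum Scale's own view of where fs1 is mounted
    mmlsmount fs1 -L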

How to Reproduce?

  1. Install CSI 2.10.0 on k8s 1.28 with CRI-O 1.28.1
    [root@rhle79-master ~]# oc get nodes -owide
    NAME                           STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION                 CONTAINER-RUNTIME
    rhle79-master.fyre.ibm.com     Ready    control-plane   21h   v1.28.4   10.11.42.157   <none>        Red Hat Enterprise Linux Server 7.9 (Maipo)   3.10.0-1160.105.1.el7.x86_64   cri-o://1.28.1
    rhle79-worker-1.fyre.ibm.com   Ready    <none>          21h   v1.28.4   10.11.43.160   <none>        Red Hat Enterprise Linux Server 7.9 (Maipo)   3.10.0-1160.105.1.el7.x86_64   cri-o://1.28.1
    rhle79-worker-2.fyre.ibm.com   Ready    <none>          21h   v1.28.4   10.11.44.239   <none>        Red Hat Enterprise Linux Server 7.9 (Maipo)   3.10.0-1160.105.1.el7.x86_64   cri-o://1.28.1
    [root@rhle79-master ~]#
    [root@rhle79-master ~]# oc get pods
    NAME                                                  READY   STATUS    RESTARTS   AGE
    ibm-spectrum-scale-csi-6m8nj                          3/3     Running   0          20h
    ibm-spectrum-scale-csi-attacher-67ffb9c79d-kl79q      1/1     Running   0          20h
    ibm-spectrum-scale-csi-attacher-67ffb9c79d-l4fs5      1/1     Running   0          20h
    ibm-spectrum-scale-csi-bfmgq                          3/3     Running   0          20h
    ibm-spectrum-scale-csi-operator-848b5dfc7-fndxh       1/1     Running   0          20h
    ibm-spectrum-scale-csi-provisioner-7fddb5dccb-sc74d   1/1     Running   0          20h
    ibm-spectrum-scale-csi-resizer-8b5855c6b-pptw2        1/1     Running   0          20h
    ibm-spectrum-scale-csi-snapshotter-567b79585-mctst    1/1     Running   0          20h
    [root@rhle79-master ~]# oc get cso
    NAME                     VERSION   SUCCESS
    ibm-spectrum-scale-csi   2.10.0    True
    [root@rhle79-master ~]# oc describe pod | grep quay
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:57b4ee494ca48342d1ffaf22a166286202b0406b88316e4dcbe87212df6ca8f0
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:57b4ee494ca48342d1ffaf22a166286202b0406b88316e4dcbe87212df6ca8f0
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-operator@sha256:e3d2f9fb68b2d7cd1faf84002bb73626da10bed5d81f91945a592d41893e2fda
      CSI_DRIVER_IMAGE:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:57b4ee494ca48342d1ffaf22a166286202b0406b88316e4dcbe87212df6ca8f0
  2. Create the StorageClass and PVC as follows:
    
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: scale-advance-pvc-1
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: ibm-spectrum-scale-csi-advance

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ibm-spectrum-scale-csi-advance
    provisioner: spectrumscale.csi.ibm.com
    parameters:
      volBackendFs: "fs1"
      version: "2"
    reclaimPolicy: Delete
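
    The manifests can then be applied and the binding checked, for example (file names here are placeholders):

    kubectl apply -f storageclass.yaml -f pvc.yaml
    # The PVC should report STATUS Bound once provisioning succeeds
    kubectl get pvc scale-advance-pvc-1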

  3. Create the Pod as follows:

    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-scale-fsetdemo-pod-2
      labels:
        app: nginx
    spec:
      containers:
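The container section of the snippet above was truncated in the report. A minimal sketch of a pod consuming the PVC from step 2, assuming an nginx container and a hypothetical mount path:

    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-scale-fsetdemo-pod-2
      labels:
        app: nginx
    spec:
      containers:
        - name: web-server
          image: nginx
          volumeMounts:
            - name: mypvc
              mountPath: /usr/share/nginx/html/scale   # hypothetical path
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: scale-advance-pvc-1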

Expected behavior:

From the driver logs it can be seen that NodePublishVolume succeeds even though the mount is not gpfs; in this case CSI should return an error message.
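A quick way to observe the mismatch on the node is to inspect the filesystem type backing the volume's target path. A sketch, with /mnt/fs1 as an assumed mount point:

    # If gpfs is not mounted, this prints the parent filesystem type (e.g. xfs)
    # rather than gpfs, which is the condition the driver should detect and fail on
    findmnt -n -o FSTYPE --target /mnt/fs1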

Logs:

/scale-csi/D.1070 csisnap.tar.gz

saurabhwani5 commented 8 months ago

After upgrading CRI-O to the next version, the issue is no longer reproducible. In addition, a check that verifies whether the filesystem is present on the node has already been added.