ceph / ceph-csi

CSI driver for Ceph
Apache License 2.0

ReadWriteOnce CephFS PVC is not enforced to be mounted by only one node #4191

Closed adux6991 closed 1 year ago

adux6991 commented 1 year ago

Describe the bug

ReadWriteOnce CephFS PVC is not enforced to be mounted by only one node.

Environment details

Steps to reproduce

  1. Create a ReadWriteOnce PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cephfs-sc
  2. Create two Pods mounting the PVC on different nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: node1
spec:
  nodeSelector:
    kubernetes.io/hostname: node1
  containers:
    - name: ubuntu
      image: ubuntu:22.04
      command:
        - sh
      args:
        - "-c"
        - sleep 3600s
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 0.5
          memory: 100Mi
      volumeMounts:
        - mountPath: /data
          name: datadir
  volumes:
    - name: datadir
      persistentVolumeClaim:
        claimName: cephfs-pvc
---
apiVersion: v1
kind: Pod
metadata:
  name: node2
spec:
  nodeSelector:
    kubernetes.io/hostname: node2
  containers:
    - name: ubuntu
      image: ubuntu:22.04
      command:
        - sh
      args:
        - "-c"
        - sleep 3600s
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 0.5
          memory: 100Mi
      volumeMounts:
        - mountPath: /data
          name: datadir
  volumes:
    - name: datadir
      persistentVolumeClaim:
        claimName: cephfs-pvc
  3. Write to the PVC from one Pod and check the file from the other Pod, and vice versa
$ kubectl exec -it node1 -- /bin/bash
root@node1:/# echo Hello > /data/hello.txt

$ kubectl exec -it node2 -- /bin/bash
root@node2:/# cat /data/hello.txt
Hello
root@node2:/# echo Bye > /data/bye.txt

$ kubectl exec -it node1 -- /bin/bash
root@node1:/# cat /data/bye.txt
Bye

Actual results

Two Pods on different nodes could mount the same ReadWriteOnce PVC.

Expected behavior

Two Pods on different nodes should not be able to mount the same ReadWriteOnce PVC.

Logs

csi-cephfsplugin container log on node1

I1012 06:35:19.425317 3318604 utils.go:195] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC call: /csi.v1.Node/NodeStageVolume
I1012 06:35:19.425511 3318604 utils.go:206] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["debug"]}},"access_mode":{"mode":7}},"volume_context":{"clusterID":"04a5e96c-cc3a-11ed-978b-f1c93a24551d","fsName":"k8s_hdd","mounter":"kernel","pool":"ecpool-k2-m1-hdd","storage.kubernetes.io/csiProvisionerIdentity":"1690872337553-8081-cephfs-hdd.csi.ceph.com","subvolumeName":"csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b","subvolumePath":"/volumes/csi/csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b/eb063f4c-f20a-41f0-9d83-f24f9ad985fb"},"volume_id":"0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b"}
I1012 06:35:19.433040 3318604 omap.go:88] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b got omap values: (pool="cephfs.k8s_hdd.meta", namespace="csi", name="csi.volume.88448034-fdd1-4e8c-9178-7eb9ec3df86b"): map[csi.imagename:csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b csi.volname:pvc-6ce2517f-9b43-47ab-8775-124049f0d567 csi.volume.owner:xuda]
I1012 06:35:19.455312 3318604 nodeserver.go:293] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b cephfs: mounting volume 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b with Ceph kernel client
I1012 06:35:19.463030 3318604 cephcmds.go:105] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b command succeeded: modprobe [ceph]
I1012 06:35:19.555319 3318604 cephcmds.go:105] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b command succeeded: mount [-t ceph ****:6789,****:6789:/volumes/csi/csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b/eb063f4c-f20a-41f0-9d83-f24f9ad985fb /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount -o name=k8s_hdd,secretfile=/tmp/csi/keys/keyfile-1129524025,mds_namespace=k8s_hdd,_netdev]
I1012 06:35:19.555387 3318604 nodeserver.go:248] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b cephfs: successfully mounted volume 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b to /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount
I1012 06:35:19.555444 3318604 utils.go:212] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC response: {}
I1012 06:35:19.556547 3318604 utils.go:195] ID: 1774197 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:19.556577 3318604 utils.go:206] ID: 1774197 GRPC request: {}
I1012 06:35:19.556699 3318604 utils.go:212] ID: 1774197 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1012 06:35:19.557459 3318604 utils.go:195] ID: 1774198 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:19.557485 3318604 utils.go:206] ID: 1774198 GRPC request: {}
I1012 06:35:19.557573 3318604 utils.go:212] ID: 1774198 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1012 06:35:19.558279 3318604 utils.go:195] ID: 1774199 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:19.558305 3318604 utils.go:206] ID: 1774199 GRPC request: {}
I1012 06:35:19.558395 3318604 utils.go:212] ID: 1774199 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1012 06:35:19.559174 3318604 utils.go:195] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC call: /csi.v1.Node/NodePublishVolume
I1012 06:35:19.559338 3318604 utils.go:206] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount","target_path":"/var/lib/kubelet/pods/b6221792-f1b6-4e7a-9bd0-278dfb6f31ee/volumes/kubernetes.io~csi/pvc-6ce2517f-9b43-47ab-8775-124049f0d567/mount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["debug"]}},"access_mode":{"mode":7}},"volume_context":{"clusterID":"04a5e96c-cc3a-11ed-978b-f1c93a24551d","fsName":"k8s_hdd","mounter":"kernel","pool":"ecpool-k2-m1-hdd","storage.kubernetes.io/csiProvisionerIdentity":"1690872337553-8081-cephfs-hdd.csi.ceph.com","subvolumeName":"csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b","subvolumePath":"/volumes/csi/csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b/eb063f4c-f20a-41f0-9d83-f24f9ad985fb"},"volume_id":"0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b"}
I1012 06:35:19.566952 3318604 cephcmds.go:105] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b command succeeded: mount [-o bind,_netdev,debug /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount /var/lib/kubelet/pods/b6221792-f1b6-4e7a-9bd0-278dfb6f31ee/volumes/kubernetes.io~csi/pvc-6ce2517f-9b43-47ab-8775-124049f0d567/mount]
I1012 06:35:19.566998 3318604 nodeserver.go:523] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b cephfs: successfully bind-mounted volume 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b to /var/lib/kubelet/pods/b6221792-f1b6-4e7a-9bd0-278dfb6f31ee/volumes/kubernetes.io~csi/pvc-6ce2517f-9b43-47ab-8775-124049f0d567/mount
I1012 06:35:19.567030 3318604 utils.go:212] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC response: {}
I1012 06:35:22.678143 3318604 utils.go:195] ID: 1774201 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:22.678236 3318604 utils.go:206] ID: 1774201 GRPC request: {}
I1012 06:35:22.678430 3318604 utils.go:212] ID: 1774201 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1012 06:35:22.680089 3318604 utils.go:195] ID: 1774202 GRPC call: /csi.v1.Node/NodeGetVolumeStats

csi-cephfsplugin container log on node2

I1012 06:35:19.425317 3318604 utils.go:195] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC call: /csi.v1.Node/NodeStageVolume
I1012 06:35:19.425511 3318604 utils.go:206] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["debug"]}},"access_mode":{"mode":7}},"volume_context":{"clusterID":"04a5e96c-cc3a-11ed-978b-f1c93a24551d","fsName":"k8s_hdd","mounter":"kernel","pool":"ecpool-k2-m1-hdd","storage.kubernetes.io/csiProvisionerIdentity":"1690872337553-8081-cephfs-hdd.csi.ceph.com","subvolumeName":"csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b","subvolumePath":"/volumes/csi/csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b/eb063f4c-f20a-41f0-9d83-f24f9ad985fb"},"volume_id":"0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b"}
I1012 06:35:19.433040 3318604 omap.go:88] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b got omap values: (pool="cephfs.k8s_hdd.meta", namespace="csi", name="csi.volume.88448034-fdd1-4e8c-9178-7eb9ec3df86b"): map[csi.imagename:csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b csi.volname:pvc-6ce2517f-9b43-47ab-8775-124049f0d567 csi.volume.owner:xuda]
I1012 06:35:19.455312 3318604 nodeserver.go:293] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b cephfs: mounting volume 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b with Ceph kernel client
I1012 06:35:19.463030 3318604 cephcmds.go:105] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b command succeeded: modprobe [ceph]
I1012 06:35:19.555319 3318604 cephcmds.go:105] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b command succeeded: mount [-t ceph ****:6789,****:6789:/volumes/csi/csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b/eb063f4c-f20a-41f0-9d83-f24f9ad985fb /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount -o name=k8s_hdd,secretfile=/tmp/csi/keys/keyfile-1129524025,mds_namespace=k8s_hdd,_netdev]
I1012 06:35:19.555387 3318604 nodeserver.go:248] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b cephfs: successfully mounted volume 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b to /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount
I1012 06:35:19.555444 3318604 utils.go:212] ID: 1774196 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC response: {}
I1012 06:35:19.556547 3318604 utils.go:195] ID: 1774197 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:19.556577 3318604 utils.go:206] ID: 1774197 GRPC request: {}
I1012 06:35:19.556699 3318604 utils.go:212] ID: 1774197 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1012 06:35:19.557459 3318604 utils.go:195] ID: 1774198 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:19.557485 3318604 utils.go:206] ID: 1774198 GRPC request: {}
I1012 06:35:19.557573 3318604 utils.go:212] ID: 1774198 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1012 06:35:19.558279 3318604 utils.go:195] ID: 1774199 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:19.558305 3318604 utils.go:206] ID: 1774199 GRPC request: {}
I1012 06:35:19.558395 3318604 utils.go:212] ID: 1774199 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1012 06:35:19.559174 3318604 utils.go:195] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC call: /csi.v1.Node/NodePublishVolume
I1012 06:35:19.559338 3318604 utils.go:206] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount","target_path":"/var/lib/kubelet/pods/b6221792-f1b6-4e7a-9bd0-278dfb6f31ee/volumes/kubernetes.io~csi/pvc-6ce2517f-9b43-47ab-8775-124049f0d567/mount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["debug"]}},"access_mode":{"mode":7}},"volume_context":{"clusterID":"04a5e96c-cc3a-11ed-978b-f1c93a24551d","fsName":"k8s_hdd","mounter":"kernel","pool":"ecpool-k2-m1-hdd","storage.kubernetes.io/csiProvisionerIdentity":"1690872337553-8081-cephfs-hdd.csi.ceph.com","subvolumeName":"csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b","subvolumePath":"/volumes/csi/csi-vol-88448034-fdd1-4e8c-9178-7eb9ec3df86b/eb063f4c-f20a-41f0-9d83-f24f9ad985fb"},"volume_id":"0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b"}
I1012 06:35:19.566952 3318604 cephcmds.go:105] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b command succeeded: mount [-o bind,_netdev,debug /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs-hdd.csi.ceph.com/42fde0f64646433dabda48976ee8d2d8d1351d5660c8c169565b5705da8747fa/globalmount /var/lib/kubelet/pods/b6221792-f1b6-4e7a-9bd0-278dfb6f31ee/volumes/kubernetes.io~csi/pvc-6ce2517f-9b43-47ab-8775-124049f0d567/mount]
I1012 06:35:19.566998 3318604 nodeserver.go:523] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b cephfs: successfully bind-mounted volume 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b to /var/lib/kubelet/pods/b6221792-f1b6-4e7a-9bd0-278dfb6f31ee/volumes/kubernetes.io~csi/pvc-6ce2517f-9b43-47ab-8775-124049f0d567/mount
I1012 06:35:19.567030 3318604 utils.go:212] ID: 1774200 Req-ID: 0001-0024-04a5e96c-cc3a-11ed-978b-f1c93a24551d-0000000000000002-88448034-fdd1-4e8c-9178-7eb9ec3df86b GRPC response: {}
I1012 06:35:22.678143 3318604 utils.go:195] ID: 1774201 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1012 06:35:22.678236 3318604 utils.go:206] ID: 1774201 GRPC request: {}
Madhu-1 commented 1 year ago

This happens because we don't have the attacher sidecar running for CephFS anymore (https://github.com/ceph/ceph-csi/pull/3149). In any case, Kubernetes should take care of it: as the name ReadWriteOnce implies, Pods using an RWO PVC should not be allowed to start on two different nodes.
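
A minimal sketch of the stricter alternative, assuming a Kubernetes release and a ceph-csi version recent enough to support the ReadWriteOncePod access mode: with that mode the single-Pod (and therefore single-node) constraint is enforced by the Kubernetes scheduler and kubelet rather than by the CSI driver. The storage class name is the one from the reproduction above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-single-pod
spec:
  accessModes:
    # ReadWriteOncePod restricts the claim to a single Pod (and hence a single
    # node); the enforcement happens in Kubernetes, not in the CSI driver.
    - ReadWriteOncePod
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cephfs-sc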

adux6991 commented 1 year ago

Thanks for your quick response! FYI, I found an issue in another CSI driver which also claims that neither Kubernetes nor the CSI driver applies the enforcement.

Although I still think the CSI driver should be responsible for the enforcement (if the attacher sidecar could do this, why was it regarded as unnecessary in https://github.com/ceph/ceph-csi/pull/3149?), please close this as you see fit.
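
For context, a hedged sketch of the mechanism being discussed: once the attacher sidecar is removed, the driver's CSIDriver object is expected to set attachRequired: false, so kubelet skips the attach step and there are no VolumeAttachment objects that could block a second node. The driver name below is taken from the kubelet paths in the logs above; check the actual object in your cluster with kubectl get csidriver.

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  # Driver name as it appears in the kubelet plugin paths in the logs above.
  name: cephfs-hdd.csi.ceph.com
spec:
  # With attachRequired: false there is no ControllerPublishVolume /
  # VolumeAttachment step, so nothing at the attach stage prevents a second
  # node from mounting an RWO PVC.
  attachRequired: false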

Madhu-1 commented 1 year ago

@adux6991 Thanks for pointing to that issue. The attacher was removed for CephFS to get better performance; for now I am closing this issue, as it's not something we can solve 100% from our side. As an app template creator, I would take care to ensure all my replicas run on the same node if I am using an RWO PVC; if I want HA, I will make sure the PVC is RWX.
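
A minimal sketch of the co-location advice, with made-up labels and the claim name from the reproduction above: a required podAffinity rule on the hostname topology key keeps all replicas on the same node, so the RWO PVC is only ever mounted from a single node.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rwo-app            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rwo-app
  template:
    metadata:
      labels:
        app: rwo-app
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            # Schedule every replica onto the same node as the other replicas.
            - labelSelector:
                matchLabels:
                  app: rwo-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: ubuntu
          image: ubuntu:22.04
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - mountPath: /data
              name: datadir
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: cephfs-pvc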