openebs / zfs-localpv

Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that are integrated with a backend ZFS data storage stack.
https://openebs.io
Apache License 2.0

"Device already mounted at /var/lib/kubelet/pods" with a shared=yes ZFS dataset #497

Open etlfg opened 10 months ago

etlfg commented 10 months ago

What steps did you take and what happened:

I can't get two pods running on the same node to access the same dataset.

I've been reading all the issues and docs I can about sharing a dataset between pods.

But I can't get past the following error:

MountVolume.SetUp failed for volume "consume-pv" : rpc error: code = Internal desc = rpc error: code = Internal desc = verifyMount: device already mounted at [/var/lib/k0s/kubelet/pods/7f3fc9cd-5e94-4d32-9e59-3ae0caa41fc4/volumes/kubernetes.io~csi/import-pv/mount /host/var/lib/k0s/kubelet/pods/7f3fc9cd-5e94-4d32-9e59-3ae0caa41fc4/volumes/kubernetes.io~csi/import-pv/mount]

I also don't see the shared: yes param in my ZFSVolume CR, which should be there according to https://github.com/openebs/zfs-localpv/issues/152#issuecomment-653511445.
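
From that comment, my understanding (an assumption on my part, not something I have verified in the driver code) is that the provisioned ZFSVolume CR should itself carry the flag, roughly like this:

apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: import
  namespace: openebs
spec:
  capacity: "1073741824000"
  fsType: zfs
  ownerNodeID: main
  poolName: data
  shared: "yes"   # this is the field I would expect to see, but it is missing from my CR below
  volumeType: DATASET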

Here are my curated resources:

kubectl get sc -n openebs zfs-import -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-import
parameters:
  compression: "off"
  dedup: "off"
  fstype: zfs
  poolname: data/import
  recordsize: 16k
  shared: "yes"
  thinprovision: "no"
provisioner: zfs.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: Immediate
kubectl get zv -n openebs import -o yaml
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: import
  namespace: openebs
spec:
  capacity: "1073741824000"
  fsType: zfs
  ownerNodeID: main
  poolName: data
  volumeType: DATASET
status:
  state: Ready
kubectl get pv import-pv -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: import-pv
spec:
  accessModes:
  - ReadWriteOnce # Tried ReadWriteMany just in case, doesn't work as expected
  capacity:
    storage: 1Ti
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: import-pvc
    namespace: default
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs
    volumeAttributes:
      openebs.io/poolname: data
    volumeHandle: import
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - main
  persistentVolumeReclaimPolicy: Retain
  storageClassName: zfs-import
  volumeMode: Filesystem
status:
  phase: Bound
kubectl get pvc import-pvc -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: import-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Ti
  storageClassName: zfs-import
  volumeMode: Filesystem
  volumeName: import-pv
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Ti
  phase: Bound

What did you expect to happen:

I expect the two pods to share the same ZFS dataset so that there is only one destination for my files (the two applications have different concerns depending on the files placed in it).
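
For reference, the end state I want is two pods pinned to the same node, both mounting this dataset. A minimal sketch of what I mean, using a single shared PVC (pod name and image are placeholders, not my actual manifests):

apiVersion: v1
kind: Pod
metadata:
  name: consumer-a              # placeholder; the second pod would be identical except for its name
  namespace: default
spec:
  nodeSelector:
    kubernetes.io/hostname: main   # both pods have to land on the node that owns the dataset
  containers:
  - name: app
    image: busybox               # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: import-pvc      # both pods reference the same claim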

Environment:

w3aman commented 1 month ago

Hi @etlfg, I was also trying to use a shared mount and was able to do it successfully. So I thought I'd give it a try with the YAMLs you provided, and it worked for me here as well. I can see shared: yes in the -o yaml output of the ZFSVolume CR. Does this issue still persist for you? I would suggest giving it a try once again.

One point I want to check:

  1. In your storage class YAML I see poolname: data/import, but in your ZFSVolume and PV YAMLs it is only poolName: data. Can you confirm whether, by any chance, your storage class YAML was different while provisioning the volume? (See the quick check below.)
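
If you want to double-check on your side, something along these lines should show whether the shared flag actually made it onto the volume and which dataset the driver is mounting (adjust the names to your actual pool layout):

# does the ZFSVolume CR carry shared: yes, and which poolName was used?
kubectl get zv -n openebs import -o yaml | grep -E 'shared|poolName'

# on the node: which dataset exists and where is it mounted?
zfs list -o name,mountpoint | grep import
zfs get mounted,mountpoint data/import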