openebs / zfs-localpv

Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, integrated with a ZFS data storage backend.
https://openebs.io
Apache License 2.0

Cannot import existing ZVOL with XFS file system #550

Open · b1r3k opened this issue 4 weeks ago

b1r3k commented 4 weeks ago

What steps did you take and what happened:

I'm following the doc [zfs-localpv/docs/import-existing-volume.md at develop · openebs/zfs-localpv](https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md), but my ZFS volume has an XFS filesystem on top of it.

I've created this PV:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cockroachdb-data-2
spec:
  capacity:
    storage: 333Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: openebs-zfspv-imported
  csi:
    driver: zfs.csi.openebs.io
    fsType: "xfs"
    volumeAttributes:
      openebs.io/poolname: granary # change the pool name accordingly
    volumeHandle: vm-102-disk-0 # This should be same as the zfs volume name
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - talos-qsn-ail

Note the fsType: "xfs". Unfortunately I'm getting this error from the kubelet: MountVolume.SetUp failed for volume "pv-cockroachdb-data-2" : rpc error: code = Internal desc = zfsvolumes.zfs.openebs.io "vm-102-disk-0" not found

What did you expect to happen:

I'd like to mount a ZFS volume that has an XFS file system on it.

The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)

kubectl logs -f openebs-zfs-controller-f78f7467c-blr7q -n openebs -c openebs-zfs-plugin

kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin

NAME                                                        READY   STATUS    RESTARTS         AGE
openebs-zfslocalpv-zfs-localpv-controller-c5b7f6b49-frn22   5/5     Running   29 (6h21m ago)   9d
openebs-zfslocalpv-zfs-localpv-controller-c5b7f6b49-xmqn9   5/5     Running   11 (6h21m ago)   47h
openebs-zfslocalpv-zfs-localpv-node-hzpxx                   2/2     Running   7 (6h18m ago)    9d
openebs-zfslocalpv-zfs-localpv-node-jw8pg                   2/2     Running   1 (8d ago)       9d
apiVersion: v1
items:
- apiVersion: zfs.openebs.io/v1
  kind: ZFSVolume
  metadata:
    creationTimestamp: "2024-06-20T16:56:17Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: jester
    name: pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430
    namespace: openebs-localpv-zfs
    resourceVersion: "11923971"
    uid: 56f828ea-0dc0-4228-b4d7-6c5700e70d4a
  spec:
    capacity: "322122547200"
    fsType: zfs
    ownerNodeID: jester
    poolName: zfspv-pool
    volumeType: DATASET
  status:
    state: Ready
- apiVersion: zfs.openebs.io/v1
  kind: ZFSVolume
  metadata:
    creationTimestamp: "2024-06-25T09:05:33Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 4
    labels:
      kubernetes.io/nodename: jester
    name: pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3
    namespace: openebs-localpv-zfs
    resourceVersion: "13896769"
    uid: 6a46561a-057d-4926-9cdb-22f614f170a5
  spec:
    capacity: "537944653824"
    fsType: zfs
    ownerNodeID: jester
    poolName: zfspv-pool
    volumeType: DATASET
  status:
    state: Ready
kind: List
metadata:
  resourceVersion: ""
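As an aside, the spec.capacity fields in the ZFSVolume CRs above are plain byte-count strings (322122547200 is exactly 300Gi). A quick sketch of the conversion, which is useful when writing a ZFSVolume CR by hand for an import (the helper name is mine, not from the driver):

```python
# ZFSVolume spec.capacity is a byte-count string, so Gi-denominated
# sizes must be converted before writing a CR by hand.
GI = 1024 ** 3

def gi_to_capacity(gi: int) -> str:
    """Return a ZFSVolume-style capacity string for a size in Gi."""
    return str(gi * GI)

print(gi_to_capacity(300))  # "322122547200", the first CR above
print(gi_to_capacity(333))  # "357556027392", the 333Gi PV in this issue
```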

Anything else you would like to add:

The ZVOL I want to import sits on the talos-qsn-ail node:

root@zfs-utils:/# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
granary                    3.06T   470G   160K  /granary
granary/backups             101G   470G   101G  /granary/backups
granary/docker-registry     128K   470G   128K  /granary/docker-registry
granary/home               1.12T   470G  1.12T  /granary/home
granary/private-backups     635G   470G   635G  /granary/private-backups
granary/subvol-100-disk-0   720M  7.30G   719M  /granary/subvol-100-disk-0
granary/vm-101-disk-0       658M   470G   658M  -
granary/vm-102-disk-0       459G   925G  4.39G  -
granary/vm-102-disk-1      61.9G   514G  18.2G  -
granary/vm-102-disk-2       724G  1.13T  41.7G  -
granary/vm-102-disk-3       629M   470G   629M  -

Environment:

Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.26.2

Abhinandan-Purkait commented 3 weeks ago

@b1r3k The volumeHandle: vm-102-disk-0 that you have provided in the persistent volume's spec.csi seems to be incorrect, right? There is no ZFSVolume (zv) named vm-102-disk-0 in the cluster (at least among the zvs that you have pasted). The only ones I see are pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430 and pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3.

b1r3k commented 3 weeks ago

@b1r3k The volumeHandle: vm-102-disk-0 that you have provided in the persistent volume's spec.csi seems to be incorrect, right? There is no ZFSVolume (zv) named vm-102-disk-0 in the cluster (at least among the zvs that you have pasted). The only ones I see are pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430 and pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3.

That's surprising, since the volume can be seen using zfs list. pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430 and pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3 are on a different node (jester), where openebs-zfs created volumes from scratch. I'm trying to import a ZVOL residing on node talos-qsn-ail; this ZVOL was created outside of the openebs-zfs system. Since https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md describes importing a ZVOL, I assumed such a volume does not have to be created by openebs-zfs.

root@zfs-utils:/# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
granary                    3.06T   470G   160K  /granary
granary/backups             101G   470G   101G  /granary/backups
granary/docker-registry     128K   470G   128K  /granary/docker-registry
granary/home               1.12T   470G  1.12T  /granary/home
granary/private-backups     635G   470G   635G  /granary/private-backups
granary/subvol-100-disk-0   720M  7.30G   719M  /granary/subvol-100-disk-0
granary/vm-101-disk-0       658M   470G   658M  -
granary/vm-102-disk-0       459G   925G  4.39G  -
granary/vm-102-disk-1      61.9G   514G  18.2G  -
granary/vm-102-disk-2       724G  1.13T  41.7G  -
granary/vm-102-disk-3       629M   470G   629M  -

Abhinandan-Purkait commented 3 weeks ago

Did you do the step described here: https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md#step-2--attach-the-volume-with-localpv-zfs, i.e. create the ZFSVolume (ZV) CR for OpenEBS ZFS?
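Skipping that step would explain the "zfsvolumes.zfs.openebs.io \"vm-102-disk-0\" not found" error, since the driver resolves the PV's volumeHandle to a ZFSVolume CR of the same name. Below is only a hedged sketch of what such a CR might look like for this case, adapted from the doc's step 2: the namespace is assumed from the ZV dump earlier in this issue, the capacity is 333Gi expressed in bytes, and volumeType: ZVOL reflects that this is a zvol rather than a dataset. Verify every field against the doc before applying.

```yaml
# Sketch only: field values are inferred from this issue, not verified.
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: vm-102-disk-0              # must equal the PV's volumeHandle
  namespace: openebs-localpv-zfs   # assumed: namespace of the existing ZV CRs above
spec:
  capacity: "357556027392"         # 333Gi in bytes
  fsType: xfs
  ownerNodeID: talos-qsn-ail       # node where the zvol resides
  poolName: granary
  volumeType: ZVOL                 # a zvol, not a DATASET
status:
  state: Ready
```

Once a CR like this exists and is Ready, the node plugin should be able to resolve the volumeHandle and mount the zvol as XFS.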