b1r3k opened this issue 4 weeks ago
@b1r3k The `volumeHandle: vm-102-disk-0` that you have provided in the persistent volume `spec.csi` seems to be incorrect, right? There is no zv named `vm-102-disk-0` in the cluster (at least from the zvs that you have pasted). The only ones I see are `pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430` and `pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3`.
That's surprising, since the volume can be seen using `zfs list`. `pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430` and `pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3` are on a different node (`jester`), where openebs-zfs created the volumes from scratch. I'm trying to import a ZVOL residing on node `talos-qsn-ail`; this ZVOL was created outside of the openebs-zfs system. Since https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md describes importing a ZVOL, I assumed such a volume does not have to be created by openebs-zfs.
```
root@zfs-utils:/# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
granary                    3.06T   470G   160K  /granary
granary/backups             101G   470G   101G  /granary/backups
granary/docker-registry     128K   470G   128K  /granary/docker-registry
granary/home               1.12T   470G  1.12T  /granary/home
granary/private-backups     635G   470G   635G  /granary/private-backups
granary/subvol-100-disk-0   720M  7.30G   719M  /granary/subvol-100-disk-0
granary/vm-101-disk-0       658M   470G   658M  -
granary/vm-102-disk-0       459G   925G  4.39G  -
granary/vm-102-disk-1      61.9G   514G  18.2G  -
granary/vm-102-disk-2       724G  1.13T  41.7G  -
granary/vm-102-disk-3       629M   470G   629M  -
```
Did you do this step here: https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md#step-2--attach-the-volume-with-localpv-zfs, i.e. creation of the ZFSVolume (zv) CR of openebs zfs?
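For reference, that step boils down to creating a ZFSVolume CR whose name matches the PV's `spec.csi.volumeHandle`. The sketch below is a reconstruction from the values in this thread, not the doc's exact manifest; in particular the `capacity` is an assumed, illustrative value:

```yaml
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: vm-102-disk-0          # must match the PV's spec.csi.volumeHandle
  namespace: openebs           # namespace the zfs-localpv driver watches
spec:
  capacity: "53687091200"      # assumed; set this to the ZVOL's volsize in bytes
  fsType: xfs                  # filesystem already present on the ZVOL
  ownerNodeID: talos-qsn-ail   # node that owns the granary pool
  poolName: granary
  volumeType: ZVOL             # a zvol, not a dataset
```

Without such a CR, the node plugin has nothing to resolve `vm-102-disk-0` against, which matches the `zfsvolumes.zfs.openebs.io "vm-102-disk-0" not found` error reported in this issue.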
What steps did you take and what happened:
I'm following the doc [zfs-localpv/docs/import-existing-volume.md at develop · openebs/zfs-localpv](https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md), but my ZFS volume has an XFS filesystem on top of it, so I've created the PV with `fsType: "xfs"` (a sketch follows below). Unfortunately I'm getting this error: `kubelet MountVolume.SetUp failed for volume "pv-cockroachdb-data-2" : rpc error: code = Internal desc = zfsvolumes.zfs.openebs.io "vm-102-disk-0" not found`
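For context, here is a minimal sketch of the kind of PV spec described above, assuming the `granary` pool and an illustrative capacity (the PV name, `volumeHandle`, `fsType`, and node come from this report; the `openebs.io/poolname` attribute follows the shape used by the import doc):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cockroachdb-data-2
spec:
  capacity:
    storage: 50Gi                      # assumed; size of the imported ZVOL
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: zfs.csi.openebs.io
    fsType: xfs                        # XFS already sits on the ZVOL
    volumeHandle: vm-102-disk-0
    volumeAttributes:
      openebs.io/poolname: granary
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - talos-qsn-ail
```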
What did you expect to happen:
I'd like to mount the ZFS volume with the XFS file system on it.
The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)
```
kubectl logs -f openebs-zfs-controller-f78f7467c-blr7q -n openebs -c openebs-zfs-plugin
kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin
kubectl get pods -n openebs
kubectl get zv -A -o yaml
```
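As a quick check to add to the above: the node plugin resolves the PV's `volumeHandle` against ZFSVolume CRs, so the mount can only succeed once a query like this (using the handle from this report) returns an object:

```
kubectl get zv -n openebs vm-102-disk-0 -o yaml
```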
Anything else you would like to add:
The ZVOL I want to import sits on the `talos-qsn-ail` node.

Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`): Talos