What steps did you take and what happened:

1. Deleted a PVC that had a VolumeSnapshot taken from it. The PVC deletion succeeded and the backing ZFS dataset was deleted, but the ZFS snapshot data was deleted along with it, leaving the VolumeSnapshot in Kubernetes invalid?
2. Created a new PVC with the VolumeSnapshot as the dataSource.
3. Creation fails with the following error (the failing clone can also be reproduced by hand, as sketched after the log):

E0926 21:40:55.095372 1 zfs_util.go:468] zfs: could not clone volume zfspv-pool/pvc-78c4705a-06b2-491a-befd-2db5bc6314a1 cmd [clone -o quota=1073741824 -o recordsize=128k -o mountpoint=legacy -o dedup=off -o compression=off zfspv-pool/pvc-bb09f28f-6dd8-4064-9e26-d88ccdbe38a5@snapshot-a0e8406a-495c-42e1-8067-82258746ec22 zfspv-pool/pvc-78c4705a-06b2-491a-befd-2db5bc6314a1] error: cannot open 'zfspv-pool/pvc-bb09f28f-6dd8-4064-9e26-d88ccdbe38a5@snapshot-a0e8406a-495c-42e1-8067-82258746ec22': dataset does not exist
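The same failure is visible directly on the node. A minimal sketch, assuming shell access to the node and using the pool/dataset names from the log above:

```sh
# List all snapshots under the pool; after the PVC deletion the origin
# snapshot no longer appears, since it was destroyed with the dataset.
zfs list -t snapshot -r zfspv-pool

# The clone the plugin attempts (copied from the log above) then fails
# with "dataset does not exist", because the origin snapshot is gone.
zfs clone -o quota=1073741824 -o recordsize=128k -o mountpoint=legacy \
  -o dedup=off -o compression=off \
  zfspv-pool/pvc-bb09f28f-6dd8-4064-9e26-d88ccdbe38a5@snapshot-a0e8406a-495c-42e1-8067-82258746ec22 \
  zfspv-pool/pvc-78c4705a-06b2-491a-befd-2db5bc6314a1
```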
What did you expect to happen:
The snapshot on the ZFS side should remain until the VolumeSnapshot is removed, so that it can be cloned at any time.
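For context, the restore PVC was of roughly this shape. This is only a sketch: the PVC name, StorageClass name, and VolumeSnapshot name are illustrative, not the actual objects from this cluster; the size matches the 1073741824-byte (1 GiB) quota in the log:

```sh
# Restore a new PVC from an existing VolumeSnapshot (names illustrative).
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                # hypothetical name
spec:
  storageClassName: openebs-zfspv   # assumed StorageClass name
  dataSource:
    name: my-snapshot               # hypothetical VolumeSnapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                  # matches the quota in the log
EOF
```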
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)
kubectl logs -f openebs-zfs-controller-f78f7467c-blr7q -n openebs -c openebs-zfs-plugin
kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin
kubectl get pods -n openebs
kubectl get zv -A -o yaml
Anything else you would like to add:
Environment:
- Kubernetes version (use kubectl version): v1.31.1
- OS (e.g. from /etc/os-release): Talos v1.8.0