piraeusdatastore / piraeus-operator

The Piraeus Operator manages LINSTOR clusters in Kubernetes.
https://piraeus.io/
Apache License 2.0

Importing/Mounting pre-existing volumes in linstor/DRBD #660

Open bernardgut opened 3 months ago

bernardgut commented 3 months ago

Hello

Let's say I run a Talos cluster with some Piraeus/LINSTOR/DRBD storage backed by ZFS_THIN datasets (or LVM PVs, for that matter). Let's say I recreate the cluster for some reason (I lost the Piraeus Operator state). The storage datasets (PVs) are still there on the nodes' physical disks. How would you "import" them back into the Piraeus/LINSTOR/DRBD CSI stack (if that is possible at all)?

I will give you a concrete example:

This morning I nuked a cluster. Upon recreation the cluster is "clean" (k linstor resource list-volumes returns null). On two of the nodes I have:

/ # zfs list
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
zpool-1                                                 25.5G  22.5G    96K  /zpool-1
zpool-1/pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2_00000  21.9G  22.5G  21.9G  -

Is there any "easy" way for the operator to automagically recreate the LINSTOR resources associated with this PVC, so that I can then bind it to an existing workload? How would you go about this?

thanks B.

WanzenBug commented 3 months ago

Is there any "easy" way for the operator to automagically recreate the LINSTOR resources associated with this PVC, so that I can then bind it to an existing workload?

The answer is no. If you nuke the kube API, LINSTOR will lose its state. You could create a script that tries to find the "most plausible" state by looking at the existing backing devices and DRBD metadata (i.e. all zvols and LVs will be named "pvc-<UUID>_00000").

If you then use that collected information to make the appropriate API requests to LINSTOR, you could get LINSTOR back to nearly the original state. Afterwards you would just have to translate the LINSTOR resources into PVs and PVCs, which should be relatively straightforward.
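
For illustration, a rough sketch of that discovery step could look something like the following. It only lists zvols that follow the pvc-<UUID>_00000 naming convention and prints the LINSTOR CLI calls for review rather than running them; the storage pool and node names are placeholders, the reported sizes may need rounding, and whether LINSTOR will actually reuse the existing backing zvol rather than trying to create a fresh one is exactly the part you would have to verify by hand.

#!/bin/sh
# Sketch only: enumerate ZFS zvols that look like former PVC-backed
# LINSTOR volumes and print the linstor commands that would recreate
# matching resource definitions. Review before running anything.

STORAGE_POOL="pool-zfs"   # placeholder: your LINSTOR storage pool name
NODE="$(hostname)"        # placeholder: node holding the backing zvols

# LINSTOR names ZFS/LVM backing volumes "<resource>_<volume-number>",
# e.g. pvc-4888c2c6-..._00000, so stripping the trailing _00000 gives
# the resource name back.
zfs list -H -t volume -o name,volsize | while read -r name size; do
    case "$name" in
        */pvc-*_00000)
            res="$(basename "$name")"
            res="${res%_00000}"
            echo "linstor resource-definition create $res"
            echo "linstor volume-definition create $res $size"
            echo "linstor resource create $NODE $res --storage-pool $STORAGE_POOL"
            ;;
    esac
done

The last step would then be the PV/PVC translation mentioned above: roughly, static PVs whose CSI driver is linstor.csi.linbit.com and whose volume handle is the recreated LINSTOR resource name, plus matching PVCs for the workloads.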

bernardgut commented 3 months ago

I see. I guess at that point it is just easier to do something along the lines of:

  1. create a new pvc along with the new workload
  2. Log into one of the nodes and run zfs send originpool/old-pvc@snapshot | zfs receive destinationpool/new-pvc (sketched below)
  3. Let DRBD do its thing

That should work, right?
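
Concretely, step 2 would be something like the following on a node that still has the old dataset. The names are placeholders based on the output above, and this assumes the zvol backing the new PVC is not in active use during the copy; whether DRBD then accepts the copied data without trouble is the part that would still need to be confirmed.

# Placeholders: substitute the real old and new PVC dataset names.
OLD=zpool-1/pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2_00000
NEW=zpool-1/pvc-<new-uuid>_00000

# Snapshot the old data and stream it into the new dataset. zfs receive
# of a full stream normally wants to create the target itself, so the
# zvol LINSTOR provisioned for the new PVC may have to be destroyed or
# forcibly overwritten first, with the DRBD resource down on this node.
zfs snapshot "${OLD}@migrate"
zfs send "${OLD}@migrate" | zfs receive "${NEW}"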