bermuda-sunfish / zfs-localpv

Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, integrated with a backend ZFS data storage stack.
https://openebs.io
Apache License 2.0

Allow control over ZFS dataset name during creation #1

Closed bermuda-sunfish closed 5 months ago

bermuda-sunfish commented 5 months ago

Describe the problem/challenge you have When the cluster is destroyed, the persistent volumes remain (by configuration). Ideally, when the cluster is recreated, the nodes which have persistent storage requirement should continue to use the "original" storage. This way, Hashicorp Vault, for example, does not need to be reinitialized after each cluster creation. The same applies to Postgresql volumes, etc.

Currently, ZFS volumes are named by the CSI provisioner with the pattern PV-[UUID]; this name is passed to the ZFSVolume resource (specifically, ZFSVolume.name) and, consequently, becomes the zfs dataset name.
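For context, the current behavior produces objects roughly like the following (an illustrative, abridged ZFSVolume; the exact field set and the generated UUID will differ per volume):

```yaml
# Illustrative ZFSVolume as created today by the CSI provisioner.
# metadata.name follows the generated PV-[UUID] pattern described above,
# and becomes the ZFS dataset name under the configured pool,
# e.g. zfspv-pool/PV-<uuid>. A new UUID is generated on every cluster
# recreation, so the old dataset is never picked up again.
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: PV-<uuid>          # generated; changes each time the PVC is recreated
  namespace: openebs
spec:
  poolName: zfspv-pool     # assumed pool name for illustration
  capacity: "10Gi"
```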

Describe the solution you'd like I would like to be able to influence the zfs dataset name via Kubernetes resource configuration. For example, I could specify that the zfs dataset name is PV-[namespace]-[pvc-name].

This way, when the original cluster is destroyed and a new one is created, the Kubernetes resource definitions are recreated but the existing ZFS datasets are reused.

Anything else you would like to add: This should be an option that is passed to the driver, e.g. by means of a Kubernetes annotation or label such as "openebs.io/pvc-names-zfs-dataset: true". If the annotation is not present, the current behavior is preserved.
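A sketch of how the opt-in could look on a PVC (hypothetical; the annotation key, the derived name pattern, and the storage class name are all proposal assumptions, not an existing feature):

```yaml
# Hypothetical PVC using the proposed opt-in annotation; not implemented today.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-data
  namespace: vault
  annotations:
    openebs.io/pvc-names-zfs-dataset: "true"   # proposed opt-in flag
spec:
  storageClassName: openebs-zfspv              # assumed ZFS-LocalPV storage class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
# With the annotation set, the resulting dataset would be deterministic,
# e.g. zfspv-pool/PV-vault-vault-data, instead of zfspv-pool/PV-<uuid>,
# so a recreated cluster can reattach to the same dataset.
```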

Additionally, a user should consider cases where multiple Kubernetes clusters are deployed on the same physical storage, or where a zfs dataset could end up mounted into pods in different clusters, for example.