openebs / mayastor

Dynamically provision Stateful Persistent Replicated Cluster-wide Fabric Volumes & Filesystems for Kubernetes, backed by an optimized NVMe SPDK data storage stack.
Apache License 2.0

Provide a mechanism to create DiskPool CRs #1758

Open cmontemuino opened 1 month ago

cmontemuino commented 1 month ago

Is your feature request related to a problem? Please describe.

It would be very helpful to have a mechanism for defining DiskPool objects in this same chart.

This way, we don't need to specify the DiskPool CRs by other means (e.g., a separate chart, separate manifests managed by kustomize, etc.), which ultimately leads to a non-cohesive approach.

Describe the solution you'd like

A mechanism similar to the one used for defining the StorageClass CR: https://github.com/openebs/mayastor-extensions/blob/develop/chart/templates/storageclass.yaml

Proposed solution: chart/templates/diskpool.yaml

{{- if .Values.diskPools.enabled }}
{{- /* Render DiskPools only when the DiskPool CRD (openebs.io/v1beta2) is installed */ -}}
{{- if .Capabilities.APIVersions.Has "openebs.io/v1beta2/DiskPool" -}}
{{- $releaseName := .Release.Name -}}
{{- $diskPools := .Values.diskPools -}}
{{- /* One DiskPool CR per node/disk entry under .Values.diskPools.nodes */ -}}
{{- range $node, $disks := $diskPools.nodes }}
{{- range $index, $disk := $disks }}
{{- $dpName := (printf "%s-pool-on-%s-disk-%d" $releaseName $node $index | trunc 63) }}
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  {{- if $diskPools.annotations }}
  annotations: {{ toYaml $diskPools.annotations | nindent 4 }}
  {{- end }}
  name: {{ $dpName }}
spec:
  node: {{ $node | trunc 63 }}
  disks: [{{ $disk | quote }}]
---
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}

values.yaml:

#...
diskPools:
  enabled: false
# annotations:
#   argocd.argoproj.io/sync-options: Delete=false,Prune=false
# nodes:
#   node-1:
#   - /dev/disk/by-id/nvme-eui.0050432100000001 
#   node-2:
#   - /dev/disk/by-id/nvme-eui.0050432100000002
#   - /dev/disk/by-id/nvme-eui.0050432100000003   

#...
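
For illustration, assuming diskPools.enabled is set to true, the commented nodes block above is uncommented, and the release is named "mayastor" (a placeholder), the template would render one DiskPool per node/disk entry, roughly:

# name pattern: <release>-pool-on-<node>-disk-<index>, truncated to 63 characters
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: mayastor-pool-on-node-1-disk-0
spec:
  node: node-1
  disks: ["/dev/disk/by-id/nvme-eui.0050432100000001"]
---
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: mayastor-pool-on-node-2-disk-0
spec:
  node: node-2
  disks: ["/dev/disk/by-id/nvme-eui.0050432100000002"]

(and a third document for the second disk on node-2). The output can be previewed with helm template before installing anything.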

Describe alternatives you've considered

We currently maintain a tweaked version of this chart with the proposal from above.

Our first approach was to create the CRs in a separate Chart. We've abandoned that idea because of the increased cognitive load.

Additional context

None

cmontemuino commented 1 month ago

I'm open to creating a PR with the solution we're currently using.

sinhaashish commented 1 month ago

First of all, thank you for your interest in submitting the PR.

As far as I know, the charts used for installation follow a standard format, and we generally don't encourage (though we don’t prohibit it either, as it's up to the user) editing the values.yaml file before applying it.

With your proposed feature, users would need to configure the node name and disk in the values.yaml file for the DiskPool CR to be created.

The storage class example you referred to here creates a default storage class that doesn't require customization. It's intended for users to benefit from having a pre-provisioned default storage class available for immediate use.

Open to suggestions here. @avishnu @tiagolobocastro

cmontemuino commented 1 month ago

Thanks for the feedback @sinhaashish! I was thinking of an optional feature, only for the users who want to take advantage of it.

It's similar to what other Helm charts offer when you want to create extra resources in a single go. The inspiration came from Argo CD.

tiagolobocastro commented 1 month ago

I think this would be good. I don't think this requires users to edit the helm chart; on the contrary, it means they can use the upstream chart and simply pass -f extra.yaml.
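
For example (a minimal sketch; the extra.yaml file name, local chart path and namespace are placeholders, not anything fixed by the chart), opting in could look like:

# extra.yaml -- only the opt-in values for DiskPool creation
diskPools:
  enabled: true
  nodes:
    node-1:
    - /dev/disk/by-id/nvme-eui.0050432100000001

# preview the rendered DiskPools, then install/upgrade with the extra values layered on the defaults
helm template mayastor ./chart -f extra.yaml
helm upgrade --install mayastor ./chart -n openebs -f extra.yaml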

How would it work if a user enables disk pools and then disables them? Does helm forcefully remove the pool CRs by deleting them and removing finalizers, or does it simply issue a delete?

cmontemuino commented 1 month ago

If the user disables diskPools and that also triggers removing the CRD, then the DiskPool resources should be removed because of:

{{- if .Capabilities.APIVersions.Has "openebs.io/v1beta2/DiskPool" -}}
...
{{- end }}

Otherwise, the CRs would remain in etcd.
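
As far as I understand Helm's behaviour, manifests that drop out of the rendered release on upgrade are removed with a plain delete, so a DiskPool held back by a finalizer would be left in Terminating rather than force-removed. A rough way to verify on a test cluster (release name, chart path and namespace are placeholders):

# disable the feature and upgrade; the DiskPool manifests drop out of the release
helm upgrade mayastor ./chart -n openebs --set diskPools.enabled=false

# anything blocked by a finalizer would still be listed here
kubectl get diskpools -n openebs
kubectl get diskpool <pool-name> -n openebs -o jsonpath='{.metadata.finalizers}'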