nutanix / helm

Nutanix Helm Charts repository
https://nutanix.github.io/helm/
MIT License

Feature request: Support CSI Storage Driver to be installed multiple times #83

Closed. olljanat closed this issue 1 year ago.

olljanat commented 1 year ago

I noticed that you already have support for a legacy flag which can be used to switch between two different driver names. With that it is possible (in theory at least) to install the "CSI Storage Driver" twice to the same cluster.

However, it would be useful to be able to customize the driver name. Then it would be possible to create a Kubernetes cluster which is stretched across three AHV clusters located in different datacenters and run, for example, a three node RabbitMQ cluster where each of those nodes stores its data on its local AHV cluster. There would then be zero downtime on that RabbitMQ cluster even during a complete datacenter failure.

Other parameters like the storage class and Prism Element connectivity details are already parameterized, so only driver name customization support is really missing.

I can also create a pull request for this if it is agreed that it is something which would get merged here.

tuxtof commented 1 year ago

Hello @olljanat. Usually we address this scenario with 3 StorageClasses, each pointing to a different AHV cluster. There is no need to deploy the CSI driver three times; it already supports multiple AHV clusters.

I'll let you test this approach ++
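
For illustration, the suggestion boils down to keeping a single driver (provisioner csi.nutanix.com) and creating one StorageClass per AHV cluster, where only the referenced secret and the data services endpoint differ. A minimal sketch, with placeholder names, namespace and address:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nutanix-volume-dc1            # one class per datacenter / AHV cluster
provisioner: csi.nutanix.com          # same driver for every class
parameters:
  storageType: NutanixVolumes
  storageContainer: default-container
  csi.storage.k8s.io/fstype: ext4
  # credentials of this particular AHV cluster (the matching node-publish and
  # controller-expand secret parameters point at the same secret)
  csi.storage.k8s.io/provisioner-secret-name: ntnx-secret-dc1
  csi.storage.k8s.io/provisioner-secret-namespace: ntnx-system
  dataServiceEndPoint: 10.0.1.10:3260 # data services IP of this cluster
# repeat with a different secret and dataServiceEndPoint for dc2 and dc3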

olljanat commented 1 year ago

Thanks for your tips. Good that it can already be done and it is just a matter of configuration.

I will test it next week, but documenting here what I have figured out so far: I should use createSecret: false in values.yaml.

Then bundle this kind of custom Secret and StorageClass YAML with the Helm chart deployment (our deployment tooling has a separate chart-addons folder for these).
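
For reference, the corresponding CSI chart values override would then be just that flag; a minimal sketch with everything else left at the chart defaults:

# values.yaml override for the CSI storage chart: skip the chart-managed secret,
# the per-cluster Secrets from the add-on templates below provide credentials instead
createSecret: false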

Secret:

{{- define "releaseNamespace" -}}
{{- .Release.Namespace }}
{{- end }}
{{- range $f := .Values.clusters }}
apiVersion: v1
kind: Secret
metadata:
  name: ntnx-secret-{{ $f.name }}
  namespace: {{ template "releaseNamespace" $ }}
data:
  # base64 encoded prism-ip:prism-port:admin:password.
  # E.g.: echo -n "10.83.0.91:9440:admin:mypassword" | base64
  key: {{ printf "%s:9440:%s:%s" $f.prismEndPoint $f.username $f.password | b64enc }}
---
{{- end }}

StorageClass:

{{- define "releaseService" -}}
{{- .Release.Namespace }}
{{- end }}
{{- range $f := .Values.clusters }}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nutanix-volume-{{ $f.name }}
provisioner: csi.nutanix.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: ntnx-secret-{{ $f.name }}
  csi.storage.k8s.io/provisioner-secret-namespace: {{ template "releaseNamespace" $ }}
  csi.storage.k8s.io/node-publish-secret-name: ntnx-secret-{{ $f.name }}
  csi.storage.k8s.io/node-publish-secret-namespace: {{ template "releaseNamespace" $ }}
  csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret-{{ $f.name }}
  csi.storage.k8s.io/controller-expand-secret-namespace: {{ template "releaseNamespace" $ }}
  csi.storage.k8s.io/fstype: ext4
  storageContainer: default-container
  storageType: NutanixVolumes
  description: "{{ $f.name }}"
  whitelistIPMode: ENABLED
  dataServiceEndPoint: {{ $f.dataServiceEndPoint }}
allowVolumeExpansion: true
reclaimPolicy: Retain
---
{{- end }}
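
A workload then just picks the class of the AHV cluster it should live on. For example, a claim for the RabbitMQ replica that belongs to the ntnx01 cluster from the values example below (PVC name and size are made up for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-data-ntnx01               # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nutanix-volume-ntnx01  # class generated for cluster "ntnx01"
  resources:
    requests:
      storage: 10Gi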

So then I can dynamically create those Secrets and StorageClasses for any number of clusters by including this in values.yaml:

clusters:
- name: ntnx01
  prismEndPoint: ntnx01
  username: user
  password: pwd
  dataServiceEndPoint: 192.168.1.10:3260
- name: stp-ntnx02
  prismEndPoint: ntnx02
  username: user
  password: pwd
  dataServiceEndPoint: 192.168.2.10:3260
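
For reference, with the first entry above the Secret template should render to something like this (the namespace is whatever .Release.Namespace resolves to, shown here as ntnx-system; the base64 value is simply echo -n "ntnx01:9440:user:pwd" | base64):

apiVersion: v1
kind: Secret
metadata:
  name: ntnx-secret-ntnx01
  namespace: ntnx-system
data:
  # base64 encoded prism-ip:prism-port:admin:password
  key: bnRueDAxOjk0NDA6dXNlcjpwd2Q=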