bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/redis] StatefulSet PVC template does not set common labels #20373

Open asafsgit opened 1 year ago

asafsgit commented 1 year ago

Name and Version

bitnami/redis 18.1.6

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. Use a values file containing:

```yaml
commonLabels:
  foo: bar
```

  2. Run `helm install oci://registry-1.docker.io/bitnamicharts/redis -f C:\GitHub\QM_Apollo\hydra\Redis\redis-values-test.yaml --dry-run --generate-name`
  3. The StatefulSet's PersistentVolumeClaim template does not include the common labels

Are you using any custom parameters or values?

No response

What is the expected behavior?

The StatefulSet should render the common labels in the PVC template:

```yaml
volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: redis-data
      labels:
        foo: bar
        app.kubernetes.io/name: redis
        ....
    spec:
```

What do you see instead?

```yaml
volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: redis-data
      labels:
        app.kubernetes.io/instance: redis-1698119788
        app.kubernetes.io/name: redis
        app.kubernetes.io/component: master
```

Additional information

This seems related to issue #20264.

If I change the StatefulSet template from

```yaml
{{- $claimLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.master.persistence.labels .Values.commonLabels ) "context" . ) }}
        labels: {{- include "common.labels.matchLabels" ( dict "customLabels" $claimLabels "context" $ ) | nindent 10 }}
```

to

```yaml
{{- $claimLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.master.persistence.labels .Values.commonLabels ) "context" . ) }}
        labels: {{- include "common.labels.standard" ( dict "customLabels" $claimLabels "context" $ ) | nindent 10 }}
```

I can see my label.
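For context, the observable difference between the two helpers, judging from the renders shown earlier in this issue (the description of the helper internals is my assumption about the bitnami/common library, not quoted from its source), can be sketched as:

```yaml
# common.labels.matchLabels keeps only the selector-safe
# app.kubernetes.io/* keys, so custom keys such as foo are dropped:
labels:
  app.kubernetes.io/instance: redis-1698119788
  app.kubernetes.io/name: redis
  app.kubernetes.io/component: master
---
# common.labels.standard merges the custom labels with the full
# standard label set, so foo: bar survives (the managed-by key is
# an assumption about what the standard helper adds):
labels:
  foo: bar
  app.kubernetes.io/instance: redis-1698119788
  app.kubernetes.io/name: redis
  app.kubernetes.io/component: master
  app.kubernetes.io/managed-by: Helm
```

This is consistent with `matchLabels` being intended for selector fields, which must stay stable for the lifetime of a workload, while `standard` is intended for ordinary metadata labels.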

javsalgar commented 1 year ago

Hi,

I believe this has to do with breaking changes. If we add commonLabels to the PVCs, we need to bump the major version, because installations already using commonLabels would break on upgrade. Therefore, I think we were waiting for a major bump to add that section.
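For what it's worth, here is a sketch of why this counts as breaking (my reading; the thread does not spell it out): `volumeClaimTemplates` is an immutable part of a StatefulSet spec, so a chart release that changes the rendered PVC labels cannot be applied over an existing installation:

```yaml
# Rendered by the current chart version (what existing releases have):
volumeClaimTemplates:
  - metadata:
      name: redis-data
      labels:
        app.kubernetes.io/name: redis

# If commonLabels were rendered here, the next upgrade would produce:
#
#       labels:
#         foo: bar                        # new key
#         app.kubernetes.io/name: redis
#
# Because volumeClaimTemplates cannot be updated in place, the API
# server rejects the modified StatefulSet spec on helm upgrade,
# which is why the change is reserved for a major version bump.
```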

asafsgit commented 1 year ago

Thanks @javsalgar!

Please note that it is not only commonLabels. This config does not work either:

```yaml
master:
  persistence:
    labels:
      foo: bar
```

I will wait for your next major release.

italoiz commented 9 months ago

It seems this is happening with the RabbitMQ chart too. I tried setting commonLabels and also persistence.labels, and neither had any effect on the created PVCs.

Below is what my values.yaml looks like right now.

values.yaml

```yaml
# ... rest of file above

## @param commonLabels Labels to add to all deployed objects
##
commonLabels:
  recurring-job.longhorn.io/source: enabled
  recurring-job-group.longhorn.io/three-times-day-backup: enabled

# ...

## @section Persistence parameters
##
persistence:
  ## @param persistence.enabled Enable RabbitMQ data persistence using PVC
  ##
  enabled: true
  ## @param persistence.storageClass PVC Storage Class for RabbitMQ data volume
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: longhorn
  ## @param persistence.selector Selector to match an existing Persistent Volume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  ##
  # selector: {}
  ## @param persistence.accessModes PVC Access Modes for RabbitMQ data volume
  ##
  accessModes:
    - ReadWriteOnce
  ## @param persistence.existingClaim Provide an existing PersistentVolumeClaims
  ## The value is evaluated as a template
  ## So, for example, the name can depend on .Release or .Chart
  ##
  # existingClaim: rabbitmq-pvc
  ## @param persistence.mountPath The path the volume will be mounted at
  ## Note: useful when using custom RabbitMQ images
  ##
  # mountPath: /bitnami/rabbitmq/mnesia
  ## @param persistence.subPath The subdirectory of the volume to mount to
  ## Useful in dev environments and one PV for multiple services
  ##
  # subPath: ""
  ## @param persistence.size PVC Storage Request for RabbitMQ data volume
  ## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well
  ##
  size: 3Gi
  ## @param persistence.annotations Persistence annotations. Evaluated as a template
  ## Example:
  ## annotations:
  ##   example.io/disk-volume-type: SSD
  ##
  # annotations: {}
  ## @param persistence.labels Persistence labels. Evaluated as a template
  ## Example:
  ## labels:
  ##   app: my-app
  labels:
    recurring-job.longhorn.io/source: enabled
    recurring-job-group.longhorn.io/three-times-day-backup: enabled
```

Am I doing something wrong?

javsalgar commented 9 months ago

Hi,

In the RabbitMQ chart, the persistence.labels value is being considered:

```yaml
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data
        {{- $claimLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.persistence.labels .Values.commonLabels ) "context" . ) }}
        labels: {{- include "common.labels.matchLabels" ( dict "customLabels" $claimLabels "context" $ ) | nindent 10 }}
        {{- if .Values.persistence.annotations }}
        annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.persistence.annotations "context" $) | nindent 10 }}
        {{- end }}
```
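Note that this snippet still calls `common.labels.matchLabels`, the same helper the reporter swapped out in the Redis chart earlier in this thread. Assuming the same change applies here (an untested sketch on my part, and subject to the breaking-change concern raised above), the RabbitMQ template would become:

```yaml
        {{- $claimLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.persistence.labels .Values.commonLabels ) "context" . ) }}
        labels: {{- include "common.labels.standard" ( dict "customLabels" $claimLabels "context" $ ) | nindent 10 }}
```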

Could you confirm that you are using the latest version of the chart?

italoiz commented 9 months ago

Hi @javsalgar. I would say I'm using the latest version; 12.7.0 was released yesterday, but it didn't touch anything related to this.

Below is my release list, from `helm list -n rabbitmq-system`:

```json
[
  {
    "name": "rabbitmq",
    "namespace": "rabbitmq-system",
    "revision": "1",
    "updated": "2024-01-16 12:19:46.021737506 -0300 -03",
    "status": "deployed",
    "chart": "rabbitmq-12.6.3",
    "app_version": "3.12.12"
  },
  {
    "name": "rabbitmq-cluster-operator",
    "namespace": "rabbitmq-system",
    "revision": "1",
    "updated": "2024-01-15 20:56:20.198847595 -0300 -03",
    "status": "deployed",
    "chart": "rabbitmq-cluster-operator-3.10.10",
    "app_version": "2.6.0"
  }
]
```

javsalgar commented 9 months ago

Moving this to a new conversation: https://github.com/bitnami/charts/issues/22364