bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/kafka] Failed to set nodeSelector or affinity for kafka and zookeeper pods #10181

Closed qualitesys closed 2 years ago

qualitesys commented 2 years ago

Name and Version

bitnami/kafka kafka-16.2.7

What steps will reproduce the bug?

I have created a Kubernetes cluster (Scaleway multi-cloud KOSMOS) with 3 pools:

I want to start the kafka pods in the pool-kafka pool.

I wish to force the nodeSelector according to a custom rule (label k8s.scaleway.com/pool-name = pool-kafka). I have tried to set the helm-values.yaml file with

nodeSelector:
  k8s.scaleway.com/pool-name: pool-kafka

with no success

I have tried to set the helm-values.yaml file with

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms: 
        - matchExpressions:
          - key: k8s.scaleway.com/pool-name
            operator: In
            values: pool-kafka

with no success

Are you using any custom parameters or values?

No response

What is the expected behavior?

The kafka-0 and zookeeper-0 pods should respect the condition k8s.scaleway.com/pool-name = pool-kafka

What do you see instead?

The two pods are started in another pool, which seems to be the default pool-bbd.

Helm version

version.BuildInfo{Version:"v3.8.1", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}

Kubectl version

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:32:02Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}

Additional information

No response

javsalgar commented 2 years ago

Hi,

Could you show the rendered YAML for the statefulset, just to ensure that it has the proper section defined?
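For reference, the chart can be rendered locally to inspect the generated statefulset; something like the following (the release name `my-kafka` is an assumption):

```shell
# Render the chart with the user's values and show the scheduling sections
helm template my-kafka bitnami/kafka -f kafka-helm-values.yaml \
  | grep -B 2 -A 2 'nodeSelector'
```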

qualitesys commented 2 years ago

My kafka-helm-values.yaml is as follows:

logRetentionHours: 1
logSegmentBytes: "5000000"
nodeSelector:
  k8s.scaleway.com/pool-name: pool-kafka

After new tests, the kafka-0 pod is in the pool-kafka pool, but kafka-zookeeper-0 is not.

So the nodeSelector is not applied to the kafka-zookeeper pod as it should be.

javsalgar commented 2 years ago

Hi,

I tried rendering the kafka statefulset and I can see that the section is indeed there:

# Source: kafka/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
...
      nodeSelector:
        k8s.scaleway.com/pool-name: pool-kafka

In this case, I believe that the issue is not in the helm chart but in the Kubernetes cluster configuration. Could you check with Scaleway support in this case?
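One quick way to check the cluster side is to verify that the nodes actually carry the expected label, e.g.:

```shell
# List all nodes with their labels to confirm the pool label exists
kubectl get nodes --show-labels

# List only the nodes in the target pool; an empty result would explain
# why the scheduler cannot place the pods there
kubectl get nodes -l k8s.scaleway.com/pool-name=pool-kafka
```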

github-actions[bot] commented 2 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

qualitesys commented 2 years ago

After several tests, I have fixed this issue using this kafka-helm-values.yaml file:

logRetentionHours: 1
logSegmentBytes: "5000000"
# Configuration for the kafka-0 pod
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # look for pool-kafka
            - key: k8s.scaleway.com/pool-name
              operator: In
              values:
                - pool-kafka
# Configuration for the kafka-zookeeper pod
zookeeper:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # look for pool-kafka
              - key: k8s.scaleway.com/pool-name
                operator: In
                values:
                  - pool-kafka

It might be interesting to provide some documentation or examples for this specific need. Thanks in advance.
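The same placement can likely be expressed more compactly with the chart's `nodeSelector` values, since the zookeeper subchart accepts its own copy under the `zookeeper:` key. A sketch, not verified against this exact chart version:

```yaml
logRetentionHours: 1
logSegmentBytes: "5000000"
# nodeSelector for the kafka-0 pod
nodeSelector:
  k8s.scaleway.com/pool-name: pool-kafka
# the zookeeper subchart takes its own nodeSelector
zookeeper:
  nodeSelector:
    k8s.scaleway.com/pool-name: pool-kafka
```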

javsalgar commented 2 years ago

Thanks for letting us know!

qualitesys commented 2 years ago

It might be great to:

Up to the design decisions of this bitnami/kafka Helm chart.

javsalgar commented 2 years ago

Hi,

Thanks for the input! Right now we went through a set of standardizations on all of the Helm charts and, for now, we plan to stick with the current design to ensure that all charts have the same UX. However, I have forwarded this feedback to the team for future standardizations.

github-actions[bot] commented 2 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 2 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

ollyde commented 1 year ago

I also have the same issue, but the steps here didn't resolve it. I simply want to target a Scaleway node pool that has a tag and a name.

The following doesn't work:

apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  parallelism: 100
  template:
    metadata:
      name: test-job
    spec:
      nodeSelector:
        nodePool: k8s.scaleway.com/pool-name: background-processing-pool
      containers:
        - name: test-job
          image: eu.gcr.io/test-dev/test-api:latest
          command: []
          resources:
            requests:
              cpu: '4000m'
              memory: '8Gi'
      volumes:
        - name: google-cloud-key
          secret:
            secretName: 4e97c38cfd241dd
        - name: temp-storage
          emptyDir: {}
      restartPolicy: Never
  backoffLimit: 0
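For what it's worth, the `nodeSelector` in that Job is not valid YAML: `nodeSelector` takes label key/value pairs directly, so the `nodePool:` prefix and the double colon make the manifest unparseable. A minimal corrected sketch, assuming the nodes actually carry the `k8s.scaleway.com/pool-name=background-processing-pool` label:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        # a plain label key/value pair, matched verbatim against node labels
        k8s.scaleway.com/pool-name: background-processing-pool
```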