Closed — qualitesys closed this issue 2 years ago
Hi,
Could you show the rendered yaml for the statefulset? Just to ensure that it has the proper section defined?
My kafka-helm-values.yaml is here:
```yaml
logRetentionHours: 1
logSegmentBytes: "5000000"
nodeSelector:
  k8s.scaleway.com/pool-name: pool-kafka
```
After new tests, the kafka-0 pod is scheduled in the pool-kafka pool, but kafka-zookeeper-0 is not. So the nodeSelector is not applied to the kafka-zookeeper pod as it should be.
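For reference, a minimal sketch of how the same pool pin could be expressed for both pods with plain nodeSelector values, assuming the bundled zookeeper subchart exposes its own top-level `nodeSelector` value (as recent bitnami/zookeeper versions do) — not verified against every chart version:

```yaml
# kafka-helm-values.yaml (sketch)
logRetentionHours: 1
logSegmentBytes: "5000000"
# Applies only to the kafka-* pods
nodeSelector:
  k8s.scaleway.com/pool-name: pool-kafka
# Values under `zookeeper:` are passed through to the zookeeper
# subchart, so it needs its own copy of the selector
zookeeper:
  nodeSelector:
    k8s.scaleway.com/pool-name: pool-kafka
```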
Hi,
I tried rendering the kafka statefulset and I can see that the section is indeed there:
```yaml
# Source: kafka/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
...
      nodeSelector:
        k8s.scaleway.com/pool-name: pool-kafka
```
In this case, I believe that the issue is not in the helm chart but in the Kubernetes cluster configuration. Could you check with Scaleway support in this case?
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
After several tests, I have fixed this issue using this kafka-helm-values.yaml file
```yaml
logRetentionHours: 1
logSegmentBytes: "5000000"
# Configuration for the kafka-0 pod
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # look for pool-kafka
            - key: k8s.scaleway.com/pool-name
              operator: In
              values:
                - pool-kafka
# Configuration for the kafka-zookeeper pod
zookeeper:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # look for pool-kafka
              - key: k8s.scaleway.com/pool-name
                operator: In
                values:
                  - pool-kafka
```
It might be interesting to provide some documentation or examples for this specific need, thanks in advance.
Thanks for letting us know!
It might be great to document this pattern, though that is up to the design decisions of this bitnami/kafka Helm chart.
Hi,
Thanks for the input! Right now we went through a set of standardizations on all of the Helm charts and, for now, we plan to stick with the current design to ensure that all charts have the same UX. However, I forwarded this feedback to the team for future standardizations.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
I also have the same issue, but the steps here didn't resolve it. I simply want to target a node pool from Scaleway that has a tag and a name.
The following doesn't work.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  parallelism: 100
  template:
    metadata:
      name: test-job
    spec:
      nodeSelector:
        nodePool: k8s.scaleway.com/pool-name: background-processing-pool
      containers:
        - name: test-job
          image: eu.gcr.io/test-dev/test-api:latest
          command: []
          resources:
            requests:
              cpu: '4000m'
              memory: '8Gi'
      volumes:
        - name: google-cloud-key
          secret:
            secretName: 4e97c38cfd241dd
        - name: temp-storage
          emptyDir: {}
      restartPolicy: Never
  backoffLimit: 0
```
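A likely culprit in the Job above is the nodeSelector entry itself: it is not valid YAML, and `nodeSelector` takes a flat map of label key to label value, with no intermediate `nodePool:` key. A corrected sketch, reusing the pool name from the snippet above:

```yaml
# Pod spec fragment — nodeSelector maps label keys directly to values
spec:
  template:
    spec:
      nodeSelector:
        k8s.scaleway.com/pool-name: background-processing-pool
```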
Name and Version
bitnami/kafka kafka-16.2.7
What steps will reproduce the bug?
I have created a Kubernetes cluster (Scaleway multi-cloud KOSMOS) with 3 pools:
I want to start the kafka pods in the pool-kafka pool.
I wish to force the nodeSelector according to a custom rule (label k8s.scaleway.com/pool-name = pool-kafka). I have tried to set the helm-values.yaml file with
with no success
Are you using any custom parameters or values?
No response
What is the expected behavior?
The kafka-0 and zookeeper-0 pods should respect the condition k8s.scaleway.com/pool-name = pool-kafka
What do you see instead?
The two pods are started in another pool, which seems to be the default pool-bbd.
Helm version
Kubectl version
Additional information
No response