Hi @PapaAAnthony,
I haven't been able to reproduce your issue on a fresh install.
Using the values.yaml you provided, the following error would show:
$ helm install kafka bitnami/kafka -f values.yaml
Error: INSTALLATION FAILED: execution error at (kafka/templates/NOTES.txt:333:4):
VALUES VALIDATION:
kafka: Zookeeper mode - Controller nodes not supported
Controller replicas have been enabled in Zookeeper mode, set controller.replicaCount to zero or enable migration mode to migrate to Kraft mode
After adding controller.replicaCount: 0, Kafka successfully deployed in Zookeeper mode.
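For reference, a minimal values.yaml for Zookeeper mode that avoids this validation error would look roughly like the following (a sketch based only on the parameters discussed in this thread; everything else keeps the chart defaults):

# Minimal sketch for Zookeeper mode, based on the parameters mentioned in this thread.
kraft:
  enabled: false
zookeeper:
  enabled: true
controller:
  replicaCount: 0   # required in Zookeeper mode, otherwise the NOTES.txt validation fails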
In your case, did you have any PVCs from a previous deployment, or are you performing an upgrade from a previous version?
Please note that the bitnami/kafka image also received changes in major version 24.x.x. If you are using a custom image or an older tag, please make sure to update it.
Hi @migruiz4,
Apologies, I missed out the config below:
controller:
  replicaCount: 0
Additionally, I should add that I was upgrading an established cluster that already contained data in the PVC, so I followed the steps outlined here: https://github.com/bitnami/charts/tree/main/bitnami/kafka#to-2400
Hi @PapaAAnthony,
In that case, could you please provide more details about your upgrade? What version were you upgrading from? What values were you using in that previous version?
The error message indicates that server.properties configures node.id: 1 while meta.properties (in the PVC) has node.id: 0 configured.
The chart logic handles this case and should use the node.id stored in meta.properties. Using your values.yaml, the rendered logic would be:
if [[ -f "/bitnami/kafka/data/meta.properties" ]]; then
    if grep -q "broker.id" /bitnami/kafka/data/meta.properties; then
        ID="$(grep "broker.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
        kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
    else
        ID="$(grep "node.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
        kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
    fi
else
    ID=$((POD_ID + KAFKA_MIN_ID))
    kafka_conf_set "$KAFKA_CONFIG_FILE" "broker.id" "$ID"
fi
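For illustration, on a broker originally deployed in Zookeeper mode the meta.properties file in the data PVC typically looks something like this (the values here are only an example):

# Example contents of /bitnami/kafka/data/meta.properties (illustrative values)
version=0
broker.id=0

Because the file exists and contains a broker.id, the first branch above runs, so the ID persisted on disk is what ends up configured rather than an ID derived from the pod ordinal.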
It would be helpful if you could also share the contents of /opt/bitnami/kafka/config/server.properties and /bitnami/kafka/data/meta.properties inside your pods.
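If it helps, something along these lines should dump both files (the pod name kafka-broker-0 is just an example; substitute one of your actual Kafka pods):

# Pod name is an example; adjust it to one of your Kafka pods.
kubectl exec kafka-broker-0 -- cat /opt/bitnami/kafka/config/server.properties
kubectl exec kafka-broker-0 -- cat /bitnami/kafka/data/meta.properties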
So I was upgrading from 23.0.7; in this case, the value overrides I was using were minimal:
replicaCount: 3
kraft:
  enabled: false
zookeeper:
  enabled: true
I managed to get past the above error by adding the following:
extraEnvVars:
  - name: KAFKA_ENABLE_KRAFT
    value: "false"
My main question is why I now have to explicitly add this environment variable when previously it was part of the chart. Further to that, I want to understand what I now need to add to get the cluster running as it was in 23.0.7, which I upgraded from.
@PapaAAnthony the env variable KAFKA_ENABLE_KRAFT has been removed and the chart has been radically refactored. It no longer relies on container logic; the configuration is now generated as a Helm template and modified in an initContainer. This change was motivated by improving the security of the chart by adding the readOnlyRootFilesystem flag.
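For context, readOnlyRootFilesystem is a standard Kubernetes securityContext field; conceptually the refactor lets the Kafka container run with something like the following (a simplified sketch, not the chart's actual template):

# Simplified sketch: with a read-only root filesystem the container cannot
# generate its configuration at startup, so the config has to be prepared in a
# writable volume by an initContainer instead.
securityContext:
  readOnlyRootFilesystem: true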
This is an important detail if you are using a custom value for image.tag, as the image also included major changes.
Additionally, please share these details so we can help you further:
It would be helpful if you could also share the contents of /opt/bitnami/kafka/config/server.properties and /bitnami/kafka/data/meta.properties inside your pods.
Hi, we are using version 3.5.1 of the bitnami/kafka image.
I can't currently access server.properties/meta.properties as the pods are failing with the error below:
java.lang.IllegalArgumentException: Error creating broker listeners from 'PLAINTEXT://:9092,CONTROLLER://:9093': No security protocol defined for listener PLAINTEXT
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Hi @PapaAAnthony,
I'm sorry for the late response.
That error message indicates that your Kafka server.properties is configured with listeners=PLAINTEXT://:9092,CONTROLLER://:9093. That differs from the chart's default listeners, which are CLIENT, INTERNAL and CONTROLLER:
listeners=CLIENT://:9092,INTERNAL://:9094,CONTROLLER://:9093
advertised.listeners=CLIENT://advertised-address-placeholder:9092,INTERNAL://advertised-address-placeholder:9094
listener.security.protocol.map=CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
Is it possible you have set listeners.overrideListeners in your values.yaml?
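For illustration, an override along these lines (hypothetical, reconstructing how the reported listeners string could have been produced) would hit exactly that error, because the generated listener.security.protocol.map only contains entries for CLIENT, INTERNAL and CONTROLLER, leaving a listener named PLAINTEXT without a security protocol:

# Hypothetical override that would produce the reported listeners string.
# With the default protocol map shown above, the PLAINTEXT listener has no
# security protocol assigned and Kafka fails at startup.
listeners:
  overrideListeners: "PLAINTEXT://:9092,CONTROLLER://:9093"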
It would be helpful if you could also share the following:
- kubectl get cm kafka-controller-configuration -o yaml
- kubectl describe pod kafka-<role>-0
- The values.yaml you did not provide in the issue description.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Name and Version
bitnami/kafka 25.1.12
What architecture are you using?
None
What steps will reproduce the bug?
What is the expected behavior?
I would expect the brokers to start with KRaft disabled, as well as other default settings that allow the cluster to start up.
What do you see instead?
I am seeing the below error:
Additional information
I would've thought that setting kraft.enabled to false would have set KAFKA_ENABLE_KRAFT, preventing the brokers from initializing KRaft. So I'm just wondering if there is any documentation or examples that would allow me to run with the settings I had previously in v23.0.0, which were mostly the defaults that came with the chart.