Closed ETisREAL closed 12 months ago
Hi,
Could you detail the commands you used for importing the client.properties file?
Hi @javsalgar :)
Following the prompts shown after the installation, i.e.:
kubectl run k-kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.5.1-debian-11-r44 --namespace default --command -- sleep infinity
kubectl cp --namespace default /path/to/client.properties k-kafka-client:/tmp/client.properties
kubectl exec --tty -i k-kafka-client --namespace default -- bash
I created the client.properties file manually inside the client container:
echo "security.protocol=SASL_PLAINTEXT" > /tmp/client.properties
echo "sasl.mechanism=SCRAM-SHA-256" >> /tmp/client.properties
echo 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user1" password="kwkDcW66i1";' >> /tmp/client.properties
Please note, the password I specified in sasl.jaas.config comes from the output of the command given in the chart notes: kubectl get secret k-kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1
What am I doing wrong here? I am quite puzzled :/
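As a sanity check on the password extraction, the decode-and-cut pipeline from the chart notes can be exercised locally with a mocked secret value (the password below is a made-up placeholder, not from any real cluster):

```shell
# Mock what the Secret stores: a base64-encoded, comma-separated password list.
# "pw-user1,pw-user2" is a placeholder, not a real secret.
encoded=$(printf 'pw-user1,pw-user2' | base64)

# Same pipeline as in the chart notes: decode, then take the first field.
printf '%s' "$encoded" | base64 -d | cut -d , -f 1
# → pw-user1
```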
same issue
Hi,
I've just tested it with version 25.3.5 and I didn't face any issue. I installed it using the default values and followed the instructions shown during the installation:
$ helm install kafka oci://registry-1.docker.io/bitnamicharts/kafka
NAME: kafka
LAST DEPLOYED: Mon Oct 16 17:07:19 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 25.3.5
APP VERSION: 3.5.1
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
kafka.default.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092
kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092
kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092
The CLIENT listener for Kafka client connections from within your cluster has been configured with the following security settings:
- SASL authentication
To connect a client to your Kafka, you need to create the 'client.properties' configuration file with the content below:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="user1" \
password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
To create a pod that you can use as a Kafka client run the following commands:
kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.5.1-debian-11-r72 --namespace default --command -- sleep infinity
kubectl cp --namespace default /path/to/client.properties kafka-client:/tmp/client.properties
kubectl exec --tty -i kafka-client --namespace default -- bash
PRODUCER:
kafka-console-producer.sh \
--producer.config /tmp/client.properties \
--broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
--topic test
CONSUMER:
kafka-console-consumer.sh \
--consumer.config /tmp/client.properties \
--bootstrap-server kafka.default.svc.cluster.local:9092 \
--topic test \
--from-beginning
Please check the client.properties file; this command will help you create it:
cat <<EOF >/tmp/client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="user1" \
password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
EOF
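One detail worth noting about the heredoc above (general shell behavior, not specific to this chart): with an unquoted EOF delimiter, the $(kubectl ...) substitution runs on the machine where the heredoc is executed, so the file must be created where kubectl is configured and then copied into the pod with kubectl cp. A minimal sketch of that expansion behavior, using a placeholder command in place of the real kubectl secret lookup:

```shell
# With an unquoted delimiter (<<EOF), $(...) expands when the file is written,
# so the file ends up containing the literal result, not the command text.
# printf 'fake-password' stands in for the real kubectl secret lookup.
cat <<EOF >/tmp/demo-client.properties
password="$(printf 'fake-password')"
EOF

cat /tmp/demo-client.properties
# → password="fake-password"
```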
Ok, I know what my issue was: I was using a kafka-topics.sh command, and in that case you have to specify the flag
--command-config /tmp/client.properties. It worked perfectly with normal producing and consuming.
Thank you @fmulero for your time
Thanks for sharing the solution in your case
Sure thing :) Thank you man.
Also, if you come across this issue, remember to use
--bootstrap-server to specify the brokers for topic-related commands
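Putting the two flags together, a topic-management call along the lines discussed above might look like this (a sketch, not from the thread: the --create action and topic name are illustrative, and it assumes the /tmp/client.properties file created earlier):

```shell
# kafka-topics.sh takes the SASL settings via --command-config
# (unlike the console producer/consumer, which use --producer.config /
# --consumer.config). The bootstrap address and topic name "test" are
# taken from the chart notes above.
kafka-topics.sh \
  --create \
  --command-config /tmp/client.properties \
  --bootstrap-server kafka.default.svc.cluster.local:9092 \
  --topic test
```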
Name and Version
bitnami/kafka 25.1.10
What architecture are you using?
None
What steps will reproduce the bug?
Are you using any custom parameters or values?
I am using all the default values. I am not setting a password for the client; I am using the autogenerated one.
What is the expected behavior?
I should be able to produce to a new topic
What do you see instead?
Additional information
No response