
[bitnami/kafka] Following kafka-client setup instructions after deployment fails for SASL setup #19827

Closed ETisREAL closed 12 months ago

ETisREAL commented 1 year ago

Name and Version

bitnami/kafka 25.1.10

What architecture are you using?

None

What steps will reproduce the bug?

  1. Deploy the chart (with SASL_PLAINTEXT as security protocol)
  2. Create a pod with the specified command, importing the client.properties file
  3. Try to run the kafka-console-producer.sh command
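Under default values, the reproduction can be sketched as follows (the release/pod names and image tag mirror the commands quoted later in this thread; `/path/to/client.properties` is a placeholder):

```shell
# Sketch of the reproduction, assuming a default-values install.
# Release name "k" and pod name "k-kafka-client" are illustrative.
helm install k oci://registry-1.docker.io/bitnamicharts/kafka

# Client pod that stays alive so we can exec into it
kubectl run k-kafka-client --restart='Never' \
  --image docker.io/bitnami/kafka:3.5.1-debian-11-r44 \
  --namespace default --command -- sleep infinity

# Copy the SASL client config in, then try to produce
kubectl cp --namespace default /path/to/client.properties k-kafka-client:/tmp/client.properties
kubectl exec --tty -i k-kafka-client --namespace default -- \
  kafka-console-producer.sh \
    --producer.config /tmp/client.properties \
    --broker-list k-kafka-controller-0.k-kafka-controller-headless.default.svc.cluster.local:9092 \
    --topic test
```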

Are you using any custom parameters or values?

I am using all the default values. I am not setting a password for the client; I am using the autogenerated one.

What is the expected behavior?

I should be able to produce to a new topic

What do you see instead?

Client side: 
[2023-10-06 16:18:04,856] ERROR [Producer clientId=console-producer] Connection to node -2 (k-kafka-controller-1.k-kafka-controller-headless.default.svc.cluster.local/10.64.1.167:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2023-10-06 16:18:04,856] WARN [Producer clientId=console-producer] Bootstrap broker k-kafka-controller-1.k-kafka-controller-headless.default.svc.cluster.local:9092 (id: -2 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

Broker side:
[2023-10-06 16:18:02,621] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /10.64.1.111 (channelId=10.64.1.166:9092-10.64.1.111:38968-0) (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)

Additional information

No response

javsalgar commented 1 year ago

Hi,

Could you detail the commands you used for importing the client.properties file?

ETisREAL commented 1 year ago

Hi @javsalgar :)

Following the instructions shown after the installation, i.e.:

kubectl run k-kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.5.1-debian-11-r44 --namespace default --command -- sleep infinity
kubectl cp --namespace default /path/to/client.properties k-kafka-client:/tmp/client.properties
kubectl exec --tty -i k-kafka-client --namespace default -- bash

I created the client.properties file manually inside the client container:

echo "security.protocol=SASL_PLAINTEXT" > /tmp/client.properties
echo "sasl.mechanism=SCRAM-SHA-256" >> /tmp/client.properties
echo 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user1" password="kwkDcW66i1";' >> /tmp/client.properties

Please note: the password I specified in sasl.jaas.config comes from the output of the command given in the chart notes: kubectl get secret k-kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1
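As an aside, the client-passwords field of that secret can hold a comma-separated list (one entry per SASL client user), which is why the `cut -d , -f 1` is needed. A minimal local illustration of the decode step, using a sample value in place of the real secret:

```shell
# Stand-in for the base64-encoded secret data; "kwkDcW66i1,secondUserPass"
# mimics two comma-separated client passwords (sample values, not real secrets).
encoded=$(printf 'kwkDcW66i1,secondUserPass' | base64)

# Same pipeline as in the chart notes: decode, then take the first password
printf '%s' "$encoded" | base64 -d | cut -d , -f 1
# prints: kwkDcW66i1
```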

What am I doing wrong here? I am quite puzzled :/

pyildargit commented 1 year ago

same issue

fmulero commented 1 year ago

Hi,

I've just tested it with version 25.3.5 and I didn't face any issue. I've installed it using default values and I followed the instructions shown during the installation:

$ helm install kafka oci://registry-1.docker.io/bitnamicharts/kafka
NAME: kafka
LAST DEPLOYED: Mon Oct 16 17:07:19 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 25.3.5
APP VERSION: 3.5.1

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.default.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092

The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
    - SASL authentication

To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.5.1-debian-11-r72 --namespace default --command -- sleep infinity
    kubectl cp --namespace default /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl exec --tty -i kafka-client --namespace default -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

Please check the client.properties file, this command will help you to create that file:

cat <<EOF >/tmp/client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
EOF

ETisREAL commented 12 months ago

Ok, I figured out my issue: I was using a kafka-topics.sh command, and in that case you have to specify the flag

--command-config /tmp/client.properties

It worked perfectly with the normal producing and consuming.

Thank you @fmulero for your time

fmulero commented 12 months ago

Thanks for sharing the solution in your case

ETisREAL commented 12 months ago

Sure thing :) Thank you man.

Also, if you come across this issue, remember to use

--bootstrap-server to specify the brokers for topic-related commands
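Putting the resolution together, a topics command under this setup would look like the following (topic name and service DNS are illustrative, taken from the chart notes above):

```shell
# Admin/topic commands read the SASL client settings via --command-config,
# unlike the console producer/consumer, which use --producer.config /
# --consumer.config. Requires a running cluster from the default install.
kafka-topics.sh \
  --command-config /tmp/client.properties \
  --bootstrap-server kafka.default.svc.cluster.local:9092 \
  --create --topic test
```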