bitnami / charts

Bitnami Helm Charts
https://bitnami.com

Invalid Credentials with NodePort #30622

Open rshap91 opened 19 hours ago

rshap91 commented 19 hours ago

Name and Version

bitnami/kafka 31.0.0

What architecture are you using?

arm64

What steps will reproduce the bug?

  1. Running locally on an Apple MacBook Pro (M1) with Kubernetes on Docker Desktop.
    
    $ kubectl version
    Client Version: v1.29.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.29.2

    $ docker version
    Client:
     Version:           27.0.3
     API version:       1.46
     Go version:        go1.21.11
     Git commit:        7d4bcd8
     Built:             Fri Jun 28 23:59:41 2024
     OS/Arch:           darwin/arm64
     Context:           desktop-linux

    Server: Docker Desktop 4.32.0 (157355)
     Engine:
      Version:          27.0.3
      API version:      1.46 (minimum version 1.24)
      Go version:       go1.21.11
      Git commit:       662f78c
      Built:            Sat Jun 29 00:02:44 2024
      OS/Arch:          linux/arm64
      Experimental:     false
     containerd:
      Version:          1.7.18
      GitCommit:        ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
     runc:
      Version:          1.7.18
      GitCommit:        v1.1.13-0-g58aa920
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0


2. Created file local.yaml:

```yaml
externalAccess:
  enabled: true
  controller:
    service:
      type: NodePort
      useHostIPs: true
      nodePorts:
        - 30001
        - 30002
        - 30003
```

3. To reproduce:
    >> helm repo add bitnami https://charts.bitnami.com/bitnami
    >> helm pull --untar bitnami/kafka
    >> cd kafka

    Move the local.yaml file to ./values/local.yaml, then run:

    >> helm install -f values/local.yaml kafka .

The following message was printed to stdout

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.default.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092

The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
    - SASL authentication

To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.9.0-debian-12-r1 --namespace default --command -- sleep infinity
    kubectl cp --namespace default /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl exec --tty -i kafka-client --namespace default -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
...
4. Following the above steps, I run:
>> cat << EOF > client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
EOF

>> kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.9.0-debian-12-r1 --namespace default --command -- sleep infinity

>> kubectl cp --namespace default client.properties kafka-client:/tmp/client.properties

>> kubectl exec --tty -i kafka-client --namespace default -- bash

>> kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

This returns an authentication error.
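One thing worth ruling out before suspecting the broker is the password-extraction pipeline itself. A minimal local sketch of the base64/cut mechanics that NOTES.txt uses, with a made-up secret value (the variable names and sample passwords are illustrative only, not the real secret):

```shell
# Hypothetical secret value: the chart stores client passwords as a
# base64-encoded, comma-separated list (one entry per user).
encoded=$(printf 'pass-for-user1,pass-for-user2' | base64)

# Same pipeline as in NOTES.txt: decode, then take the first field (user1).
password=$(echo "$encoded" | base64 -d | cut -d , -f 1)
echo "$password"   # -> pass-for-user1
```

If the decoded first field matches the password the client sends, the extraction step is not the problem.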

Are you using any custom parameters or values?

externalAccess:
  enabled: true
  controller:
    service:
      type: NodePort
      useHostIPs: true
      nodePorts:
        - 30001
        - 30002
        - 30003

What is the expected behavior?

I expect the consumer to connect to the Kafka cluster successfully.

What do you see instead?

ERROR [Consumer clientId=console-consumer, groupId=console-consumer-91963] Connection to node -1 (kafka.default.svc.cluster.local/10.98.182.229:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2024-11-25 18:47:13,287] WARN [Consumer clientId=console-consumer, groupId=console-consumer-91963] Bootstrap broker kafka.default.svc.cluster.local:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-11-25 18:47:13,288] ERROR Error processing message, terminating consumer process:  (org.apache.kafka.tools.consumer.ConsoleConsumer)
org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256
Processed a total of 0 messages

Additional information

No response

rshap91 commented 18 hours ago

It does seem to work when I use the SASL PLAIN mechanism. It appears the SCRAM password is not being set for the user.

When I update the config entry for the user, it works:

1. Change client.properties to use SASL PLAIN:

    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="user1" \
        password="ABC123";

2. Change the SCRAM password for user1:

    /usr/local/kafka/bin/kafka-configs.sh --bootstrap-server 127.0.0.1:30001 --command-config /usr/local/kafka/config/client.properties --alter --entity-type users --entity-name user1 --add-config 'SCRAM-SHA-256=[password=ABC123]'

3. Change client.properties back to SCRAM:

    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="user1" \
        password="rxyCBCiyr6";

Now authentication succeeds
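This workaround is consistent with how SCRAM differs from PLAIN: the broker can only verify a SCRAM login if a stored credential derived from the password was registered for the user, which is exactly what the `--alter --add-config` step writes. A rough Python sketch of that derivation per RFC 5802 (the function name and parameters are illustrative, not the chart's or Kafka's actual code):

```python
import base64
import hashlib
import hmac
import os


def scram_stored_key(password: str, salt: bytes, iterations: int = 4096) -> bytes:
    """Derive the server-side StoredKey for SCRAM-SHA-256 (RFC 5802)."""
    # SaltedPassword := Hi(password, salt, iterations), i.e. PBKDF2-HMAC-SHA256
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # ClientKey := HMAC(SaltedPassword, "Client Key")
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    # StoredKey := H(ClientKey) -- this is what the broker keeps on record
    return hashlib.sha256(client_key).digest()


# If no such record exists for user1, SCRAM auth fails even though the
# PLAIN password is correct -- matching the behavior observed above.
salt = os.urandom(16)
print(base64.b64encode(scram_stored_key("ABC123", salt)).decode())
```

So the bug report reduces to: the chart appears to create the user's PLAIN credentials but does not register the corresponding SCRAM credential record with the broker.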

carrodher commented 3 hours ago

Thank you for bringing this issue to our attention. We appreciate your involvement! If you're interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.

Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.