bitnami/charts (Bitnami Helm Charts, https://bitnami.com)

[bitnami/kafka] kafka_jaas.conf not being generated #15133

Closed: djetelina closed this issue 1 year ago

djetelina commented 1 year ago

Name and Version

bitnami/kafka

What steps will reproduce the bug?

  1. Install the chart with the values below
  2. It fails because kafka_jaas.conf is not being generated

Are you using any custom parameters or values?

authorizerClassName: kafka.security.authorizer.AclAuthorizer
allowEveryoneIfNoAclFound: false
superUsers:
  - User:admin
auth:
  clientProtocol: sasl
  interBrokerProtocol: sasl_tls
  tls:
    existingSecrets:
      - kafka-jks
    password: kafkaPassword
  sasl:
#    mechanisms: plain,scram-sha-256,scram-sha-512
#    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - brokerUser
      clientPasswords:
        - brokerPassword
      zookeeperUser: "zookeeperUser"
      zookeeperPassword: "zookeeperPassword"

What do you see instead?

kafka_jaas.conf not found

Additional information

This all happens because the JAAS generation in the image entrypoint here is never called: server.properties already exists, so the condition above (if [[ ! -f "$KAFKA_BASE_DIR"/conf/server.properties ]] && [[ ! -f "$KAFKA_MOUNTED_CONF_DIR"/server.properties ]]; then) is not satisfied.

Perhaps this is a "bug" in the Docker image, but it's all quite complex, so this is as far as I was able to get in my investigation.
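
To illustrate the failure mode, here is a minimal, self-contained sketch of that guard (the directory values below are only stand-ins for what the image's variables point at, not necessarily its real defaults):

    # Stand-ins for the image's variables; the real values come from the Bitnami image
    KAFKA_BASE_DIR=/opt/bitnami/kafka
    KAFKA_MOUNTED_CONF_DIR=/bitnami/kafka/config

    # JAAS generation only happens inside the first branch, so a pre-existing
    # server.properties (e.g. supplied via the chart) skips it entirely.
    if [[ ! -f "$KAFKA_BASE_DIR"/conf/server.properties ]] && [[ ! -f "$KAFKA_MOUNTED_CONF_DIR"/server.properties ]]; then
        echo "fresh setup: server.properties is rendered and kafka_jaas.conf is generated"
    else
        echo "server.properties already exists: setup (including JAAS generation) is skipped"
    fi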

Mauraza commented 1 year ago

Hi @djetelina,

When you install the chart, does the following information appear?

You need to configure your Kafka client to access using SASL authentication. To do so, you need to create the 'kafka_jaas.conf' and 'client.properties' configuration files with the content below:

    - kafka_jaas.conf:

The versions I'm using are chart version 21.0.1 and app version 3.4.0.

djetelina commented 1 year ago

As we're running Helm from Terraform, this output ends up somewhere in the void, but that would explain a lot 🤦
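
For the record, the rendered NOTES can also be pulled back out of the release afterwards with Helm itself (the release name and namespace below are placeholders):

    # Print the post-install NOTES for an already-installed release
    helm get notes kafka --namespace default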

Mauraza commented 1 year ago

The full message is:

You need to configure your Kafka client to access using SASL authentication. To do so, you need to create the 'kafka_jaas.conf' and 'client.properties' configuration files with the content below:

    - kafka_jaas.conf:

KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="brokerUser"
password="$(kubectl get secret kafka-jaas --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
};

    - client.properties:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.4.0-debian-11-r2 --namespace default --command -- sleep infinity
    kubectl cp --namespace default /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl cp --namespace default /path/to/kafka_jaas.conf kafka-client:/tmp/kafka_jaas.conf
    kubectl exec --tty -i kafka-client --namespace default -- bash
    export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka_jaas.conf"

    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --broker-list kafka-0.kafka-headless.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

Hy3n4 commented 1 year ago

Hi,

but this applies to the client configuration, right? The problem is that when we enable SASL, the Kafka cluster won't come up, because it has no kafka_jaas.conf created by default. In my opinion, the message shown at the end of the Helm chart installation is not relevant at this point, since it describes how to connect to an existing cluster.

So here is my question: should I create this file in a Secret or ConfigMap and mount it as an extraVolume? Or should it be generated automatically and something is not working as expected? 🤔
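
If the former, what I have in mind is roughly this (only a sketch; it assumes the chart exposes extraVolumes/extraVolumeMounts and that a hypothetical ConfigMap named kafka-jaas-conf holding the file already exists):

extraVolumes:
  - name: jaas-conf
    configMap:
      name: kafka-jaas-conf      # hypothetical ConfigMap containing kafka_jaas.conf
extraVolumeMounts:
  - name: jaas-conf
    mountPath: /opt/bitnami/kafka/config/kafka_jaas.conf
    subPath: kafka_jaas.conf     # mount only the single file into the config dir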

Thanks

Mauraza commented 1 year ago

Hi @Hy3n4,

Could you let us know what happens when you run describe on the pod?

kubectl describe pod kafka-0

Where does the error that kafka_jaas.conf doesn't exist appear?

Hy3n4 commented 1 year ago

Hi @Mauraza,

It is logged when the Pod is starting up.

kafka 07:19:19.81 
kafka 07:19:19.82 Welcome to the Bitnami kafka container
kafka 07:19:19.82 Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 07:19:19.82 Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 07:19:19.83 
kafka 07:19:19.83 INFO  ==> ** Starting Kafka setup **
kafka 07:19:19.89 DEBUG ==> Validating settings in KAFKA_* env vars...
kafka 07:19:19.91 WARN  ==> You set the environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a production environment.
kafka 07:19:19.93 INFO  ==> Initializing Kafka...
kafka 07:19:19.94 INFO  ==> ** Kafka setup finished! **

kafka 07:19:19.97 INFO  ==> ** Starting Kafka **
[2023-02-28 07:19:21,139] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2023-02-28 07:19:21,687] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2023-02-28 07:19:21,812] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context [java.security.auth.login.config=/opt/bitnami/kafka/config/kafka_jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
    at org.apache.kafka.common.security.JaasUtils.isZkSaslEnabled(JaasUtils.java:67)
    at kafka.server.KafkaServer$.zkClientConfigFromKafkaConfig(KafkaServer.scala:80)
    at kafka.server.KafkaServer.<init>(KafkaServer.scala:150)
    at kafka.Kafka$.buildServer(Kafka.scala:73)
    at kafka.Kafka$.main(Kafka.scala:87)
    at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.SecurityException: java.io.IOException: HERE >>> /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory) <<< HERE
    at java.base/sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:137)
    at java.base/sun.security.provider.ConfigFile.<init>(ConfigFile.java:102)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at java.base/java.lang.Class.newInstance(Class.java:584)
    at java.base/javax.security.auth.login.Configuration$2.run(Configuration.java:255)
    at java.base/javax.security.auth.login.Configuration$2.run(Configuration.java:246)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.login.Configuration.getConfiguration(Configuration.java:245)
    at org.apache.kafka.common.security.JaasUtils.isZkSaslEnabled(JaasUtils.java:63)
    ... 5 more
Caused by: java.io.IOException: AND HERE >>> /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory) <<< HERE
    at java.base/sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.java:665)
    at java.base/sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:262)
    at java.base/sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:135)
    ... 16 more

Hence my question: should this file be generated automatically based on the values in values.yaml?

Here I attach the ZooKeeper values for context:

zookeeper:
  enabled: true
  replicaCount: 3
  auth:
    client:
      enabled: true
      clientUser: "zookeeperUser"
      clientPassword: "zookeeperPassword"
      serverUsers: "zookeeperUser"
      serverPassswords: "zookeeperPassword"

Thanks for the support

Mauraza commented 1 year ago

Hi @djetelina,

I'm confused about the values. Are these your values? ⬇️

authorizerClassName: kafka.security.authorizer.AclAuthorizer
allowEveryoneIfNoAclFound: false
superUsers:
  - User:admin
auth:
  clientProtocol: sasl
  interBrokerProtocol: sasl_tls
  tls:
    existingSecrets:
      - kafka-jks
    password: kafkaPassword
  sasl:
#    mechanisms: plain,scram-sha-256,scram-sha-512
#    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - brokerUser
      clientPasswords:
        - brokerPassword
      zookeeperUser: "zookeeperUser"
      zookeeperPassword: "zookeeperPassword"
zookeeper:
  enabled: true
  replicaCount: 3
  auth:
    client:
      enabled: true
      clientUser: "zookeeperUser"
      clientPassword: "zookeeperPassword"
      serverUsers: "zookeeperUser"
      serverPassswords: "zookeeperPassword"

Hy3n4 commented 1 year ago

That is correct with one more addition:

config: |-
  zookeeper.connect=kafka-cluster-zookeeper
  authorizer.class.name=kafka.security.authorizer.AclAuthorizer
authorizerClassName: kafka.security.authorizer.AclAuthorizer
allowEveryoneIfNoAclFound: false
superUsers:
  - User:admin
auth:
  clientProtocol: sasl
  interBrokerProtocol: sasl_tls
  tls:
    existingSecrets:
      - kafka-jks
    password: kafkaPassword
  sasl:
#    mechanisms: plain,scram-sha-256,scram-sha-512
#    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - brokerUser
      clientPasswords:
        - brokerPassword
      zookeeperUser: "zookeeperUser"
      zookeeperPassword: "zookeeperPassword"
zookeeper:
  enabled: true
  replicaCount: 3
  auth:
    client:
      enabled: true
      clientUser: "zookeeperUser"
      clientPassword: "zookeeperPassword"
      serverUsers: "zookeeperUser"
      serverPassswords: "zookeeperPassword"

If you are wondering why authorizerClassName is defined twice, it's because it didn't work without the one defined in config:. What also confuses me is zookeeper.connect in config:, because if I don't specify it, the broker won't connect, complaining that these values are missing from the config file.

We also took a deeper look into the Kafka image, and there is a script, rootfs/opt/bitnami/scripts/libkafka.sh, with a function kafka_generate_jaas_authentication_file() that should run if no server.properties is defined. Maybe this is the problem, because I have to specify zookeeper.connect in order to successfully start the Pod. If I'm not mistaken, this file is created when I specify the connect: block in the chart values, right?
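
One quick way to confirm what the entrypoint actually rendered is to exec into a broker pod and inspect the config directory (pod name and namespace as used earlier in this thread):

    # List the rendered configuration files inside the broker container
    kubectl exec -it kafka-0 --namespace default -- ls -l /opt/bitnami/kafka/config/
    # Check whether zookeeper.connect and the SASL settings made it into server.properties
    kubectl exec -it kafka-0 --namespace default -- cat /opt/bitnami/kafka/config/server.properties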

Mauraza commented 1 year ago

Hi @djetelina,

I think your case is related to https://github.com/bitnami/containers/issues/23077; we already have an internal task to investigate it.

github-actions[bot] commented 1 year ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

AceRogue commented 1 year ago

I have the same problem. Has this been fixed?

Mauraza commented 1 year ago

Hi @AceRogue,

We have a task about this. We will update this issue when we have more information.

Mauraza commented 1 year ago

Hi @djetelina and @AceRogue

The issue in the container has been solved. Could you check whether your error is fixed too?

github-actions[bot] commented 1 year ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

rafariossaa commented 1 year ago

Hi, did you have a chance to check it after the fix in the container?

djetelina commented 1 year ago

Hey, we ended up not using the Bitnami Kafka chart. Sorry, I can't report back on whether it's fixed.

Aman774 commented 1 year ago

I'm getting the message below after installing Kafka using the Helm chart.

The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
    - SASL authentication

To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-dev-user-passwords --namespace kafka-poc -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";

Can anyone guide me on where and how to create this client.properties file?

Thanks in advance.
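
For reference, the file only needs to be created locally with the printed settings and then copied into a client pod, following the same pattern as the NOTES quoted earlier in this thread (a sketch; the secret, user and namespace names come from the message above, and the client pod name is just an example):

    # Resolve the SASL password from the secret referenced in the NOTES output
    PASSWORD="$(kubectl get secret kafka-dev-user-passwords --namespace kafka-poc \
      -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)"

    # Write client.properties locally with the settings from the NOTES output
    {
      echo 'security.protocol=SASL_PLAINTEXT'
      echo 'sasl.mechanism=SCRAM-SHA-256'
      echo "sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=\"user1\" password=\"${PASSWORD}\";"
    } > client.properties

    # Copy it into a client pod, as in the earlier NOTES example
    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.4.0-debian-11-r2 --namespace kafka-poc --command -- sleep infinity
    kubectl cp --namespace kafka-poc ./client.properties kafka-client:/tmp/client.properties

    # From here, run the producer/consumer commands shown in the earlier NOTES,
    # passing --producer.config /tmp/client.properties (or --consumer.config).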