Client can't connect to Kafka broker with SASL and TLS #27844

Closed. Trourest186 closed this issue 2 months ago.

Trourest186 commented 3 months ago

Name and Version

bitnami/kafka 23.0.7

What architecture are you using?

amd64

What steps will reproduce the bug?

Recently, I configured Kafka security with the values below; I want to set up SASL and TLS (JKS format). All broker and ZooKeeper pods are running. I installed the chart and followed its output, but the brokers keep logging the errors shown below and I can't fix it. I hope someone can help!

Kafka broker logs

[2024-07-08 17:18:41,925] INFO [Controller id=1, targetBrokerId=0] Client requested connection close from node 0 (org.apache.kafka.clients.NetworkClient)
[2024-07-08 17:18:41,947] INFO [Controller id=1, targetBrokerId=1] Failed authentication with kafka-1.kafka-headless.test.svc.cluster.local/10.244.103.228 (channelId=1) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-07-08 17:18:41,947] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-07-08 17:18:41,947] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka-1.kafka-headless.test.svc.cluster.local/10.244.103.228:9094) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-07-08 17:18:41,947] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Failed authentication with /10.244.103.228 (channelId=10.244.103.228:9094-10.244.103.228:34486-895) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-07-08 17:18:41,947] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka-1.kafka-headless.test.svc.cluster.local:9094 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: No name matching kafka-1.kafka-headless.test.svc.cluster.local found
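
For reference, the last line is a hostname-verification failure: the certificate presented on the SASL_TLS listener (port 9094) contains no name matching kafka-1.kafka-headless.test.svc.cluster.local. A quick way to check which names the served certificate actually carries is the sketch below, run from any pod in the cluster that has openssl; host, port and namespace are taken from the log above.

openssl s_client -connect kafka-1.kafka-headless.test.svc.cluster.local:9094 \
  -servername kafka-1.kafka-headless.test.svc.cluster.local </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'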

Are you using any custom parameters or values?

replicaCount: 2

authorizerClassName: kafka.security.authorizer.AclAuthorizer

resources:
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 1000m
    memory: 2000Mi

auth:
  clientProtocol: sasl # Based on listeners.client.protocol=SASL
  externalClientProtocol: "" # Optional, typically left blank
  interBrokerProtocol: sasl_tls # Based on listeners.interbroker.protocol=SASL_TLS
  controllerProtocol: plaintext # Leave as plaintext if not using Kraft mode

  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512 
    interBrokerMechanism: plain 
    jaas:
      clientUsers: 
        - brokerUser # From sasl.client.users[0]
      clientPasswords:
        - brokerPassword # From sasl.client.passwords[0]
      interBrokerUser: "" # Not explicitly specified in your config, can be left blank
      interBrokerPassword: ""
      zookeeperUser: zookeeperUser # From sasl.zookeeper.user
      zookeeperPassword: zookeeperPassword # From sasl.zookeeper.password
      existingSecret: "" # Not directly used, but could be for combined creds

  tls:
    type: jks # Based on tls.existingSecret
    pemChainIncluded: false # Usually false for JKS
    existingSecrets: 
      - kafka-jks-0 # From tls.existingSecret
      - kafka-jks-1
    autoGenerated: false
    password: jksPassword # From tls.password
    existingSecret: "" # For password, but you're using direct `password`
    jksTruststoreSecret: ""
    jksKeystoreSAN: ""
    jksTruststore: ""
    endpointIdentificationAlgorithm: https

  zookeeper:
    tls:
      enabled: false # Based on zookeeper.auth.enabled=true
      type: jks
      verifyHostname: true
      existingSecret: "" # Not directly specified, would need a separate secret for ZK TLS
      existingSecretKeystoreKey: ""
      existingSecretTruststoreKey: ""
      passwordsSecret: ""
      passwordsSecretKeystoreKey: ""
      passwordsSecretTruststoreKey: ""

externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true
  service:
    type: NodePort
    port: 9094
    loadBalancerIPs: []
    loadBalancerSourceRanges: []
    nodePorts:
      - 31444
      - 31433
    useHostIPs: false
    domain: 
    annotations: {}

kraft:
  enabled: false

zookeeper:
  enabled: true
  replicaCount: 1
  persistence:
    enabled: true
    size: 5Gi

tolerations:
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"

nodeSelector:
  "kubernetes.io/hostname": 

rbac:
  create: true
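
The "No name matching kafka-1.kafka-headless.test.svc.cluster.local found" error above means the certificates inside the tls.existingSecrets keystores do not cover the headless service DNS names, so hostname verification (endpointIdentificationAlgorithm: https) rejects the inter-broker connection. Below is a minimal sketch of how such per-broker JKS secrets could be built so the SANs match. The secret names (kafka-jks-0, kafka-jks-1), the password (jksPassword) and the namespace (test) are taken from the values and log above; the key names inside the secrets (kafka.keystore.jks / kafka.truststore.jks) are an assumption, so check the README of your chart version for the exact names it expects.

# 1) One self-signed key pair per broker, with SANs for the headless pod name and the service name.
for i in 0 1; do
  keytool -genkeypair -alias kafka-$i -keyalg RSA -keysize 2048 -validity 365 -storetype JKS \
    -dname "CN=kafka-$i.kafka-headless.test.svc.cluster.local" \
    -ext "SAN=DNS:kafka-$i.kafka-headless.test.svc.cluster.local,DNS:kafka.test.svc.cluster.local,DNS:localhost" \
    -keystore kafka-$i.keystore.jks -storepass jksPassword -keypass jksPassword
  keytool -exportcert -alias kafka-$i -keystore kafka-$i.keystore.jks -storepass jksPassword \
    -file kafka-$i.crt
done

# 2) A shared truststore trusting both broker certificates.
for i in 0 1; do
  keytool -importcert -alias kafka-$i -file kafka-$i.crt -storetype JKS \
    -keystore kafka.truststore.jks -storepass jksPassword -noprompt
done

# 3) One secret per broker, matching tls.existingSecrets above.
for i in 0 1; do
  kubectl create secret generic kafka-jks-$i --namespace test \
    --from-file=kafka.keystore.jks=kafka-$i.keystore.jks \
    --from-file=kafka.truststore.jks=kafka.truststore.jks
done

Note that if external clients connect through the NodePort addresses from externalAccess, those host names or IPs would also have to appear in the certificates, or hostname verification would need to be relaxed for that listener.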

What is the expected behavior?

The client can connect to all brokers and produce and consume messages on topics.
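
Once the brokers are healthy, a minimal in-cluster smoke test against the internal SASL listener could look like the sketch below. Assumptions: the release is named kafka in namespace test (matching the log above), 9092 is the chart's default client port, smoke-test is a placeholder topic, and brokerUser/brokerPassword come from sasl.jaas in the values.

cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="brokerUser" \
  password="brokerPassword";
EOF

kafka-console-producer.sh \
  --bootstrap-server kafka.test.svc.cluster.local:9092 \
  --topic smoke-test \
  --producer.config /tmp/client.properties

With auth.clientProtocol set to sasl, the internal client listener uses SASL over plaintext, so no truststore is needed for this test; only the inter-broker listener uses TLS in the configuration above.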

What do you see instead?

The client cannot connect; the brokers log the SSL handshake failures shown above.

carrodher commented 3 months ago

The issue may not be directly related to the Bitnami container image/Helm chart, but rather to how the application is being utilized, configured in your specific environment, or tied to a specific scenario that is not easy to reproduce on our side.

If you think that's not the case and are interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.

Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.

If you have any questions about the application, customizing its content, or technology and infrastructure usage, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.

With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.

github-actions[bot] commented 2 months ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 2 months ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.