
[bitnami/kafka] Can not access service using TLS #21403

Closed by poliphilson 10 months ago

poliphilson commented 10 months ago

Name and Version

bitnami/kafka:26.4.2

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. Create the keystores and the truststore using the following script:

```bash
#!/bin/bash

set -e

openssl req -new -x509 -subj "/C=KR/ST=seoul/L=seoul/O=company/OU=unit/CN=ca" -keyout ca-key -out ca-cert -days 3650

keytool -noprompt -keystore ./kafka.truststore.jks -alias ca -import -file ca-cert -storepass password
rm -f ca-cert

keytool -keystore ./kafka-broker-0.keystore.jks -alias broker-0 -dname "CN=test-kafka-broker-0.test-kafka-broker-headless.work.svc.cluster.local,OU=unit,O=company,L=seoul,S=seoul,C=KR" -validity 3650 -genkey -keyalg RSA -storepass password
keytool -keystore ./kafka-broker-1.keystore.jks -alias broker-1 -dname "CN=test-kafka-broker-1.test-kafka-broker-headless.work.svc.cluster.local,OU=unit,O=company,L=seoul,S=seoul,C=KR" -validity 3650 -genkey -keyalg RSA -storepass password
keytool -keystore ./kafka-broker-2.keystore.jks -alias broker-2 -dname "CN=test-kafka-broker-2.test-kafka-broker-headless.work.svc.cluster.local,OU=unit,O=company,L=seoul,S=seoul,C=KR" -validity 3650 -genkey -keyalg RSA -storepass password

keytool -keystore ./kafka-controller-0.keystore.jks -alias controller-0 -dname "CN=test-kafka-controller-0.test-kafka-broker-headless.work.svc.cluster.local,OU=unit,O=company,L=seoul,S=seoul,C=KR" -validity 3650 -genkey -keyalg RSA -storepass password
keytool -keystore ./kafka-controller-1.keystore.jks -alias controller-1 -dname "CN=test-kafka-controller-1.test-kafka-broker-headless.work.svc.cluster.local,OU=unit,O=company,L=seoul,S=seoul,C=KR" -validity 3650 -genkey -keyalg RSA -storepass password
keytool -keystore ./kafka-controller-2.keystore.jks -alias controller-2 -dname "CN=test-kafka-controller-2.test-kafka-broker-headless.work.svc.cluster.local,OU=unit,O=company,L=seoul,S=seoul,C=KR" -validity 3650 -genkey -keyalg RSA -storepass password

keytool -keystore ./kafka-broker-0.keystore.jks -alias broker-0 -certreq -file cert-file.broker-0 -storepass password
keytool -keystore ./kafka-broker-1.keystore.jks -alias broker-1 -certreq -file cert-file.broker-1 -storepass password
keytool -keystore ./kafka-broker-2.keystore.jks -alias broker-2 -certreq -file cert-file.broker-2 -storepass password
keytool -keystore ./kafka-controller-0.keystore.jks -alias controller-0 -certreq -file cert-file.controller-0 -storepass password
keytool -keystore ./kafka-controller-1.keystore.jks -alias controller-1 -certreq -file cert-file.controller-1 -storepass password
keytool -keystore ./kafka-controller-2.keystore.jks -alias controller-2 -certreq -file cert-file.controller-2 -storepass password

keytool -noprompt -keystore ./kafka.truststore.jks -export -alias ca -rfc -file ca-cert -storepass password

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.broker-0 -out cert-signed.broker-0 -days 3650 -CAcreateserial
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.broker-1 -out cert-signed.broker-1 -days 3650 -CAcreateserial
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.broker-2 -out cert-signed.broker-2 -days 3650 -CAcreateserial
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.controller-0 -out cert-signed.controller-0 -days 3650 -CAcreateserial
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.controller-1 -out cert-signed.controller-1 -days 3650 -CAcreateserial
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.controller-2 -out cert-signed.controller-2 -days 3650 -CAcreateserial

keytool -noprompt -keystore ./kafka-broker-0.keystore.jks -alias ca -import -file ca-cert -storepass password
keytool -noprompt -keystore ./kafka-broker-1.keystore.jks -alias ca -import -file ca-cert -storepass password
keytool -noprompt -keystore ./kafka-broker-2.keystore.jks -alias ca -import -file ca-cert -storepass password
keytool -noprompt -keystore ./kafka-controller-0.keystore.jks -alias ca -import -file ca-cert -storepass password
keytool -noprompt -keystore ./kafka-controller-1.keystore.jks -alias ca -import -file ca-cert -storepass password
keytool -noprompt -keystore ./kafka-controller-2.keystore.jks -alias ca -import -file ca-cert -storepass password

keytool -noprompt -keystore ./kafka-broker-0.keystore.jks -alias broker-0 -import -file cert-signed.broker-0 -storepass password
keytool -noprompt -keystore ./kafka-broker-1.keystore.jks -alias broker-1 -import -file cert-signed.broker-1 -storepass password
keytool -noprompt -keystore ./kafka-broker-2.keystore.jks -alias broker-2 -import -file cert-signed.broker-2 -storepass password
keytool -noprompt -keystore ./kafka-controller-0.keystore.jks -alias controller-0 -import -file cert-signed.controller-0 -storepass password
keytool -noprompt -keystore ./kafka-controller-1.keystore.jks -alias controller-1 -import -file cert-signed.controller-1 -storepass password
keytool -noprompt -keystore ./kafka-controller-2.keystore.jks -alias controller-2 -import -file cert-signed.controller-2 -storepass password
```
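To sanity-check the stores before uploading them, you can list their entries (an optional verification step, not part of the original script):

```bash
# Optional check: after the script runs, each keystore should show two
# entries: the imported CA (trustedCertEntry) and the signed key pair
# (PrivateKeyEntry).
for f in kafka-broker-0 kafka-broker-1 kafka-broker-2 \
         kafka-controller-0 kafka-controller-1 kafka-controller-2; do
  keytool -list -keystore ./${f}.keystore.jks -storepass password
done
keytool -list -keystore ./kafka.truststore.jks -storepass password
```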

2. Create the secret:

```bash
kubectl create secret generic kafka-jks -n work \
  --from-file=kafka-broker-0.keystore.jks=./kafka-broker-0.keystore.jks \
  --from-file=kafka-broker-1.keystore.jks=./kafka-broker-1.keystore.jks \
  --from-file=kafka-broker-2.keystore.jks=./kafka-broker-2.keystore.jks \
  --from-file=kafka.truststore.jks=./kafka.truststore.jks \
  --from-file=kafka-controller-0.keystore.jks=./kafka-controller-0.keystore.jks \
  --from-file=kafka-controller-1.keystore.jks=./kafka-controller-1.keystore.jks \
  --from-file=kafka-controller-2.keystore.jks=./kafka-controller-2.keystore.jks
```
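You can confirm that all seven files landed in the secret (an optional check):

```bash
# Optional: describe the secret to list its keys and their sizes.
kubectl describe secret kafka-jks -n work
```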

3. Deploy Kafka:

```bash
helm install -n work -f values.yaml test ./kafka
```
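Before moving on, you may want to wait for the pods to become ready (an optional step; the StatefulSet names below assume the chart's default naming for a release called `test`):

```bash
# Optional: block until all broker and controller replicas are ready.
kubectl rollout status statefulset/test-kafka-broker -n work
kubectl rollout status statefulset/test-kafka-controller -n work
```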

4. Create client.properties (the keystore entries are needed because the client listener is configured with sslClientAuth: "required", i.e. mutual TLS):

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user1" \
  password="I_am_user1";
ssl.truststore.type=JKS
ssl.truststore.location=/tmp/kafka.truststore.jks
ssl.truststore.password=password
ssl.keystore.type=JKS
ssl.keystore.location=/tmp/kafka.keystore.jks
ssl.keystore.password=password
```

5. Deploy a Kafka client pod:

```bash
kubectl run test-kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.6.0-debian-11-r2 --namespace work --command -- sleep infinity
```

6. Copy the files into the client pod:

```bash
kubectl cp --namespace work ./client.properties test-kafka-client:/tmp/client.properties
kubectl cp --namespace work ./kafka.truststore.jks test-kafka-client:/tmp/kafka.truststore.jks
kubectl cp --namespace work ./kafka-broker-0.keystore.jks test-kafka-client:/tmp/kafka.keystore.jks
```

7. Open a shell in the client pod:

```bash
kubectl exec --tty -i test-kafka-client --namespace work -- bash
```

8. Run the command:

```bash
cd /tmp
kafka-topics.sh --bootstrap-server test-kafka.work.svc.cluster.local:9092 --list --command-config ./client.properties
```


Are you using any custom parameters or values?

```yaml
listeners:
  client:
    containerPort: 9092
    protocol: SASL_SSL
    name: CLIENT
    sslClientAuth: "required"

  external:
    containerPort: 9095
    protocol: SASL_SSL
    name: EXTERNAL
    sslClientAuth: ""

  interbroker:
    containerPort: 9093
    protocol: SASL_SSL
    name: INTERNAL
    sslClientAuth: ""

  controller:
    containerPort: 9094
    protocol: SASL_PLAINTEXT
    name: CONTROLLER
    sslClientAuth: ""

sasl:
  interBrokerMechanism: PLAIN
  controllerMechanism: PLAIN
  interbroker:
    user: broker
    password: "I_am_br0ker"
  controller:
    user: controller
    password: "I_am_c0ntr011er"
  client:
    users:
    - user1
    passwords: "I_am_user1"

tls:
  type: JKS
  existingSecret: "kafka-jks"
  keystorePassword: "password"
  truststorePassword: "password"
  jksKeystoreKey: kafka.keystore.jks
  jksTruststoreKey: kafka.truststore.jks

controller:
  replicaCount: 3
  controllerOnly: true
  resources:
    limits: {}
    requests: {}
  persistence:
    enabled: false

broker:
  replicaCount: 3
  resources:
    limits: {}
    requests: {}
  persistence:
    enabled: false

service:
  type: ClusterIP
  ports:
    client: 9092
    controller: 9094
    interbroker: 9093
    external: 9095

kraft:
  enabled: true

extraConfig: |
  auto.create.topics.enable: false
```

What is the expected behavior?

To fetch the list of topics.

What do you see instead?

```
[2023-12-05 09:18:12,681] ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (test-kafka.work.svc.cluster.local/10.100.174.69:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2023-12-05 09:18:12,682] WARN [AdminClient clientId=adminclient-1] Metadata update failed due to authentication error (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: No name matching test-kafka.work.svc.cluster.local found
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:378)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:316)
    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1357)
    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
    at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396)
    at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480)
    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1277)
    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1264)
    at java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1209)
    at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:435)
    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:523)
    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:373)
    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:293)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:571)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1381)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1312)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.security.cert.CertificateException: No name matching test-kafka.work.svc.cluster.local found
    at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
    at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:458)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:418)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:292)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:144)
    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1335)
    ... 19 more
Error while executing topic command : SSL handshake failed
```

Additional information

However, each of the following commands, which address a broker pod by its own headless-service name, works fine:

```bash
kafka-topics.sh --bootstrap-server test-kafka-broker-0.test-kafka-broker-headless.work.svc.cluster.local:9092 --list --command-config ./client.properties
kafka-topics.sh --bootstrap-server test-kafka-broker-1.test-kafka-broker-headless.work.svc.cluster.local:9092 --list --command-config ./client.properties
kafka-topics.sh --bootstrap-server test-kafka-broker-2.test-kafka-broker-headless.work.svc.cluster.local:9092 --list --command-config ./client.properties
```

rafariossaa commented 10 months ago

Hi, in step 1 you are creating certs whose CNs are the per-pod names (test-kafka-broker-X, test-kafka-controller-X), but in step 8 you are connecting through the service test-kafka.work.svc.cluster.local. That service load-balances across the broker pods, so the client expects a certificate valid for test-kafka.work.svc.cluster.local and instead receives one issued for test-kafka-broker-X. You would need to create a certificate covering that service name and use it on all the broker nodes.
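One way to see the mismatch directly (a diagnostic sketch, not from the original thread; the `-ext` flag assumes OpenSSL 1.1.1 or later):

```bash
# Hypothetical check from inside the cluster: print the subject and SANs of
# the certificate presented on the client listener. Hostname verification
# succeeds only if they cover test-kafka.work.svc.cluster.local.
openssl s_client -connect test-kafka.work.svc.cluster.local:9092 \
  -servername test-kafka.work.svc.cluster.local </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```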

poliphilson commented 10 months ago

> Hi, in step 1 you are creating certs whose CNs are the per-pod names (test-kafka-broker-X, test-kafka-controller-X), but in step 8 you are connecting through the service test-kafka.work.svc.cluster.local. That service load-balances across the broker pods, so the client expects a certificate valid for test-kafka.work.svc.cluster.local and instead receives one issued for test-kafka-broker-X. You would need to create a certificate covering that service name and use it on all the broker nodes.

Hi, does this mean I need to generate a test-kafka.work.svc.cluster.local certificate instead of a test-kafka-controller-X certificate? Like this:

```bash
keytool -keystore ./kafka-controller-0.keystore.jks \
        -alias controller-0 \
        -dname "CN=test-kafka.work.svc.cluster.local,OU=unit,O=company,L=seoul,S=seoul,C=KR" \
        -validity 3650 \
        -genkey \
        -keyalg RSA \
        -storepass password
```

poliphilson commented 10 months ago

Okay, this makes it work properly.

I just edited the signing step of the script in step 1:

```bash
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.broker-0 -out cert-signed.broker-0 -days 3650 -CAcreateserial -extensions v3_req -extfile broker-0.conf
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.broker-1 -out cert-signed.broker-1 -days 3650 -CAcreateserial -extensions v3_req -extfile broker-1.conf
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file.broker-2 -out cert-signed.broker-2 -days 3650 -CAcreateserial -extensions v3_req -extfile broker-2.conf
```

Before running the script, create the three extension files shown below.

```
# broker-0.conf
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = test-kafka.work.svc.cluster.local
DNS.2 = test-kafka-broker-0.test-kafka-broker-headless.work.svc.cluster.local
```

```
# broker-1.conf
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = test-kafka.work.svc.cluster.local
DNS.2 = test-kafka-broker-1.test-kafka-broker-headless.work.svc.cluster.local
```

```
# broker-2.conf
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = test-kafka.work.svc.cluster.local
DNS.2 = test-kafka-broker-2.test-kafka-broker-headless.work.svc.cluster.local
```
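To confirm the SANs actually made it into the signed certificates, you can inspect them (an optional check, not from the thread; `-ext` assumes OpenSSL 1.1.1+):

```bash
# Optional: print the Subject Alternative Names of each signed broker cert.
# Both the service name and the per-pod headless name should appear.
for i in 0 1 2; do
  openssl x509 -in cert-signed.broker-${i} -noout -ext subjectAltName
done
```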
dabai-yaoliang commented 4 months ago
```
kafka 03:20:49.88 INFO  ==> Formatting storage directories to add metadata...
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: If process.roles contains just the 'broker' role, the node id 0 must not be included in the set of voters controller.quorum.voters=Set(0, 1, 2)
        at scala.Predef$.require(Predef.scala:281)
        at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:2379)
        at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:2290)
        at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1638)
        at kafka.tools.StorageTool$.$anonfun$main$1(StorageTool.scala:52)
        at scala.Option.flatMap(Option.scala:271)
        at kafka.tools.StorageTool$.main(StorageTool.scala:52)
        at kafka.tools.StorageTool.main(StorageTool.scala)
```

Hello, I followed your documentation and configured SSL, and I also enabled broker nodes, but now a broker node fails to start with the error above. Could you please tell me what the problem is?
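Not part of the original thread, but the error text itself points at the cause: with controller.controllerOnly=true, broker-only nodes must not reuse the node IDs listed in controller.quorum.voters (0, 1 and 2 here). A hedged sketch of the usual remedy, assuming your chart version exposes broker.minId to offset broker node IDs (verify with `helm show values`):

```bash
# Hypothetical fix sketch: start broker node IDs at 100 so they cannot
# collide with the controller quorum voter IDs 0-2. broker.minId is assumed
# to exist in this chart version; check your chart's values before using it.
helm upgrade -n work test ./kafka -f values.yaml --set broker.minId=100
```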