strimzi / strimzi-kafka-operator

Apache Kafka® running on Kubernetes
https://strimzi.io/
Apache License 2.0

listeners external type nodeport authentication scram-sha-512 #3829

Closed · lanzhiwang closed this issue 3 years ago

lanzhiwang commented 3 years ago
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    jmxOptions: {}
    listeners:
      external:
        type: nodeport
        tls: true
        authentication:
          type: scram-sha-512
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: '2.5'
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}

---

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
        host: '*'
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: '*'
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: Read
        host: '*'
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Write
        host: '*'
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Create
        host: '*'
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: '*'

test result:

$ kubectl get node 10.0.128.237 -o=jsonpath='{range .status.addresses[*]}{.type}{"\t"}{.address}{"\n"}{end}'
InternalIP  10.0.128.237
Hostname    10.0.128.237

$ kubectl -n kafka get service my-cluster-kafka-external-bootstrap
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
my-cluster-kafka-external-bootstrap   NodePort    10.111.81.162    <none>        9094:30346/TCP               3m19s

$ kubectl -n kafka get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

$ kubectl -n kafka get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d
123456789000

$ keytool -keystore user-truststore.jks -alias CARoot -import -file ca.crt
Enter keystore password:
Re-enter new password:
Trust this certificate? [no]:  y
Certificate was added to keystore

$ kubectl -n kafka get secret my-user -o jsonpath='{.data.password}' | base64 -d
qwertyuioasdfgh

$ cat << EOF > client.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
ssl.truststore.location=./user-truststore.jks
ssl.truststore.password=123456789000

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="my-user" \
    password="qwertyuioasdfgh";
EOF

$ kafka-console-producer.sh --bootstrap-server 10.0.128.237:30346 --topic my-topic --producer.config ./client.properties
javax.net.ssl|FINE|01|main|2020-10-16 21:44:48.406 CST|SSLCipher.java:438|jdk.tls.keyLimits:  entry = AES/GCM/NoPadding KeyUpdate 2^37. AES/GCM/NOPADDING:KEYUPDATE = 137438953472
>javax.net.ssl|SEVERE|0B|kafka-producer-network-thread | console-producer|2020-10-16 21:44:49.044 CST|TransportContext.java:319|Fatal (CERTIFICATE_UNKNOWN): No subject alternative DNS name matching mw-m2 found. (
"throwable" : {
  java.security.cert.CertificateException: No subject alternative DNS name matching mw-m2 found.
    at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:219)
    at sun.security.util.HostnameChecker.match(HostnameChecker.java:101)
    at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:441)
    at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:422)
    at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:282)
    at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:140)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:624)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:465)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:361)
    at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:376)
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:451)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:987)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:974)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:921)
    at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:425)
    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:509)
    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:363)
    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:286)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:174)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:324)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
    at java.lang.Thread.run(Thread.java:748)}

)
javax.net.ssl|WARNING|0B|kafka-producer-network-thread | console-producer|2020-10-16 21:44:49.046 CST|SSLEngineOutputRecord.java:168|outbound has closed, ignore outbound application data
[2020-10-16 21:44:49,048] ERROR [Producer clientId=console-producer] Connection to node -1 (mw-m2/10.0.128.237:30346) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-10-16 21:44:49,048] WARN [Producer clientId=console-producer] Bootstrap broker 10.0.128.237:30346 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

$ kafka-console-producer.sh --bootstrap-server mw-m2:30346 --topic my-topic --producer.config ./client.properties
javax.net.ssl|FINE|01|main|2020-10-16 21:50:24.385 CST|SSLCipher.java:438|jdk.tls.keyLimits:  entry = AES/GCM/NoPadding KeyUpdate 2^37. AES/GCM/NOPADDING:KEYUPDATE = 137438953472
>javax.net.ssl|SEVERE|0B|kafka-producer-network-thread | console-producer|2020-10-16 21:50:24.685 CST|TransportContext.java:319|Fatal (CERTIFICATE_UNKNOWN): No subject alternative DNS name matching mw-m2 found. (
"throwable" : {
  java.security.cert.CertificateException: No subject alternative DNS name matching mw-m2 found.
    at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:219)
    at sun.security.util.HostnameChecker.match(HostnameChecker.java:101)
    at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:441)
    at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:422)
    at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:282)
    at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:140)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:624)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:465)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:361)
    at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:376)
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:451)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:987)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:974)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:921)
    at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:425)
    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:509)
    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:363)
    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:286)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:174)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:324)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
    at java.lang.Thread.run(Thread.java:748)}

)
javax.net.ssl|WARNING|0B|kafka-producer-network-thread | console-producer|2020-10-16 21:50:24.688 CST|SSLEngineOutputRecord.java:168|outbound has closed, ignore outbound application data
[2020-10-16 21:50:24,689] ERROR [Producer clientId=console-producer] Connection to node -1 (mw-m2/10.0.128.237:30346) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-10-16 21:50:24,690] WARN [Producer clientId=console-producer] Bootstrap broker mw-m2:30346 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
scholzj commented 3 years ago

When using node ports, we currently don't support hostname verification (because in some cases the nodes change and we would need to regenerate the certs very often). So you will need to disable it by adding

ssl.endpoint.identification.algorithm=

to your client.properties file.
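Putting it together, the resulting client.properties would look like this (using the example truststore password and user credentials from the report above):

```shell
cat << 'EOF' > client.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
ssl.truststore.location=./user-truststore.jks
ssl.truststore.password=123456789000
# Disable hostname verification for node port access:
ssl.endpoint.identification.algorithm=
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="my-user" \
    password="qwertyuioasdfgh";
EOF
```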

lanzhiwang commented 3 years ago

Thanks 👍 @scholzj

scholzj commented 3 years ago

Did it help? Can we close this?

lanzhiwang commented 3 years ago

The problem has been solved following your suggestion; you can close this.