Closed: cuongtruonghcm closed this issue 4 years ago
Really hard to say without more details.
Do you have the configuration properties of your Kafka cluster?
Below is the configuration of one of the 3 broker nodes.
listeners=SASL_SSL://:9092
delete.topic.enable=true
zookeeper.connect=ip-10-xxx-xxx-xx.ap-southeast-1.compute.internal:2181,ip-10-xxx-xxx-xx.ap-southeast-1.compute.internal:2181,ip-10-xxx-xxx-xxx.ap-southeast-1.compute.internal:2181
log.dirs=/var/lib/kafka/data
broker.id=2
log.segment.bytes=1073741824
socket.receive.buffer.bytes=102400
socket.send.buffer.bytes=102400
confluent.metrics.reporter.topic.replicas=3
num.network.threads=8
ssl.endpoint.identification.algorithm=
num.io.threads=16
confluent.metrics.reporter.ssl.endpoint.identification.algorithm=
transaction.state.log.min.isr=2
zookeeper.connection.timeout.ms=15000
offsets.topic.replication.factor=3
socket.request.max.bytes=104857600
log.retention.check.interval.ms=300000
group.initial.rebalance.delay.ms=0
metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
num.recovery.threads.per.data.dir=2
transaction.state.log.replication.factor=3
confluent.metrics.reporter.bootstrap.servers=ip-10-xxx-xxx-xxx.ap-southeast-1.compute.internal:9092
log.retention.hours=168
num.partitions=1
metadata.request.timeout.ms=60000
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:client
allow.everyone.if.no.acl.found=true
security.inter.broker.protocol=SASL_SSL
confluent.metrics.reporter.security.protocol=SASL_SSL
ssl.truststore.location=/var/ssl/private/client.truststore.jks
ssl.truststore.password=xxx
ssl.keystore.location=/var/ssl/private/client.keystore.jks
ssl.keystore.password=xxx
ssl.key.password=xxx
confluent.metrics.reporter.ssl.truststore.location=/var/ssl/private/client.truststore.jks
confluent.metrics.reporter.ssl.truststore.password=xxx
confluent.metrics.reporter.ssl.keystore.location=/var/ssl/private/client.keystore.jks
confluent.metrics.reporter.ssl.keystore.password=xxx
confluent.metrics.reporter.ssl.key.password=xxx
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="broker" password="xxx" user_broker="xxx" user_client="xxx";
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
confluent.metrics.reporter.sasl.mechanism=PLAIN
confluent.metrics.reporter.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="client" password="xxx";
It may be related to the method "sun.security.util.HostnameChecker.matchDNS". How can I disable the hostname check while using KafkaHQ?
Caused by: java.security.cert.CertificateException: No name matching ip-10-xxx-xxx-xxx.ap-southeast-1.compute.internal found
	at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:225)
	at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:98)
	at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:459)
	at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:434)
	at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:291)
	at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
	at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:620)
As far as I can see, it's not possible to disable the hostname checker (it's buried deep in the code). But it seems really strange that you would need to. Why not simply use a valid certificate (with a valid hostname; it can be self-signed, as far as I know)?
I used the Apache Kafka console consumer ("kafka_2.12-2.2.0\bin\kafka-console-consumer.sh") to test the Kafka cluster and added this line to its configuration: ssl.endpoint.identification.algorithm= According to the documentation, this line tells kafka-console-consumer to disable SSL hostname checking. It worked!
So my question is: can KafkaHQ do the same?
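For anyone following along, the test setup looked roughly like this; the file name, broker address, topic, and SASL mechanism below are placeholders, not taken from the cluster above:

```properties
# consumer.properties -- minimal client settings (placeholder values)
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# An empty value disables the Java client's hostname verification
ssl.endpoint.identification.algorithm=
```

It can then be passed to the console consumer with something like: bin/kafka-console-consumer.sh --bootstrap-server broker:9092 --topic test --consumer.config consumer.properties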
As far as I know, yes.
All options given to kafka-console-consumer.sh are passed through to the Java client, just as KafkaHQ does with the properties in its configuration.
So if you found a configuration that works with the console consumer, just pass it on in the KafkaHQ configuration files.
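As a sketch, mapping those same client properties into KafkaHQ's application.yml could look like the following; the connection name, broker address, mechanism, and truststore values are placeholders, and the exact top-level key this nests under depends on the KafkaHQ version:

```yaml
connections:
  my-cluster-ssl:
    properties:
      bootstrap.servers: "broker:9092"
      security.protocol: SASL_SSL
      sasl.mechanism: PLAIN
      # Same empty value as for the console consumer: disables hostname checking
      ssl.endpoint.identification.algorithm: ""
      ssl.truststore.location: /var/ssl/private/client.truststore.jks
      ssl.truststore.password: changeit
```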
I set ssl.endpoint.identification.algorithm: "" in application.yml to avoid that error:
connections:
  cluster-ssl-sasl:
    properties:
      bootstrap.servers: "123.123.123.123:9002"
      security.protocol: SASL_SSL
      sasl.mechanism: SCRAM-SHA-256
      ssl.endpoint.identification.algorithm: ""
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="AAA" password="AAA";
      ssl.truststore.location: /d...g/server.truststore.jks
      ssl.truststore.password: ttt
      ssl.keystore.location: /d...g/server.keystore.jks
      ssl.keystore.password: ttt
      ssl.key.password: ttt
Error:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
The Kafka Java client also got this error, but there it can be resolved with the property ssl.endpoint.identification.algorithm=
I used this property in the application config file but still got the error. Could you please help me?