Closed · GaneshbabuRamamoorthy closed this issue 3 years ago
Hi @GaneshbabuRamamoorthy,
That error appears if the certificate was not issued for the correct hostname. With auth.tls.endpointIdentificationAlgorithm set to an empty string that should not happen, and I see that you seem to have set it properly. I would need some time to reproduce the issue in order to verify whether it is a chart issue or an issue with your certificates.
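For quick local debugging, this kind of hostname mismatch can be checked directly with openssl. A minimal sketch (the self-signed demo certificate and all names below are illustrative, not taken from the chart):

```shell
# Generate a throwaway cert issued only for the kafka-0 pod hostname
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kafka-0.kafka-headless.kafka.svc.cluster.local" 2>/dev/null

# With no SAN present, -checkhost falls back to comparing against the CN;
# checking the kafka-1 hostname against a kafka-0-only cert reports a mismatch
openssl x509 -in /tmp/demo.crt -noout \
  -checkhost kafka-1.kafka-headless.kafka.svc.cluster.local
```

Running the same check against the real keystore contents (exported to PEM) would show whether each broker's certificate actually matches its own pod hostname.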
Hi @GaneshbabuRamamoorthy,
How did you name the certificates inside the tls/files folder? Keep in mind they need to be named kafka.truststore.jks, kafka-0.keystore.jks, kafka-1.keystore.jks, etc.
Hi @miguelaeh, yes, I have kept the files inside the kafka/files/tls folder like this:
[root@k8master1 tls]# ls -tlr
total 24
-rw-r--r-- 1 root root 1149 Sep 7 10:17 README.md
-rw-r--r-- 1 root root 1154 Sep 13 15:23 kafka.truststore.jks
-rw-r--r-- 1 root root 4548 Sep 13 15:23 kafka-1.keystore.jks
-rw-r--r-- 1 root root 4548 Sep 13 15:23 kafka-0.keystore.jks
[root@k8master1 tls]#
[root@k8master1 tls]#
[root@k8master1 tls]# pwd
/root/ganesh/charts-ssl/bitnami/kafka/files/tls
The reason I set auth.tls.endpointIdentificationAlgorithm to an empty string was to disable hostname verification:
As an alternative, you can disable host name verification setting the environment variable KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM to an empty string.
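Pinning that down in a values file rather than ad hoc edits may help rule out typos. A minimal sketch (the file path is illustrative; the parameter names auth.clientProtocol, auth.interBrokerProtocol, and auth.tls.endpointIdentificationAlgorithm are the ones discussed in this thread):

```shell
# Write a small values overlay that enables TLS and disables hostname
# verification via an empty endpointIdentificationAlgorithm
cat > /tmp/tls-values.yaml <<'EOF'
auth:
  clientProtocol: tls
  interBrokerProtocol: tls
  tls:
    endpointIdentificationAlgorithm: ""
EOF

# Confirm the override is present before installing
grep 'endpointIdentificationAlgorithm' /tmp/tls-values.yaml

# helm upgrade --install kafka bitnami/kafka -n kafka -f /tmp/tls-values.yaml
```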
So now I tried setting endpointIdentificationAlgorithm to https and redeployed with replicaCount set to "2".
I created the certificates using the script. I followed the same steps to generate certificates for kafka-0 & kafka-1, and I used kafka-0.kafka-headless.kafka.svc.cluster.local & kafka-1.kafka-headless.kafka.svc.cluster.local as the Common Name while generating each certificate individually.
Below are the logs from the kafka-0 pod:
[2021-09-13 18:02:10,416] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2021-09-13 18:02:10,459] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:8000,blockEndProducerId:8999) by writing to Zk with path version 9 (kafka.coordinator.transaction.ProducerIdManager)
[2021-09-13 18:02:10,459] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-09-13 18:02:10,463] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-09-13 18:02:10,467] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2021-09-13 18:02:10,513] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-09-13 18:02:10,544] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-09-13 18:02:10,580] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2021-09-13 18:02:10,588] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(INTERNAL) (kafka.network.SocketServer)
[2021-09-13 18:02:10,618] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(CLIENT) (kafka.network.SocketServer)
[2021-09-13 18:02:10,619] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2021-09-13 18:02:10,638] INFO Kafka version: 2.8.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-13 18:02:10,639] INFO Kafka commitId: ebb1d6e21cc92130 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-13 18:02:10,639] INFO Kafka startTimeMs: 1631556130620 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-13 18:02:10,642] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-09-13 18:02:10,848] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker kafka-0.kafka-headless.kafka.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2021-09-13 18:02:10,854] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(elastic-0) (kafka.server.ReplicaFetcherManager)
[2021-09-13 18:02:10,873] INFO [Partition elastic-0 broker=0] Log loaded for partition elastic-0 with initial high watermark 0 (kafka.cluster.Partition)
And below are the logs from the kafka-1 pod:
[2021-09-13 18:02:08,037] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-09-13 18:02:08,094] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
[2021-09-13 18:02:08,098] INFO Skipping recovery for all logs in /bitnami/kafka/data since clean shutdown file was found (kafka.log.LogManager)
[2021-09-13 18:02:08,126] INFO Loaded 0 logs in 33ms. (kafka.log.LogManager)
[2021-09-13 18:02:08,127] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2021-09-13 18:02:08,130] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2021-09-13 18:02:08,764] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2021-09-13 18:02:08,769] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
[2021-09-13 18:02:09,118] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:74)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
at kafka.network.Processor.<init>(SocketServer.scala:853)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:442)
at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:299)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:297)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:262)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:259)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:259)
at kafka.network.SocketServer.startup(SocketServer.scala:131)
at kafka.server.KafkaServer.startup(KafkaServer.scala:285)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
[2021-09-13 18:02:09,122] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)
[2021-09-13 18:02:09,123] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors (kafka.network.SocketServer)
[2021-09-13 18:02:09,125] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors (kafka.network.SocketServer)
[2021-09-13 18:02:09,132] INFO Shutting down. (kafka.log.LogManager)
[2021-09-13 18:02:09,160] INFO Shutdown complete. (kafka.log.LogManager)
[2021-09-13 18:02:09,160] INFO [feature-zk-node-event-process-thread]: Shutting down (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2021-09-13 18:02:09,161] INFO [feature-zk-node-event-process-thread]: Shutdown completed (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2021-09-13 18:02:09,161] INFO [feature-zk-node-event-process-thread]: Stopped (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2021-09-13 18:02:09,162] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2021-09-13 18:02:09,271] INFO Session: 0x1025785f81b0000 closed (org.apache.zookeeper.ZooKeeper)
[2021-09-13 18:02:09,271] INFO EventThread shut down for session: 0x1025785f81b0000 (org.apache.zookeeper.ClientCnxn)
[2021-09-13 18:02:09,275] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
With this error, the kafka-1 pod was restarted multiple times:
[root@k8master1 ~]# kubectl get pods -n kafka -w
NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 1 8m34s
kafka-1 0/1 CrashLoopBackOff 6 8m34s
kafka-zookeeper-0 1/1 Running 0 8m34s
Attaching the Helm package along with values.yaml for reference. Kindly check it and correct me if I am doing anything wrong. kafka-ssl.zip
Regards, Ganeshbabu R
Hi @GaneshbabuRamamoorthy,
These are the differences between your chart and the original one:
Common subdirectories: ./charts and /home/bitnami/projects/bitnami-charts/bitnami/kafka/charts
diff ./Chart.yaml /home/bitnami/projects/bitnami-charts/bitnami/kafka/Chart.yaml
32c32
< version: 14.0.5
---
> version: 14.1.0
Common subdirectories: ./files and /home/bitnami/projects/bitnami-charts/bitnami/kafka/files
diff ./README.md /home/bitnami/projects/bitnami-charts/bitnami/kafka/README.md
171a172
> | `topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
Common subdirectories: ./templates and /home/bitnami/projects/bitnami-charts/bitnami/kafka/templates
diff ./values.yaml /home/bitnami/projects/bitnami-charts/bitnami/kafka/values.yaml
17c17
< storageClass: "test-db"
---
> storageClass: ""
67,69c67,69
< registry: harbor.kafka-dev.com
< repository: kafka-db/kafka
< tag: 2.8.0
---
> registry: docker.io
> repository: bitnami/kafka
> tag: 2.8.0-debian-10-r84
215,221c215
< extraEnvVars:
< #- name: KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
< # value: ""
< #- name: KAFKA_TLS_CLIENT_AUTH
< # value: "required"
< #- name: KAFKA_SECURITY_PROTOCOL
< # value: "SSL"
---
> extraEnvVars: []
250,251c244,245
< clientProtocol: tls
< interBrokerProtocol: tls
---
> clientProtocol: plaintext
> interBrokerProtocol: plaintext
257c251
< #mechanisms: plain,scram-sha-256,scram-sha-512
---
> mechanisms: plain,scram-sha-256,scram-sha-512
287c281
< #zookeeperUser: ""
---
> zookeeperUser: ""
320c314
< ## @param auth.tls.existingSecret Name of the existing secret containing the TLS certificates for the Kafka broker
---
> ## @param auth.tls.existingSecret Name of the existing secret containing the TLS certificates for the Kafka brokers
343c337
< existingSecret: "kafka-tls"
---
> existingSecret: ""
350c344
< password: "mavenir"
---
> password: ""
395,396c389
< #listeners: SSL://:9093,PLAINTEXT://:9092
< listeners: INTERNAL://:9093,CLIENT://:9092
---
> listeners: []
401d393
< #advertisedListeners: "SSL://$(MY_POD_NAME).kafka-headless.kafka.svc.cluster.local:9093,SSL://$(MY_POD_NAME).kafka-headless.kafka.svc.cluster.local:9092"
403d394
< #advertisedListeners: SSL://$(MY_POD_IP):9093,PLAINTEXT://$(MY_POD_IP):9092
407,409c398
< listenerSecurityProtocolMap: INTERNAL:SSL,CLIENT:SSL
< #listenerSecurityProtocolMap: SSL:SSL,PLAINTEXT:PLAINTEXT
< #listenerSecurityProtocolMap: "SSL:SSL"
---
> listenerSecurityProtocolMap: ""
412c401
< allowPlaintextListener: yes
---
> allowPlaintextListener: true
415d403
< #interBrokerListenerName: PLAINTEXT
417c405
< #interBrokerListenerName: SSL
---
>
422c410
< replicaCount: 2
---
> replicaCount: 1
499a488,491
> ## @param topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template
> ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
> ##
> topologySpreadConstraints: {}
554,557c546,549
< initialDelaySeconds: 300
< timeoutSeconds: 300
< failureThreshold: 300
< periodSeconds: 300
---
> initialDelaySeconds: 10
> timeoutSeconds: 5
> failureThreshold: 3
> periodSeconds: 10
570,573c562,565
< initialDelaySeconds: 300
< failureThreshold: 300
< timeoutSeconds: 300
< periodSeconds: 300
---
> initialDelaySeconds: 5
> failureThreshold: 6
> timeoutSeconds: 5
> periodSeconds: 10
766c758
< storageClass: "test-db"
---
> storageClass: ""
1148,1155c1140,1147
< #config: |-
< # jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
< # lowercaseOutputName: true
< # lowercaseOutputLabelNames: true
< # ssl: false
< # {{- if .Values.metrics.jmx.whitelistObjectNames }}
< # whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
< # {{- end }}
---
> config: |-
> jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
> lowercaseOutputName: true
> lowercaseOutputLabelNames: true
> ssl: false
> {{- if .Values.metrics.jmx.whitelistObjectNames }}
> whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
> {{- end }}
1250d1241
<
1258c1249
< enabled: true
---
> enabled: false
1261c1252
< clientUser: "admin"
---
> clientUser: ""
1264c1255
< clientPassword: "mavenir"
---
> clientPassword: ""
1267c1258
< serverUsers: "admin"
---
> serverUsers: ""
1270c1261
< serverPasswords: "mavenir"
---
> serverPasswords: ""
Some things I found:
allowPlaintextListener needs to be a boolean, while you are setting a string.
You should not need the mechanisms configuration since you are using tls.
Hi @miguelaeh, yes, I downloaded the charts a few weeks back and have been using them since.
I am using Bitnami images for Kafka and ZooKeeper (docker pull bitnami/kafka:2.8.0, docker pull bitnami/zookeeper:3.7.0).
I have pushed the images to my Harbor registry and I am using them from there; that is the only difference, and I did not do any customizations to the images.
But now I have cloned the latest repository locally and made the necessary changes in values.yaml to enable TLS.
So these are the changes I have added in values.yaml,
registry: harbor.kafka-dev.com
repository: kafka-db/kafka
tag: 2.8.0
clientProtocol: tls
interBrokerProtocol: tls
existingSecret: "kafka-tls"
password: "mavenir"
storageClass: "mav-db"
Before deploying the chart I kept the JKS files inside the kafka/files/tls folder, and I also generated the secret and referenced it in values.yaml:
kubectl create secret generic kafka-tls --from-file=/root/ganesh/charts-ssl/bitnami/kafka/files/tls/kafka.truststore.jks --from-file=/root/ganesh/charts-ssl/bitnami/kafka/files/tls/kafka-0.keystore.jks --from-file=/root/ganesh/charts-ssl/bitnami/kafka/files/tls/kafka-1.keystore.jks -n kafka
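Since the chart expects exact key names in that secret, a small sanity check of the file names before creating it can catch typos early. A sketch using a throwaway directory (the thread's real path is /root/ganesh/charts-ssl/bitnami/kafka/files/tls; /tmp/tls-demo below is illustrative):

```shell
# The three names the chart expects for replicaCount=2
TLS_DIR=/tmp/tls-demo
mkdir -p "$TLS_DIR"
# Stand-in empty files for the demo; in practice these are the real JKS files
touch "$TLS_DIR/kafka.truststore.jks" \
      "$TLS_DIR/kafka-0.keystore.jks" \
      "$TLS_DIR/kafka-1.keystore.jks"

# Verify each expected name is present before creating the secret
for f in kafka.truststore.jks kafka-0.keystore.jks kafka-1.keystore.jks; do
  if [ -f "$TLS_DIR/$f" ]; then echo "found $f"; else echo "MISSING $f"; fi
done

# kubectl create secret generic kafka-tls --from-file="$TLS_DIR" -n kafka
```

Passing the whole directory with a single --from-file keeps the secret keys equal to the file names, which is what the broker pods look up.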
Then I executed the install command:
helm install kafka kafka/ -n kafka
and verified the pod status; I can see the kafka-1 pod going into CrashLoopBackOff:
[root@k8master1 bitnami]# kubectl get pods -n kafka -w
NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 1 21m
kafka-1 0/1 CrashLoopBackOff 8 21m
kafka-zookeeper-0 1/1 Running 0 21m
I was getting the same logs and the same error in kafka-1.
kafka-1 pod logs,
[2021-09-14 09:30:34,034] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:74)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
at kafka.network.Processor.<init>(SocketServer.scala:853)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:442)
at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:299)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:297)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:262)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:259)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:259)
at kafka.network.SocketServer.startup(SocketServer.scala:131)
at kafka.server.KafkaServer.startup(KafkaServer.scala:285)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
kafka-0 pod logs,
[2021-09-14 09:13:32,459] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-09-14 09:13:32,709] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker kafka-0.kafka-headless.kafka.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
Not sure where I am making a mistake. If I am doing anything wrong while generating the certs, please let me know; I followed the same steps as the documentation:
https://docs.bitnami.com/kubernetes/infrastructure/kafka/administration/enable-tls/ https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh
I also tried exec'ing into the pod to verify whether messages can be sent to the topic:
I have no name!@kafka-0:/opt/bitnami/kafka/bin$ ./kafka-topics.sh --zookeeper kafka-zookeeper:2181 --create --topic elastic --partitions 1 --replication-factor 1
Created topic elastic.
I have no name!@kafka-0:/opt/bitnami/kafka/bin$ ./kafka-console-producer.sh --topic elastic --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9093 --producer.config /opt/bitnami/kafka/config/producer.properties
[2021-09-14 09:44:42,805] WARN The configuration 'ssl.keystore.location' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2021-09-14 09:44:42,805] WARN The configuration 'ssl.truststore.type' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2021-09-14 09:44:42,805] WARN The configuration 'ssl.keystore.type' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2021-09-14 09:44:42,805] WARN The configuration 'ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2021-09-14 09:44:42,805] WARN The configuration 'ssl.keystore.password' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2021-09-14 09:44:42,806] WARN The configuration 'ssl.key.password' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2021-09-14 09:44:42,806] WARN The configuration 'ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
>[2021-09-14 09:44:43,326] WARN [Producer clientId=console-producer] Bootstrap broker kafka-0.kafka-headless.kafka.svc.cluster.local:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-09-14 09:44:43,538] WARN [Producer clientId=console-producer] Bootstrap broker kafka-0.kafka-headless.kafka.svc.cluster.local:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
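As an aside, the WARN lines above ("supplied but isn't a known config") usually mean the client is still on security.protocol=PLAINTEXT, so the ssl.* settings are silently ignored and the SSL listener drops the connection. A sketch of a client config that actually enables SSL (the truststore path and password below are illustrative, not taken from the pod):

```shell
# Write a client config where security.protocol=SSL activates the ssl.* keys
cat > /tmp/client.properties <<'EOF'
security.protocol=SSL
ssl.truststore.type=JKS
ssl.truststore.location=/opt/bitnami/kafka/config/certs/kafka.truststore.jks
ssl.truststore.password=changeit
EOF

# Confirm the protocol line is present
grep '^security.protocol' /tmp/client.properties

# ./kafka-console-producer.sh --topic elastic \
#   --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9093 \
#   --producer.config /tmp/client.properties
```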
Please correct me if I need to make any changes in the values.yaml.
Attached the helm package for reference.
Thanks, Ganeshbabu R
Hi @GaneshbabuRamamoorthy, I found another issue that seems to address your problem. The main problem seems to be that there is more than one pod while the certificates are configured for only one of them. Could you check the solution proposed in https://github.com/bitnami/charts/issues/1279? I think it may work, since it seems to be the same case.
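One common way out of the multi-pod case (the linked issue has the details; this is only one variant) is a single certificate whose SAN wildcard covers every broker pod of the headless service, so all replicas can share one keystore. A rough openssl sketch (assumes openssl >= 1.1.1 for -addext; names are illustrative, and the resulting PEM pair would still need to be imported into a JKS keystore with keytool):

```shell
# One self-signed cert valid for any pod behind the kafka-headless service
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/kafka.key -out /tmp/kafka.crt \
  -subj "/CN=kafka.kafka-headless.kafka.svc.cluster.local" \
  -addext "subjectAltName=DNS:*.kafka-headless.kafka.svc.cluster.local" 2>/dev/null

# Verify the wildcard SAN made it into the certificate
openssl x509 -in /tmp/kafka.crt -noout -text | grep 'DNS:'
```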
Hi @miguelaeh
Yes I was able to resolve the issue.
https://github.com/bitnami/charts/issues/1279#issuecomment-923990770
Regards, Ganeshbabu R
I am glad to hear that!
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Hi All,
I am using this Kafka Helm chart to deploy to Kubernetes:
https://github.com/bitnami/charts/tree/master/bitnami/kafka
I am also trying to enable TLS in my Kafka setup, and below is the documentation I followed to generate TLS certificates in JKS format:
https://docs.bitnami.com/kubernetes/infrastructure/kafka/administration/enable-tls/
I have set replicaCount to "1" in the values.yaml file to have a single-node Kafka with ZooKeeper.
I used this script to generate the certs: https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh
While creating the certs it asked for a CN (Common Name), and I gave the value below: kafka-0.kafka-headless.es.svc.cluster.local
Once the certs were created, I kept the truststore.jks & keystore.jks files under the kafka/files/tls folder.
Below is the values.yaml I used, modified to enable TLS:
values.zip
I took the documentation below as a reference to set up the values in values.yaml:
https://github.com/bitnami/bitnami-docker-kafka
After running the helm install command, the pod started running:
helm install kafka kafka/ -n kafka
kubectl get pods -n kafka
When I tried to send a message to the topic, below is the command I executed:
./kafka-console-producer.sh --topic elastic --bootstrap-server kafka-0.kafka-headless.es.svc.cluster.local:9092 --producer.config /opt/bitnami/kafka/config/producer.properties
Please share your thoughts on solving the issue and correct me if I am doing anything wrong.
Thanks, Ganeshbabu R