It seems you're using the configuration from this example, right?
After installing the chart using the above values, the Kafka pod disappeared (terminated), and when trying to connect to Kafka from outside the cluster, I'm getting the following error.
If the pod is not running, it's normal to see an error reporting that the connection cannot be established; we should check what is happening and why the pod is not running.
I tried installing the chart and performed some tests; can you reproduce the same steps? See below:
helm install kafka bitnami/kafka -f values.yaml
using the following values.yaml:
externalAccess:
  enabled: true
  service:
    type: NodePort
  autoDiscovery:
    enabled: true
serviceAccount:
  create: true
rbac:
  create: true
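For reference, the same configuration can also be passed inline with --set flags instead of a values file (a minimal sketch, equivalent to the values.yaml above):
helm install kafka bitnami/kafka \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=NodePort \
  --set externalAccess.autoDiscovery.enabled=true \
  --set serviceAccount.create=true \
  --set rbac.create=true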
Check that the release was created and the pods are up and running:
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kafka default 1 2020-11-09 13:11:39.75289376 +0000 UTC deployed kafka-11.8.9 2.6.0
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 0 8m41s
kafka-zookeeper-0 1/1 Running 0 8m41s
Then we can check the installation notes in order to obtain information about how we can access/connect the application:
$ helm get notes kafka
NOTES:
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
kafka.default.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
kafka-0.kafka-headless.default.svc.cluster.local:9092
To create a pod that you can use as a Kafka client run the following commands:
kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:2.6.0-debian-10-r57 --namespace default --command -- sleep infinity
kubectl exec --tty -i kafka-client --namespace default -- bash
PRODUCER:
kafka-console-producer.sh \
--broker-list kafka-0.kafka-headless.default.svc.cluster.local:9092 \
--topic test
CONSUMER:
kafka-console-consumer.sh \
--bootstrap-server kafka.default.svc.cluster.local:9092 \
--topic test \
--from-beginning
Hello, yes, I tried that example. Let me run the test scenario and get back to you now. Note: I'm running the chart as a dependency of my application chart.
I will get back to you shortly.
This config gave a success message when passing --set rbac.create=true in the install command; the pods are being initialized now:
externalAccess:
  enabled: true
  service:
    type: NodePort
  autoDiscovery:
    enabled: true
serviceAccount:
  create: true
rbac:
  create: true
Hello Carlos, it seems it's working now. However, I have followed the instructions in the chart notes:
NAME: kafka
LAST DEPLOYED: Mon Nov 9 08:02:59 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
---------------------------------------------------------------------------------------------
WARNING
By specifying "serviceType=LoadBalancer" and not configuring the authentication
you have most likely exposed the Kafka service externally without any
authentication mechanism.
For security reasons, we strongly suggest that you switch to "ClusterIP" or
"NodePort". As alternative, you can also configure the Kafka authentication.
---------------------------------------------------------------------------------------------
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
kafka.default.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
kafka-0.kafka-headless.default.svc.cluster.local:9092
To create a pod that you can use as a Kafka client run the following commands:
kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:2.6.0-debian-10-r57 --namespace default --command -- sleep infinity
kubectl exec --tty -i kafka-client --namespace default -- bash
PRODUCER:
kafka-console-producer.sh \
--broker-list kafka-0.kafka-headless.default.svc.cluster.local:9092 \
--topic test
CONSUMER:
kafka-console-consumer.sh \
--bootstrap-server kafka.default.svc.cluster.local:9092 \
--topic test \
--from-beginning
To connect to your Kafka server from outside the cluster, follow the instructions below:
Kafka brokers domain: You can get the external node IP from the Kafka configuration file with the following commands (Check the EXTERNAL listener)
1. Obtain the pod name:
kubectl get pods --namespace default -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka"
2. Obtain pod configuration:
kubectl exec -it KAFKA_POD -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners
Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:
echo "$(kubectl get svc --namespace default -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
and ran this step:
kubectl exec -it kafka-0 -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners
which resulted in:
advertised.listeners=INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9093,CLIENT://kafka-0.kafka-headless.default.svc.cluster.local:9092,EXTERNAL://24.77.16.112:30072
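(Side note: if you only want the EXTERNAL entry from that output, a small convenience pipeline like this works; it just splits the comma-separated listener list and keeps the EXTERNAL one:
kubectl exec kafka-0 -- grep advertised.listeners /opt/bitnami/kafka/config/server.properties | tr ',' '\n' | grep EXTERNAL
)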
Then I ran:
kafka-console-producer --broker-list 24.77.16.112:30072 --topic test
and got:
>[2020-11-09 08:14:02,854] WARN [Producer clientId=console-producer] Connection to node -1 (/24.77.16.112:30072) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:02,855] WARN [Producer clientId=console-producer] Bootstrap broker 24.77.16.112:30072 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:02,955] WARN [Producer clientId=console-producer] Connection to node -1 (/24.77.16.112:30072) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:02,955] WARN [Producer clientId=console-producer] Bootstrap broker 24.77.16.112:30072 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:03,061] WARN [Producer clientId=console-producer] Connection to node -1 (/24.77.16.112:30072) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:03,061] WARN [Producer clientId=console-producer] Bootstrap broker 24.77.16.112:30072 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:03,321] WARN [Producer clientId=console-producer] Connection to node -1 (/24.77.16.112:30072) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:03,321] WARN [Producer clientId=console-producer] Bootstrap broker 24.77.16.112:30072 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:03,685] WARN [Producer clientId=console-producer] Connection to node -1 (/24.77.16.112:30072) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 08:14:03,685] WARN [Producer clientId=console-producer] Bootstrap broker 24.77.16.112:30072 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
Any ideas?
After some investigation, running k get po:
NAME READY STATUS RESTARTS AGE
kafka-0 0/1 CrashLoopBackOff 20 3h19m
kafka-zookeeper-0 0/1 Pending 0 3h19m
Then k logs kafka-0 resulted in:
[2020-11-09 17:20:58,377] INFO Connecting to zookeeper on kafka-zookeeper (kafka.server.KafkaServer)
[2020-11-09 17:20:58,415] INFO [ZooKeeperClient Kafka server] Initializing a new session to kafka-zookeeper. (kafka.zookeeper.ZooKeeperClient)
[2020-11-09 17:20:58,428] INFO Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT (org.apache.zookeeper.ZooKeeper)
[2020-11-09 17:20:58,429] INFO Client environment:host.name=kafka-0.kafka-headless.default.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
[2020-11-09 17:20:58,429] INFO Client environment:java.version=11.0.8 (org.apache.zookeeper.ZooKeeper)
[2020-11-09 17:20:58,430] INFO Client environment:java.vendor=BellSoft (org.apache.zookeeper.ZooKeeper)
[2020-11-09 17:20:58,430] INFO Client environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)
[2020-11-09 17:20:58,430] INFO Client environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-cli-1.4.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/connect-basic-auth-extension-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/connect-file-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/connect-json-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-client-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/connect-transforms-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-databind-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-datatype-jdk8-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-scala_2.12-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/bitnami/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-common-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.28.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-http-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-io-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-security-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-log4j-appender-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-examples-2.6.0.jar:/opt/bitnami/kafka/bin/
../libs/kafka-streams-scala_2.12-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-tools-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.6.0-sources.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.6.0.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/bitnami/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bitnami/kafka/bin/../libs/netty-buffer-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-codec-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-common-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-handler-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-resolver-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-epoll-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-unix-common-4.1.50.Final.jar:/opt/bitnami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/kafka/bin/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.12.jar:/opt/bitnami/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/bitnami/kafka/bin/../libs/scala-collection-compat_2.12-2.1.6.jar:/opt/bitnami/kafka/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/opt/bitnami/kafka/bin/../libs/scala-library-2.12.11.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.12.11.jar:/opt/bitnami/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/bitnami/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.5.8.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-jute-3.5.8.jar:/opt/bitnami/kafka/bin/../libs/zstd-jni-1.4.4-7.jar (org.apache.zookeeper.ZooKeeper)
[2020-11-09 17:20:58,431] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
It's obvious that the Kafka pod can't reach ZooKeeper because the ZooKeeper pod is Pending. The question is: why is ZooKeeper Pending?
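A quick way to answer that is to look at the pod's scheduling events, for example:
kubectl describe pod kafka-zookeeper-0 | grep -A 10 Events
In an insufficient-resources case the events typically show something like "0/1 nodes are available: 1 Insufficient cpu".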
After more investigation, it turned out there was insufficient CPU. I deleted minikube, increased the CPU allocation, and started over.
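For anyone hitting the same thing, the reset looks roughly like this (the CPU/memory numbers are just examples; pick whatever your machine allows):
minikube delete
minikube start --cpus 4 --memory 8192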
now...
k get all
->
NAME READY STATUS RESTARTS AGE
pod/kafka-0 1/1 Running 0 16m
pod/kafka-zookeeper-0 1/1 Running 0 16m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kafka ClusterIP 10.96.164.105 <none> 9092/TCP 16m
service/kafka-0-external NodePort 10.109.85.233 <none> 9094:31134/TCP 16m
service/kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 16m
service/kafka-zookeeper ClusterIP 10.102.73.25 <none> 2181/TCP,2888/TCP,3888/TCP 16m
service/kafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 16m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19m
NAME READY AGE
statefulset.apps/kafka 1/1 16m
statefulset.apps/kafka-zookeeper 1/1 16m
minikube service list
->
|-------------|------------------------------------|----------------|----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|------------------------------------|----------------|----------------------------|
| default | kafka | No node port |
| default | kafka-0-external | tcp-kafka/9094 | http://192.168.64.23:31134 |
| default | kafka-headless | No node port |
| default | kafka-zookeeper | No node port |
| default | kafka-zookeeper-headless | No node port |
| default | kubernetes | No node port |
| kube-system | ingress-nginx-controller-admission | No node port |
| kube-system | kube-dns | No node port |
|-------------|------------------------------------|----------------|----------------------------|
kafka-topics --bootstrap-server 192.168.64.23:31134 --list
[2020-11-09 11:52:59,498] WARN [AdminClient clientId=adminclient-1] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 11:52:59,636] WARN [AdminClient clientId=adminclient-1] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 11:52:59,809] WARN [AdminClient clientId=adminclient-1] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 11:53:00,077] WARN [AdminClient clientId=adminclient-1] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 11:53:00,599] WARN [AdminClient clientId=adminclient-1] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
k logs kafka-0
->
[2020-11-09 17:37:56,308] INFO Registered broker 0 at path /brokers/ids/0 with addresses: INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9093,CLIENT://kafka-0.kafka-headless.default.svc.cluster.local:9092,EXTERNAL://24.77.16.112:31134, czxid (broker epoch): 24 (kafka.zk.KafkaZkClient)
[2020-11-09 17:37:56,372] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-11-09 17:37:56,375] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-11-09 17:37:56,376] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-11-09 17:37:56,397] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2020-11-09 17:37:56,409] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2020-11-09 17:37:56,411] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2020-11-09 17:37:56,429] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 15 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-11-09 17:37:56,432] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2020-11-09 17:37:56,460] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-11-09 17:37:56,462] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-11-09 17:37:56,479] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2020-11-09 17:37:56,508] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-11-09 17:37:56,554] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-11-09 17:37:56,587] INFO [SocketServer brokerId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2020-11-09 17:37:56,600] INFO [SocketServer brokerId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(INTERNAL) (kafka.network.SocketServer)
[2020-11-09 17:37:56,603] INFO [SocketServer brokerId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(EXTERNAL) (kafka.network.SocketServer)
[2020-11-09 17:37:56,606] INFO [SocketServer brokerId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(CLIENT) (kafka.network.SocketServer)
[2020-11-09 17:37:56,614] INFO [SocketServer brokerId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2020-11-09 17:37:56,626] INFO Kafka version: 2.6.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-11-09 17:37:56,626] INFO Kafka commitId: 62abe01bee039651 (org.apache.kafka.common.utils.AppInfoParser)
[2020-11-09 17:37:56,626] INFO Kafka startTimeMs: 1604943476615 (org.apache.kafka.common.utils.AppInfoParser)
[2020-11-09 17:37:56,631] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
Still bad luck :(
I also tried the following, still bad luck :(
kafka-console-producer --bootstrap-server 192.168.64.23:31134 --topic test
>hi
[2020-11-09 18:42:44,561] WARN [Producer clientId=console-producer] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 18:42:44,697] WARN [Producer clientId=console-producer] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 18:42:44,869] WARN [Producer clientId=console-producer] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 18:42:45,156] WARN [Producer clientId=console-producer] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 18:42:45,551] WARN [Producer clientId=console-producer] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-09 18:42:46,395] WARN [Producer clientId=console-producer] Connection to node 0 (/24.77.16.112:31134) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
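Worth noting in those logs (an observation, not a confirmed diagnosis): the client bootstraps against 192.168.64.23:31134 (the minikube IP), but the broker then advertises itself as 24.77.16.112:31134, which is not reachable from the host, so every follow-up connection fails. Anticipating the resolution further down in this thread, a hedged sketch of a possible fix is to pin the advertised host via externalAccess.service.domain, e.g.:
helm upgrade --install kafka bitnami/kafka -f values.yaml \
  --set externalAccess.service.domain=$(minikube ip)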
Note: I'm running the chart as a dependency for my application chart.
Are you able to first follow the same process without a parent chart? Just by deploying the Kafka chart as a regular chart.
Is your parent chart deployed in the same cluster? If yes, can you follow the instructions to connect to Kafka from within your cluster? Per your commands, it seems you're using the instructions to access from outside the cluster. You can see the installation notes at any time by running helm get notes RELEASE_NAME
Hello Carlos, I'm installing the chart as standalone now to get it to work first.
All the above logs are from the standalone setup.
Best Regards Ahmed
I just want to know: has anyone gotten external access to work as a standalone chart installation? Let's forget about the subchart situation for now.
Best Regards Ahmed
There are 4 scenarios to configure Kafka for external access; see https://github.com/bitnami/charts/tree/master/bitnami/kafka#accessing-kafka-brokers-from-outside-the-cluster
Maybe the option you are using does not work in this specific environment. Please take into account that some of the methods require RBAC rules and policies. What is the configuration you're using? Can you give another method a try?
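For illustration, a minimal sketch of one of the other documented methods (LoadBalancer with auto-discovery; it requires an environment that can actually provision load balancers, so it won't apply to minikube or Docker Desktop out of the box):
helm install kafka bitnami/kafka \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=LoadBalancer \
  --set externalAccess.autoDiscovery.enabled=true \
  --set serviceAccount.create=true \
  --set rbac.create=true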
I'm having the same issue. I tried all 4 scenarios mentioned by @carrodher, and none of them worked for me. I've been trying to connect to Kafka using Kafka Tool.
Hi @klubi
Could you please provide more information?
Sorry @juan131, I should have done that in the first place.
I'm running Kubernetes as part of the official Docker app for macOS (latest version).
I'm using two values files: the main one (values.yaml, without any Kafka-related values) and a second one with Kafka-related values. The Kafka one looks like this:
kafka:
  enabled: true
  externalAccess:
    enabled: true
    service:
      type: NodePort
    autoDiscovery:
      enabled: true
  serviceAccount:
    create: true
  rbac:
    create: true
@klubi @ahmed-adly-khalil
How are clients configured? Could you paste a copy of your client config?
Hi @klubi
I installed Kafka using your values.yaml and these are the services created:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka ClusterIP 10.227.242.119 <none> 9092/TCP 2m27s
kafka-0-external NodePort 10.227.247.246 <none> 9094:31262/TCP 2m27s
As you can see, there's a service kafka-0-external that exposes a node port (31262 in my case). Therefore, I need to configure my client to connect to Kafka using my cluster's IP and the port 31262. For instance:
$ kafka-console-producer.sh --broker-list A.B.C.D:31262 --topic test
Note: you need to substitute A.B.C.D with your actual cluster IP.
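A quick way to look up a node address is, for example:
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
(On cloud clusters you would filter on ExternalIP instead; some local clusters only report an InternalIP.)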
@juan131 my services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
stratus-kafka ClusterIP 10.108.101.161 <none> 9092/TCP 4d23h
stratus-kafka-0-external NodePort 10.101.10.239 <none> 9094:32291/TCP 4d23h
stratus-kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 4d23h
stratus-mongodb NodePort 10.110.208.144 <none> 27017:30017/TCP 4d23h
stratus-zookeeper ClusterIP 10.105.219.188 <none> 2181/TCP,2888/TCP,3888/TCP 4d23h
stratus-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 4d23h
Then I get this:
$ kafka-console-producer --broker-list 10.101.10.239:32291 --topic test
>[2020-11-18 21:34:05,434] WARN [Producer clientId=console-producer] Connection to node -1 (/10.101.10.239:32291) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-18 21:34:05,434] WARN [Producer clientId=console-producer] Bootstrap broker 10.101.10.239:32291 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-11-18 21:35:23,348] WARN [Producer clientId=console-producer] Connection to node -1 (/10.101.10.239:32291) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-18 21:35:23,348] WARN [Producer clientId=console-producer] Bootstrap broker 10.101.10.239:32291 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-11-18 21:36:41,201] WARN [Producer clientId=console-producer] Connection to node -1 (/10.101.10.239:32291) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-11-18 21:36:41,202] WARN [Producer clientId=console-producer] Bootstrap broker 10.101.10.239:32291 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
As you can see, there is also a MongoDB pod, and that one is accessible without any issues.
@klubi From what I see, it seems the producer is trying to connect from outside the cluster using an "internal" cluster IP. You might need to set your domain to, say, "localhost" so that the node port maps accordingly. Here is a sample config that got mine working:
externalAccess.enabled: true
# Set this to localhost (if using Kafka locally, e.g. Docker for Mac), or an IP from one of your external nodes using the node port below.
externalAccess.service.domain: localhost
externalAccess.service.type: NodePort
# I use a fixed node port so it persists on helm upgrade.
externalAccess.service.nodePorts[0]: 30902
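The same settings expressed as a nested values.yaml fragment (the node port number is just the example value above):
externalAccess:
  enabled: true
  service:
    type: NodePort
    domain: localhost
    nodePorts:
      - 30902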
Hi @klubi
You're using the IP 10.101.10.239, which is internal to Kubernetes. Instead, use the external IP of any of your cluster nodes. You can obtain them by running:
$ kubectl get nodes -o wide
@juan131 Unfortunately, Docker Desktop does not expose an external IP for nodes:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
docker-desktop Ready master 8d v1.19.3 192.168.65.3 <none> Docker Desktop 5.4.39-linuxkit docker://19.3.13
According to the docs I found, localhost should be used, but that does not work either.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
stratus-kafka ClusterIP 10.106.1.28 <none> 9092/TCP 13h
stratus-kafka-0-external NodePort 10.99.133.1 <none> 9094:32318/TCP 13h
stratus-kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 13h
stratus-mongodb NodePort 10.109.97.133 <none> 27017:30017/TCP 13h
stratus-zookeeper ClusterIP 10.100.234.164 <none> 2181/TCP,2888/TCP,3888/TCP 13h
stratus-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 13h
$ kafka-console-producer --broker-list localhost:32318 --topic test
>the
[2020-11-19 11:22:25,498] WARN [Producer clientId=console-producer] Error connecting to node stratus-kafka-0.stratus-kafka-headless.default.svc.cluster.local:32318 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: stratus-kafka-0.stratus-kafka-headless.default.svc.cluster.local: nodename nor servname provided, or not known
...
@klubi Sorry if I am asking an obvious question, but did you set the externalAccess service domain to localhost in your values config? Because I had the same problem until I did so.
@chukaofili that was it!!
To recap: if you are using Docker Desktop on Mac, this is what your values.yaml should look like:
kafka:
  enabled: true
  externalAccess:
    enabled: true
    service:
      type: NodePort
      domain: localhost
    autoDiscovery:
      enabled: true
  serviceAccount:
    create: true
  rbac:
    create: true
and Kafka must be accessed via localhost.
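For example, a client on the host can then reach the broker through the published node port (a sketch; the external service name depends on your release/parent chart name, e.g. stratus-kafka-0-external above):
PORT=$(kubectl get svc stratus-kafka-0-external -o jsonpath='{.spec.ports[0].nodePort}')
kafka-console-producer --broker-list localhost:${PORT} --topic test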
Now... how do I expose the ZooKeeper 2181 port?
Great! I'm glad you were able to solve it, @klubi. Why do you need to expose ZooKeeper? You can set:
kafka:
  zookeeper:
    service:
      type: NodePort
But I'd like to understand why you need to expose Zookeeper.
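(If you do go that route, a quick sketch to check the published ZooKeeper node port afterwards, with the service name again depending on your release name:
kubectl get svc stratus-zookeeper -o jsonpath='{.spec.ports[0].nodePort}'
)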
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Hello everyone, hope everything is great.
After installing the chart following the doc here: https://github.com/bitnami/charts/tree/master/bitnami/kafka/#installing-the-chart
I have enabled external access using the following config:
After installing the chart using the above values, the Kafka pod disappeared (terminated), and when trying to connect to Kafka from outside the cluster, I'm getting the following error:
Appreciate your support. Thank you