bitnami / charts

Bitnami Helm Charts

Enabling jmx metrics breaks kafka console commands #1522

Closed: ghost closed this issue 2 years ago

ghost commented 5 years ago

Which chart:

bitnami/kafka chart version 6.1.2

Description

When enabling the JMX exporter for the bitnami/kafka chart, we run into a port conflict when attempting to use the Kafka console commands.

Steps to reproduce the issue:

  1. helm install --name kafka bitnami/kafka --set metrics.jmx.enabled=true (an equivalent values file is sketched after this list)
  2. kubectl exec -ti <kafka-pod> -- /bin/bash
  3. cd /opt/bitnami/kafka/bin
  4. Attempt to run any console command.
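
For reference, the same override can be kept in a values file instead of passing --set. This is only a sketch of an equivalent invocation; the metrics.jmx.enabled path is the one used in step 1 above, and nothing else about the chart's values is assumed (the file name is made up):

# values-jmx.yaml is a hypothetical file name; it enables the chart's JMX exporter
cat > values-jmx.yaml <<'EOF'
metrics:
  jmx:
    enabled: true
EOF

# Helm v2 syntax, matching the client version reported below
helm install --name kafka bitnami/kafka -f values-jmx.yaml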

Describe the results you received:

I have no name!@kafka-0:/opt/bitnami/kafka/bin$ kafka-topics.sh --bootstrap-server localhost:9092 --list
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 5555; nested exception is:
    java.net.BindException: Address already in use (Bind failed)
sun.management.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 5555; nested exception is:
    java.net.BindException: Address already in use (Bind failed)
    at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:480)
    at sun.management.Agent.startAgent(Agent.java:262)
    at sun.management.Agent.startAgent(Agent.java:452)
Caused by: java.rmi.server.ExportException: Port already in use: 5555; nested exception is:
    java.net.BindException: Address already in use (Bind failed)
    at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:346)
    at sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:254)
    at sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:411)
    at sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
    at sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:237)
    at sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:213)
    at sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:173)
    at sun.management.jmxremote.SingleEntryRegistry.<init>(SingleEntryRegistry.java:49)
    at sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:816)
    at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:468)
    ... 2 more
Caused by: java.net.BindException: Address already in use (Bind failed)
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
    at java.net.ServerSocket.bind(ServerSocket.java:375)
    at java.net.ServerSocket.<init>(ServerSocket.java:237)
    at java.net.ServerSocket.<init>(ServerSocket.java:128)
    at sun.rmi.transport.proxy.RMIDirectSocketFactory.createServerSocket(RMIDirectSocketFactory.java:45)
    at sun.rmi.transport.proxy.RMIMasterSocketFactory.createServerSocket(RMIMasterSocketFactory.java:345)
    at sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:666)
    at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
    ... 11 more

Describe the results you expected:

With metrics.jmx.enabled=false we are able to use the console commands just fine; we expect them to keep working when JMX metrics are enabled.
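
If it helps to confirm the contrast on a running release, the flag can be toggled in place. A minimal sketch, assuming the release name kafka from the install step above and Helm v2 syntax:

# Re-render the release without the JMX exporter; console commands work in this state
helm upgrade kafka bitnami/kafka --set metrics.jmx.enabled=false

# ...and back again to reproduce the failure
helm upgrade kafka bitnami/kafka --set metrics.jmx.enabled=true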

Additional information you deem important (e.g. issue happens only occasionally):

We have also attempted to change jmxPort to something other than the default port, but no dice.
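
For what it's worth, Kafka's own kafka-run-class.sh adds -Dcom.sun.management.jmxremote.port=$JMX_PORT to every JVM it launches whenever JMX_PORT is set in the environment. If the chart exports JMX_PORT into the broker container when metrics are enabled (an assumption on my part), any console script started in that same container will try to bind the port the broker already holds, which would match the "Port already in use: 5555" error above. A minimal workaround sketch under that assumption:

# Inside the broker container: clear JMX_PORT just for the CLI invocation,
# so kafka-run-class.sh does not try to start a second JMX agent on 5555.
JMX_PORT="" /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

# Alternatively, point the CLI at any free port instead of clearing the variable:
# JMX_PORT=<some-free-port> /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list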

Version of Helm and Kubernetes:

Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
javsalgar commented 5 years ago

Hi,

I was unable to reproduce the issue. As you can see here:

 130 ❯ kubectl get pods  -w
NAME                       READY   STATUS    RESTARTS   AGE
mothy-numbat-kafka-0       2/2     Running   5          40m
mothy-numbat-zookeeper-0   1/1     Running   1          40m
rabbitmq-0                 1/1     Running   0          65m

It took some restarts because Zookeeper was not available.

mothy-numbat-kafka-0 kafka  08:14:46.78 Welcome to the Bitnami kafka container
mothy-numbat-kafka-0 kafka  08:14:46.78 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
mothy-numbat-kafka-0 kafka  08:14:46.78 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
mothy-numbat-kafka-0 kafka  08:14:46.78 Send us your feedback at containers@bitnami.com
mothy-numbat-kafka-0 kafka  08:14:46.78
mothy-numbat-kafka-0 kafka  08:14:46.79 INFO  ==> ** Starting Kafka setup **
mothy-numbat-kafka-0 kafka  08:14:46.83 WARN  ==> You set the environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a production environment.
mothy-numbat-kafka-0 kafka  08:14:46.84 INFO  ==> Initializing Kafka...
mothy-numbat-kafka-0 kafka  08:14:46.84 INFO  ==> No injected configuration files found, creating default config files
mothy-numbat-kafka-0 kafka  08:14:46.99 INFO  ==> ** Kafka setup finished! **
mothy-numbat-kafka-0 kafka
mothy-numbat-kafka-0 kafka  08:14:47.00 INFO  ==> ** Starting Kafka **
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:47,684] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,002] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,003] INFO starting (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,004] INFO Connecting to zookeeper on mothy-numbat-zookeeper (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,021] INFO [ZooKeeperClient Kafka server] Initializing a new session to mothy-numbat-zookeeper. (kafka.zookeeper.ZooKeeperClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,025] INFO Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,026] INFO Client environment:host.name=mothy-numbat-kafka-0.mothy-numbat-kafka-headless.default.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,026] INFO Client environment:java.version=1.8.0_232 (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,026] INFO Client environment:java.vendor=AdoptOpenJDK (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,026] INFO Client environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,026] INFO Client environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-basic-auth-extension-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-file-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-json-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-transforms-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/guava-20.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-databind-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-dataformat-csv-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-datatype-jdk8-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-scala_2.11-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-common-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.28.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-continuation-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-http-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-io-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-security-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/jsr305-3.0.2.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-log4j-appender-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-examples-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_2.11-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-tools-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.11-2.3.0-sources.jar:/opt/bitnami/kafka
/bin/../libs/kafka_2.11-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kafka/bin/../libs/lz4-java-1.6.0.jar:/opt/bitnami/kafka/bin/../libs/maven-artifact-3.6.1.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bitnami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/kafka/bin/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.0.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.11.jar:/opt/bitnami/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/scala-library-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.11-3.9.0.jar:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/slf4j-api-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4j12-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/bitnami/kafka/bin/../libs/spotbugs-annotations-3.1.9.jar:/opt/bitnami/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../libs/zkclient-0.11.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.4.14.jar:/opt/bitnami/kafka/bin/../libs/zstd-jni-1.4.0-1.jar (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:os.version=4.9.0-11-amd64 (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:user.name=? (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:user.home=? (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,027] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,029] INFO Initiating client connection, connectString=mothy-numbat-zookeeper sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@7c7b252e (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,042] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:48,054] INFO Opening socket connection to server mothy-numbat-zookeeper/10.98.35.187:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,044] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,048] WARN Client session timed out, have not heard from server in 6007ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,151] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,153] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,154] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,157] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
mothy-numbat-kafka-0 kafka  at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:258)
mothy-numbat-kafka-0 kafka  at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:254)
mothy-numbat-kafka-0 kafka  at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:254)
mothy-numbat-kafka-0 kafka  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
mothy-numbat-kafka-0 kafka  at kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:254)
mothy-numbat-kafka-0 kafka  at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:112)
mothy-numbat-kafka-0 kafka  at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1825)
mothy-numbat-kafka-0 kafka  at kafka.server.KafkaServer.kafka$server$KafkaServer$$createZkClient$1(KafkaServer.scala:363)
mothy-numbat-kafka-0 kafka  at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:387)
mothy-numbat-kafka-0 kafka  at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
mothy-numbat-kafka-0 kafka  at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
mothy-numbat-kafka-0 kafka  at kafka.Kafka$.main(Kafka.scala:84)
mothy-numbat-kafka-0 kafka  at kafka.Kafka.main(Kafka.scala)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,160] INFO shutting down (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,164] INFO shut down completed (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,164] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
mothy-numbat-kafka-0 kafka [2019-10-25 08:14:54,169] INFO shutting down (kafka.server.KafkaServer)
^C
  mongodb-sharded ?:3  ~/projects/bitnami-charts/bitnami/kafka                           10:15:08  jsalmeron
 130 ❯ kubectl get pods
NAME                       READY   STATUS             RESTARTS   AGE
mothy-numbat-kafka-0       1/2     CrashLoopBackOff   3          115s
mothy-numbat-zookeeper-0   0/1     Running            1          115s
rabbitmq-0                 1/1     Running            0          26m
  mongodb-sharded ?:3  ~/projects/bitnami-charts/bitnami/kafka                           10:15:11  jsalmeron
❯ stern mothy-numbat-zookeeper-0
+ mothy-numbat-zookeeper-0 › zookeeper
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.08
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.08 Welcome to the Bitnami zookeeper container
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.08 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-zookeeper
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.08 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-zookeeper/issues
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.08 Send us your feedback at containers@bitnami.com
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.08
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.09 INFO  ==> ** Starting ZooKeeper setup **
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.10 WARN  ==> You have set the environment variable ALLOW_ANONYMOUS_LOGIN=yes. For safety reasons, do not use this flag in a production environment.
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.11 INFO  ==> Initializing ZooKeeper...
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.11 INFO  ==> No injected configuration file found, creating default config files...
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.14 INFO  ==> No additional servers were specified. ZooKeeper will run in standalone mode...
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.14 INFO  ==> Deploying ZooKeeper with persisted data...
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.14 INFO  ==> ** ZooKeeper setup finished! **
mothy-numbat-zookeeper-0 zookeeper
mothy-numbat-zookeeper-0 zookeeper zookeeper 08:14:50.15 INFO  ==> ** Starting ZooKeeper **
mothy-numbat-zookeeper-0 zookeeper /opt/bitnami/java/bin/java
mothy-numbat-zookeeper-0 zookeeper ZooKeeper JMX enabled by default
mothy-numbat-zookeeper-0 zookeeper Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg

^C
  mongodb-sharded ?:3  ~/projects/bitnami-charts/bitnami/kafka                           10:15:33  jsalmeron
 130 ❯ kubectl get pods
NAME                       READY   STATUS             RESTARTS   AGE
mothy-numbat-kafka-0       1/2     CrashLoopBackOff   3          2m19s
mothy-numbat-zookeeper-0   0/1     Running            1          2m19s
rabbitmq-0                 1/1     Running            0          27m
  mongodb-sharded ?:3  ~/projects/bitnami-charts/bitnami/kafka                           10:15:35  jsalmeron
❯ kubectl get pods  -w
NAME                       READY   STATUS             RESTARTS   AGE
mothy-numbat-kafka-0       1/2     CrashLoopBackOff   3          2m23s
mothy-numbat-zookeeper-0   0/1     Running            1          2m23s
rabbitmq-0                 1/1     Running            0          27m
mothy-numbat-kafka-0       1/2     Running            4          2m26s
mothy-numbat-zookeeper-0   1/1     Running            1          2m32s
mothy-numbat-kafka-0       1/2     Error              4          2m34s
mothy-numbat-kafka-0       1/2     CrashLoopBackOff   4          2m36s
^C
  mongodb-sharded ?:3  ~/projects/bitnami-charts/bitnami/kafka                           10:16:01  jsalmeron
 1 ❯ stern mothy-numbat-kafka-0
+ mothy-numbat-kafka-0 › jmx-exporter
mothy-numbat-kafka-0 jmx-exporter VM settings:
mothy-numbat-kafka-0 jmx-exporter     Max. Heap Size (Estimated): 24.67G
mothy-numbat-kafka-0 jmx-exporter     Ergonomics Machine Class: server
mothy-numbat-kafka-0 jmx-exporter     Using VM: OpenJDK 64-Bit Server VM
mothy-numbat-kafka-0 jmx-exporter
+ mothy-numbat-kafka-0 › kafka
mothy-numbat-kafka-0 kafka  08:17:18.77
mothy-numbat-kafka-0 kafka  08:17:18.77 Welcome to the Bitnami kafka container
mothy-numbat-kafka-0 kafka  08:17:18.77 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
mothy-numbat-kafka-0 kafka  08:17:18.77 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
mothy-numbat-kafka-0 kafka  08:17:18.78 Send us your feedback at containers@bitnami.com
mothy-numbat-kafka-0 kafka  08:17:18.78
mothy-numbat-kafka-0 kafka  08:17:18.78 INFO  ==> ** Starting Kafka setup **
mothy-numbat-kafka-0 kafka  08:17:18.82 WARN  ==> You set the environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a production environment.
mothy-numbat-kafka-0 kafka  08:17:18.83 INFO  ==> Initializing Kafka...
mothy-numbat-kafka-0 kafka  08:17:18.84 INFO  ==> No injected configuration files found, creating default config files
mothy-numbat-kafka-0 kafka
mothy-numbat-kafka-0 kafka  08:17:18.98 INFO  ==> ** Kafka setup finished! **
mothy-numbat-kafka-0 kafka  08:17:18.99 INFO  ==> ** Starting Kafka **
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,669] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,964] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,964] INFO starting (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,965] INFO Connecting to zookeeper on mothy-numbat-zookeeper (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,981] INFO [ZooKeeperClient Kafka server] Initializing a new session to mothy-numbat-zookeeper. (kafka.zookeeper.ZooKeeperClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,986] INFO Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,986] INFO Client environment:host.name=mothy-numbat-kafka-0.mothy-numbat-kafka-headless.default.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,986] INFO Client environment:java.version=1.8.0_232 (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,986] INFO Client environment:java.vendor=AdoptOpenJDK (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,986] INFO Client environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,986] INFO Client environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-basic-auth-extension-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-file-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-json-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/connect-transforms-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/guava-20.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-databind-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-dataformat-csv-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-datatype-jdk8-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-scala_2.11-2.9.9.jar:/opt/bitnami/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-common-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.28.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-continuation-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-http-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-io-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-security-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/jsr305-3.0.2.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-log4j-appender-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-examples-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_2.11-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-tools-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.11-2.3.0-sources.jar:/opt/bitnami/kafka
/bin/../libs/kafka_2.11-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kafka/bin/../libs/lz4-java-1.6.0.jar:/opt/bitnami/kafka/bin/../libs/maven-artifact-3.6.1.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bitnami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/kafka/bin/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.0.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.11.jar:/opt/bitnami/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/scala-library-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.11-3.9.0.jar:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/slf4j-api-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4j12-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/bitnami/kafka/bin/../libs/spotbugs-annotations-3.1.9.jar:/opt/bitnami/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../libs/zkclient-0.11.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.4.14.jar:/opt/bitnami/kafka/bin/../libs/zstd-jni-1.4.0-1.jar (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,987] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,987] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,987] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,987] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,987] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,987] INFO Client environment:os.version=4.9.0-11-amd64 (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,987] INFO Client environment:user.name=? (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,988] INFO Client environment:user.home=? (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,989] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:19,990] INFO Initiating client connection, connectString=mothy-numbat-zookeeper sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@7c7b252e (org.apache.zookeeper.ZooKeeper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,010] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,024] INFO Opening socket connection to server mothy-numbat-zookeeper/10.98.35.187:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,045] INFO Socket connection established to mothy-numbat-zookeeper/10.98.35.187:2181, initiating session (org.apache.zookeeper.ClientCnxn)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,073] INFO Session establishment complete on server mothy-numbat-zookeeper/10.98.35.187:2181, sessionid = 0x10003a4f0e90000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,077] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,361] INFO Cluster ID = _gfcJfV_SWuWajLmUK48Tg (kafka.server.KafkaServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,363] WARN No meta.properties file under dir /bitnami/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,430] INFO KafkaConfig values:
mothy-numbat-kafka-0 kafka  advertised.host.name = null
mothy-numbat-kafka-0 kafka  advertised.listeners = PLAINTEXT://mothy-numbat-kafka-0.mothy-numbat-kafka-headless.default.svc.cluster.local:9092
mothy-numbat-kafka-0 kafka  advertised.port = null
mothy-numbat-kafka-0 kafka  alter.config.policy.class.name = null
mothy-numbat-kafka-0 kafka  alter.log.dirs.replication.quota.window.num = 11
mothy-numbat-kafka-0 kafka  alter.log.dirs.replication.quota.window.size.seconds = 1
mothy-numbat-kafka-0 kafka  authorizer.class.name =
mothy-numbat-kafka-0 kafka  auto.create.topics.enable = true
mothy-numbat-kafka-0 kafka  auto.leader.rebalance.enable = true
mothy-numbat-kafka-0 kafka  background.threads = 10
mothy-numbat-kafka-0 kafka  broker.id = -1
mothy-numbat-kafka-0 kafka  broker.id.generation.enable = true
mothy-numbat-kafka-0 kafka  broker.rack = null
mothy-numbat-kafka-0 kafka  client.quota.callback.class = null
mothy-numbat-kafka-0 kafka  compression.type = producer
mothy-numbat-kafka-0 kafka  connection.failed.authentication.delay.ms = 100
mothy-numbat-kafka-0 kafka  connections.max.idle.ms = 600000
mothy-numbat-kafka-0 kafka  connections.max.reauth.ms = 0
mothy-numbat-kafka-0 kafka  control.plane.listener.name = null
mothy-numbat-kafka-0 kafka  controlled.shutdown.enable = true
mothy-numbat-kafka-0 kafka  controlled.shutdown.max.retries = 3
mothy-numbat-kafka-0 kafka  controlled.shutdown.retry.backoff.ms = 5000
mothy-numbat-kafka-0 kafka  controller.socket.timeout.ms = 30000
mothy-numbat-kafka-0 kafka  create.topic.policy.class.name = null
mothy-numbat-kafka-0 kafka  default.replication.factor = 1
mothy-numbat-kafka-0 kafka  delegation.token.expiry.check.interval.ms = 3600000
mothy-numbat-kafka-0 kafka  delegation.token.expiry.time.ms = 86400000
mothy-numbat-kafka-0 kafka  delegation.token.master.key = null
mothy-numbat-kafka-0 kafka  delegation.token.max.lifetime.ms = 604800000
mothy-numbat-kafka-0 kafka  delete.records.purgatory.purge.interval.requests = 1
mothy-numbat-kafka-0 kafka  delete.topic.enable = false
mothy-numbat-kafka-0 kafka  fetch.purgatory.purge.interval.requests = 1000
mothy-numbat-kafka-0 kafka  group.initial.rebalance.delay.ms = 0
mothy-numbat-kafka-0 kafka  group.max.session.timeout.ms = 1800000
mothy-numbat-kafka-0 kafka  group.max.size = 2147483647
mothy-numbat-kafka-0 kafka  group.min.session.timeout.ms = 6000
mothy-numbat-kafka-0 kafka  host.name =
mothy-numbat-kafka-0 kafka  inter.broker.listener.name = null
mothy-numbat-kafka-0 kafka  inter.broker.protocol.version = 2.3-IV1
mothy-numbat-kafka-0 kafka  kafka.metrics.polling.interval.secs = 10
mothy-numbat-kafka-0 kafka  kafka.metrics.reporters = []
mothy-numbat-kafka-0 kafka  leader.imbalance.check.interval.seconds = 300
mothy-numbat-kafka-0 kafka  leader.imbalance.per.broker.percentage = 10
mothy-numbat-kafka-0 kafka  listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
mothy-numbat-kafka-0 kafka  listeners = PLAINTEXT://:9092
mothy-numbat-kafka-0 kafka  log.cleaner.backoff.ms = 15000
mothy-numbat-kafka-0 kafka  log.cleaner.dedupe.buffer.size = 134217728
mothy-numbat-kafka-0 kafka  log.cleaner.delete.retention.ms = 86400000
mothy-numbat-kafka-0 kafka  log.cleaner.enable = true
mothy-numbat-kafka-0 kafka  log.cleaner.io.buffer.load.factor = 0.9
mothy-numbat-kafka-0 kafka  log.cleaner.io.buffer.size = 524288
mothy-numbat-kafka-0 kafka  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
mothy-numbat-kafka-0 kafka  log.cleaner.max.compaction.lag.ms = 9223372036854775807
mothy-numbat-kafka-0 kafka  log.cleaner.min.cleanable.ratio = 0.5
mothy-numbat-kafka-0 kafka  log.cleaner.min.compaction.lag.ms = 0
mothy-numbat-kafka-0 kafka  log.cleaner.threads = 1
mothy-numbat-kafka-0 kafka  log.cleanup.policy = [delete]
mothy-numbat-kafka-0 kafka  log.dir = /tmp/kafka-logs
mothy-numbat-kafka-0 kafka  log.dirs = /bitnami/kafka/data
mothy-numbat-kafka-0 kafka  log.flush.interval.messages = 10000
mothy-numbat-kafka-0 kafka  log.flush.interval.ms = 1000
mothy-numbat-kafka-0 kafka  log.flush.offset.checkpoint.interval.ms = 60000
mothy-numbat-kafka-0 kafka  log.flush.scheduler.interval.ms = 9223372036854775807
mothy-numbat-kafka-0 kafka  log.flush.start.offset.checkpoint.interval.ms = 60000
mothy-numbat-kafka-0 kafka  log.index.interval.bytes = 4096
mothy-numbat-kafka-0 kafka  log.index.size.max.bytes = 10485760
mothy-numbat-kafka-0 kafka  log.message.downconversion.enable = true
mothy-numbat-kafka-0 kafka  log.message.format.version = 2.3-IV1
mothy-numbat-kafka-0 kafka  log.message.timestamp.difference.max.ms = 9223372036854775807
mothy-numbat-kafka-0 kafka  log.message.timestamp.type = CreateTime
mothy-numbat-kafka-0 kafka  log.preallocate = false
mothy-numbat-kafka-0 kafka  log.retention.bytes = 1073741824
mothy-numbat-kafka-0 kafka  log.retention.check.interval.ms = 300000
mothy-numbat-kafka-0 kafka  log.retention.hours = 168
mothy-numbat-kafka-0 kafka  log.retention.minutes = null
mothy-numbat-kafka-0 kafka  log.retention.ms = null
mothy-numbat-kafka-0 kafka  log.roll.hours = 168
mothy-numbat-kafka-0 kafka  log.roll.jitter.hours = 0
mothy-numbat-kafka-0 kafka  log.roll.jitter.ms = null
mothy-numbat-kafka-0 kafka  log.roll.ms = null
mothy-numbat-kafka-0 kafka  log.segment.bytes = 1073741824
mothy-numbat-kafka-0 kafka  log.segment.delete.delay.ms = 60000
mothy-numbat-kafka-0 kafka  max.connections = 2147483647
mothy-numbat-kafka-0 kafka  max.connections.per.ip = 2147483647
mothy-numbat-kafka-0 kafka  max.connections.per.ip.overrides =
mothy-numbat-kafka-0 kafka  max.incremental.fetch.session.cache.slots = 1000
mothy-numbat-kafka-0 kafka  message.max.bytes = 1000012
mothy-numbat-kafka-0 kafka  metric.reporters = []
mothy-numbat-kafka-0 kafka  metrics.num.samples = 2
mothy-numbat-kafka-0 kafka  metrics.recording.level = INFO
mothy-numbat-kafka-0 kafka  metrics.sample.window.ms = 30000
mothy-numbat-kafka-0 kafka  min.insync.replicas = 1
mothy-numbat-kafka-0 kafka  num.io.threads = 8
mothy-numbat-kafka-0 kafka  num.network.threads = 3
mothy-numbat-kafka-0 kafka  num.partitions = 1
mothy-numbat-kafka-0 kafka  num.recovery.threads.per.data.dir = 1
mothy-numbat-kafka-0 kafka  num.replica.alter.log.dirs.threads = null
mothy-numbat-kafka-0 kafka  num.replica.fetchers = 1
mothy-numbat-kafka-0 kafka  offset.metadata.max.bytes = 4096
mothy-numbat-kafka-0 kafka  offsets.commit.required.acks = -1
mothy-numbat-kafka-0 kafka  offsets.commit.timeout.ms = 5000
mothy-numbat-kafka-0 kafka  offsets.load.buffer.size = 5242880
mothy-numbat-kafka-0 kafka  offsets.retention.check.interval.ms = 600000
mothy-numbat-kafka-0 kafka  offsets.retention.minutes = 10080
mothy-numbat-kafka-0 kafka  offsets.topic.compression.codec = 0
mothy-numbat-kafka-0 kafka  offsets.topic.num.partitions = 50
mothy-numbat-kafka-0 kafka  offsets.topic.replication.factor = 1
mothy-numbat-kafka-0 kafka  offsets.topic.segment.bytes = 104857600
mothy-numbat-kafka-0 kafka  password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
mothy-numbat-kafka-0 kafka  password.encoder.iterations = 4096
mothy-numbat-kafka-0 kafka  password.encoder.key.length = 128
mothy-numbat-kafka-0 kafka  password.encoder.keyfactory.algorithm = null
mothy-numbat-kafka-0 kafka  password.encoder.old.secret = null
mothy-numbat-kafka-0 kafka  password.encoder.secret = null
mothy-numbat-kafka-0 kafka  port = 9092
mothy-numbat-kafka-0 kafka  principal.builder.class = null
mothy-numbat-kafka-0 kafka  producer.purgatory.purge.interval.requests = 1000
mothy-numbat-kafka-0 kafka  queued.max.request.bytes = -1
mothy-numbat-kafka-0 kafka  queued.max.requests = 500
mothy-numbat-kafka-0 kafka  quota.consumer.default = 9223372036854775807
mothy-numbat-kafka-0 kafka  quota.producer.default = 9223372036854775807
mothy-numbat-kafka-0 kafka  quota.window.num = 11
mothy-numbat-kafka-0 kafka  quota.window.size.seconds = 1
mothy-numbat-kafka-0 kafka  replica.fetch.backoff.ms = 1000
mothy-numbat-kafka-0 kafka  replica.fetch.max.bytes = 1048576
mothy-numbat-kafka-0 kafka  replica.fetch.min.bytes = 1
mothy-numbat-kafka-0 kafka  replica.fetch.response.max.bytes = 10485760
mothy-numbat-kafka-0 kafka  replica.fetch.wait.max.ms = 500
mothy-numbat-kafka-0 kafka  replica.high.watermark.checkpoint.interval.ms = 5000
mothy-numbat-kafka-0 kafka  replica.lag.time.max.ms = 10000
mothy-numbat-kafka-0 kafka  replica.socket.receive.buffer.bytes = 65536
mothy-numbat-kafka-0 kafka  replica.socket.timeout.ms = 30000
mothy-numbat-kafka-0 kafka  replication.quota.window.num = 11
mothy-numbat-kafka-0 kafka  replication.quota.window.size.seconds = 1
mothy-numbat-kafka-0 kafka  request.timeout.ms = 30000
mothy-numbat-kafka-0 kafka  reserved.broker.max.id = 1000
mothy-numbat-kafka-0 kafka  sasl.client.callback.handler.class = null
mothy-numbat-kafka-0 kafka  sasl.enabled.mechanisms = [GSSAPI]
mothy-numbat-kafka-0 kafka  sasl.jaas.config = null
mothy-numbat-kafka-0 kafka  sasl.kerberos.kinit.cmd = /usr/bin/kinit
mothy-numbat-kafka-0 kafka  sasl.kerberos.min.time.before.relogin = 60000
mothy-numbat-kafka-0 kafka  sasl.kerberos.principal.to.local.rules = [DEFAULT]
mothy-numbat-kafka-0 kafka  sasl.kerberos.service.name = null
mothy-numbat-kafka-0 kafka  sasl.kerberos.ticket.renew.jitter = 0.05
mothy-numbat-kafka-0 kafka  sasl.kerberos.ticket.renew.window.factor = 0.8
mothy-numbat-kafka-0 kafka  sasl.login.callback.handler.class = null
mothy-numbat-kafka-0 kafka  sasl.login.class = null
mothy-numbat-kafka-0 kafka  sasl.login.refresh.buffer.seconds = 300
mothy-numbat-kafka-0 kafka  sasl.login.refresh.min.period.seconds = 60
mothy-numbat-kafka-0 kafka  sasl.login.refresh.window.factor = 0.8
mothy-numbat-kafka-0 kafka  sasl.login.refresh.window.jitter = 0.05
mothy-numbat-kafka-0 kafka  sasl.mechanism.inter.broker.protocol = GSSAPI
mothy-numbat-kafka-0 kafka  sasl.server.callback.handler.class = null
mothy-numbat-kafka-0 kafka  security.inter.broker.protocol = PLAINTEXT
mothy-numbat-kafka-0 kafka  socket.receive.buffer.bytes = 102400
mothy-numbat-kafka-0 kafka  socket.request.max.bytes = 104857600
mothy-numbat-kafka-0 kafka  socket.send.buffer.bytes = 102400
mothy-numbat-kafka-0 kafka  ssl.cipher.suites = []
mothy-numbat-kafka-0 kafka  ssl.client.auth = none
mothy-numbat-kafka-0 kafka  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
mothy-numbat-kafka-0 kafka  ssl.endpoint.identification.algorithm = https
mothy-numbat-kafka-0 kafka  ssl.key.password = null
mothy-numbat-kafka-0 kafka  ssl.keymanager.algorithm = SunX509
mothy-numbat-kafka-0 kafka  ssl.keystore.location = null
mothy-numbat-kafka-0 kafka  ssl.keystore.password = null
mothy-numbat-kafka-0 kafka  ssl.keystore.type = JKS
mothy-numbat-kafka-0 kafka  ssl.principal.mapping.rules = [DEFAULT]
mothy-numbat-kafka-0 kafka  ssl.protocol = TLS
mothy-numbat-kafka-0 kafka  ssl.provider = null
mothy-numbat-kafka-0 kafka  ssl.secure.random.implementation = null
mothy-numbat-kafka-0 kafka  ssl.trustmanager.algorithm = PKIX
mothy-numbat-kafka-0 kafka  ssl.truststore.location = null
mothy-numbat-kafka-0 kafka  ssl.truststore.password = null
mothy-numbat-kafka-0 kafka  ssl.truststore.type = JKS
mothy-numbat-kafka-0 kafka  transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
mothy-numbat-kafka-0 kafka  transaction.max.timeout.ms = 900000
mothy-numbat-kafka-0 kafka  transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
mothy-numbat-kafka-0 kafka  transaction.state.log.load.buffer.size = 5242880
mothy-numbat-kafka-0 kafka  transaction.state.log.min.isr = 1
mothy-numbat-kafka-0 kafka  transaction.state.log.num.partitions = 50
mothy-numbat-kafka-0 kafka  transaction.state.log.replication.factor = 1
mothy-numbat-kafka-0 kafka  transaction.state.log.segment.bytes = 104857600
mothy-numbat-kafka-0 kafka  transactional.id.expiration.ms = 604800000
mothy-numbat-kafka-0 kafka  unclean.leader.election.enable = false
mothy-numbat-kafka-0 kafka  zookeeper.connect = mothy-numbat-zookeeper
mothy-numbat-kafka-0 kafka  zookeeper.connection.timeout.ms = 6000
mothy-numbat-kafka-0 kafka  zookeeper.max.in.flight.requests = 10
mothy-numbat-kafka-0 kafka  zookeeper.session.timeout.ms = 6000
mothy-numbat-kafka-0 kafka  zookeeper.set.acl = false
mothy-numbat-kafka-0 kafka  zookeeper.sync.time.ms = 2000
mothy-numbat-kafka-0 kafka  (kafka.server.KafkaConfig)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,447] INFO KafkaConfig values:
mothy-numbat-kafka-0 kafka  advertised.host.name = null
mothy-numbat-kafka-0 kafka  advertised.listeners = PLAINTEXT://mothy-numbat-kafka-0.mothy-numbat-kafka-headless.default.svc.cluster.local:9092
mothy-numbat-kafka-0 kafka  advertised.port = null
mothy-numbat-kafka-0 kafka  alter.config.policy.class.name = null
mothy-numbat-kafka-0 kafka  alter.log.dirs.replication.quota.window.num = 11
mothy-numbat-kafka-0 kafka  alter.log.dirs.replication.quota.window.size.seconds = 1
mothy-numbat-kafka-0 kafka  authorizer.class.name =
mothy-numbat-kafka-0 kafka  auto.create.topics.enable = true
mothy-numbat-kafka-0 kafka  auto.leader.rebalance.enable = true
mothy-numbat-kafka-0 kafka  background.threads = 10
mothy-numbat-kafka-0 kafka  broker.id = -1
mothy-numbat-kafka-0 kafka  broker.id.generation.enable = true
mothy-numbat-kafka-0 kafka  broker.rack = null
mothy-numbat-kafka-0 kafka  client.quota.callback.class = null
mothy-numbat-kafka-0 kafka  compression.type = producer
mothy-numbat-kafka-0 kafka  connection.failed.authentication.delay.ms = 100
mothy-numbat-kafka-0 kafka  connections.max.idle.ms = 600000
mothy-numbat-kafka-0 kafka  connections.max.reauth.ms = 0
mothy-numbat-kafka-0 kafka  control.plane.listener.name = null
mothy-numbat-kafka-0 kafka  controlled.shutdown.enable = true
mothy-numbat-kafka-0 kafka  controlled.shutdown.max.retries = 3
mothy-numbat-kafka-0 kafka  controlled.shutdown.retry.backoff.ms = 5000
mothy-numbat-kafka-0 kafka  controller.socket.timeout.ms = 30000
mothy-numbat-kafka-0 kafka  create.topic.policy.class.name = null
mothy-numbat-kafka-0 kafka  default.replication.factor = 1
mothy-numbat-kafka-0 kafka  delegation.token.expiry.check.interval.ms = 3600000
mothy-numbat-kafka-0 kafka  delegation.token.expiry.time.ms = 86400000
mothy-numbat-kafka-0 kafka  delegation.token.master.key = null
mothy-numbat-kafka-0 kafka  delegation.token.max.lifetime.ms = 604800000
mothy-numbat-kafka-0 kafka  delete.records.purgatory.purge.interval.requests = 1
mothy-numbat-kafka-0 kafka  delete.topic.enable = false
mothy-numbat-kafka-0 kafka  fetch.purgatory.purge.interval.requests = 1000
mothy-numbat-kafka-0 kafka  group.initial.rebalance.delay.ms = 0
mothy-numbat-kafka-0 kafka  group.max.session.timeout.ms = 1800000
mothy-numbat-kafka-0 kafka  group.max.size = 2147483647
mothy-numbat-kafka-0 kafka  group.min.session.timeout.ms = 6000
mothy-numbat-kafka-0 kafka  host.name =
mothy-numbat-kafka-0 kafka  inter.broker.listener.name = null
mothy-numbat-kafka-0 kafka  inter.broker.protocol.version = 2.3-IV1
mothy-numbat-kafka-0 kafka  kafka.metrics.polling.interval.secs = 10
mothy-numbat-kafka-0 kafka  kafka.metrics.reporters = []
mothy-numbat-kafka-0 kafka  leader.imbalance.check.interval.seconds = 300
mothy-numbat-kafka-0 kafka  leader.imbalance.per.broker.percentage = 10
mothy-numbat-kafka-0 kafka  listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
mothy-numbat-kafka-0 kafka  listeners = PLAINTEXT://:9092
mothy-numbat-kafka-0 kafka  log.cleaner.backoff.ms = 15000
mothy-numbat-kafka-0 kafka  log.cleaner.dedupe.buffer.size = 134217728
mothy-numbat-kafka-0 kafka  log.cleaner.delete.retention.ms = 86400000
mothy-numbat-kafka-0 kafka  log.cleaner.enable = true
mothy-numbat-kafka-0 kafka  log.cleaner.io.buffer.load.factor = 0.9
mothy-numbat-kafka-0 kafka  log.cleaner.io.buffer.size = 524288
mothy-numbat-kafka-0 kafka  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
mothy-numbat-kafka-0 kafka  log.cleaner.max.compaction.lag.ms = 9223372036854775807
mothy-numbat-kafka-0 kafka  log.cleaner.min.cleanable.ratio = 0.5
mothy-numbat-kafka-0 kafka  log.cleaner.min.compaction.lag.ms = 0
mothy-numbat-kafka-0 kafka  log.cleaner.threads = 1
mothy-numbat-kafka-0 kafka  log.cleanup.policy = [delete]
mothy-numbat-kafka-0 kafka  log.dir = /tmp/kafka-logs
mothy-numbat-kafka-0 kafka  log.dirs = /bitnami/kafka/data
mothy-numbat-kafka-0 kafka  log.flush.interval.messages = 10000
mothy-numbat-kafka-0 kafka  log.flush.interval.ms = 1000
mothy-numbat-kafka-0 kafka  log.flush.offset.checkpoint.interval.ms = 60000
mothy-numbat-kafka-0 kafka  log.flush.scheduler.interval.ms = 9223372036854775807
mothy-numbat-kafka-0 kafka  log.flush.start.offset.checkpoint.interval.ms = 60000
mothy-numbat-kafka-0 kafka  log.index.interval.bytes = 4096
mothy-numbat-kafka-0 kafka  log.index.size.max.bytes = 10485760
mothy-numbat-kafka-0 kafka  log.message.downconversion.enable = true
mothy-numbat-kafka-0 kafka  log.message.format.version = 2.3-IV1
mothy-numbat-kafka-0 kafka  log.message.timestamp.difference.max.ms = 9223372036854775807
mothy-numbat-kafka-0 kafka  log.message.timestamp.type = CreateTime
mothy-numbat-kafka-0 kafka  log.preallocate = false
mothy-numbat-kafka-0 kafka  log.retention.bytes = 1073741824
mothy-numbat-kafka-0 kafka  log.retention.check.interval.ms = 300000
mothy-numbat-kafka-0 kafka  log.retention.hours = 168
mothy-numbat-kafka-0 kafka  log.retention.minutes = null
mothy-numbat-kafka-0 kafka  log.retention.ms = null
mothy-numbat-kafka-0 kafka  log.roll.hours = 168
mothy-numbat-kafka-0 kafka  log.roll.jitter.hours = 0
mothy-numbat-kafka-0 kafka  log.roll.jitter.ms = null
mothy-numbat-kafka-0 kafka  log.roll.ms = null
mothy-numbat-kafka-0 kafka  log.segment.bytes = 1073741824
mothy-numbat-kafka-0 kafka  log.segment.delete.delay.ms = 60000
mothy-numbat-kafka-0 kafka  max.connections = 2147483647
mothy-numbat-kafka-0 kafka  max.connections.per.ip = 2147483647
mothy-numbat-kafka-0 kafka  max.connections.per.ip.overrides =
mothy-numbat-kafka-0 kafka  max.incremental.fetch.session.cache.slots = 1000
mothy-numbat-kafka-0 kafka  message.max.bytes = 1000012
mothy-numbat-kafka-0 kafka  metric.reporters = []
mothy-numbat-kafka-0 kafka  metrics.num.samples = 2
mothy-numbat-kafka-0 kafka  metrics.recording.level = INFO
mothy-numbat-kafka-0 kafka  metrics.sample.window.ms = 30000
mothy-numbat-kafka-0 kafka  min.insync.replicas = 1
mothy-numbat-kafka-0 kafka  num.io.threads = 8
mothy-numbat-kafka-0 kafka  num.network.threads = 3
mothy-numbat-kafka-0 kafka  num.partitions = 1
mothy-numbat-kafka-0 kafka  num.recovery.threads.per.data.dir = 1
mothy-numbat-kafka-0 kafka  num.replica.alter.log.dirs.threads = null
mothy-numbat-kafka-0 kafka  num.replica.fetchers = 1
mothy-numbat-kafka-0 kafka  offset.metadata.max.bytes = 4096
mothy-numbat-kafka-0 kafka  offsets.commit.required.acks = -1
mothy-numbat-kafka-0 kafka  offsets.commit.timeout.ms = 5000
mothy-numbat-kafka-0 kafka  offsets.load.buffer.size = 5242880
mothy-numbat-kafka-0 kafka  offsets.retention.check.interval.ms = 600000
mothy-numbat-kafka-0 kafka  offsets.retention.minutes = 10080
mothy-numbat-kafka-0 kafka  offsets.topic.compression.codec = 0
mothy-numbat-kafka-0 kafka  offsets.topic.num.partitions = 50
mothy-numbat-kafka-0 kafka  offsets.topic.replication.factor = 1
mothy-numbat-kafka-0 kafka  offsets.topic.segment.bytes = 104857600
mothy-numbat-kafka-0 kafka  password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
mothy-numbat-kafka-0 kafka  password.encoder.iterations = 4096
mothy-numbat-kafka-0 kafka  password.encoder.key.length = 128
mothy-numbat-kafka-0 kafka  password.encoder.keyfactory.algorithm = null
mothy-numbat-kafka-0 kafka  password.encoder.old.secret = null
mothy-numbat-kafka-0 kafka  password.encoder.secret = null
mothy-numbat-kafka-0 kafka  port = 9092
mothy-numbat-kafka-0 kafka  principal.builder.class = null
mothy-numbat-kafka-0 kafka  producer.purgatory.purge.interval.requests = 1000
mothy-numbat-kafka-0 kafka  queued.max.request.bytes = -1
mothy-numbat-kafka-0 kafka  queued.max.requests = 500
mothy-numbat-kafka-0 kafka  quota.consumer.default = 9223372036854775807
mothy-numbat-kafka-0 kafka  quota.producer.default = 9223372036854775807
mothy-numbat-kafka-0 kafka  quota.window.num = 11
mothy-numbat-kafka-0 kafka  quota.window.size.seconds = 1
mothy-numbat-kafka-0 kafka  replica.fetch.backoff.ms = 1000
mothy-numbat-kafka-0 kafka  replica.fetch.max.bytes = 1048576
mothy-numbat-kafka-0 kafka  replica.fetch.min.bytes = 1
mothy-numbat-kafka-0 kafka  replica.fetch.response.max.bytes = 10485760
mothy-numbat-kafka-0 kafka  replica.fetch.wait.max.ms = 500
mothy-numbat-kafka-0 kafka  replica.high.watermark.checkpoint.interval.ms = 5000
mothy-numbat-kafka-0 kafka  replica.lag.time.max.ms = 10000
mothy-numbat-kafka-0 kafka  replica.socket.receive.buffer.bytes = 65536
mothy-numbat-kafka-0 kafka  replica.socket.timeout.ms = 30000
mothy-numbat-kafka-0 kafka  replication.quota.window.num = 11
mothy-numbat-kafka-0 kafka  replication.quota.window.size.seconds = 1
mothy-numbat-kafka-0 kafka  request.timeout.ms = 30000
mothy-numbat-kafka-0 kafka  reserved.broker.max.id = 1000
mothy-numbat-kafka-0 kafka  sasl.client.callback.handler.class = null
mothy-numbat-kafka-0 kafka  sasl.enabled.mechanisms = [GSSAPI]
mothy-numbat-kafka-0 kafka  sasl.jaas.config = null
mothy-numbat-kafka-0 kafka  sasl.kerberos.kinit.cmd = /usr/bin/kinit
mothy-numbat-kafka-0 kafka  sasl.kerberos.min.time.before.relogin = 60000
mothy-numbat-kafka-0 kafka  sasl.kerberos.principal.to.local.rules = [DEFAULT]
mothy-numbat-kafka-0 kafka  sasl.kerberos.service.name = null
mothy-numbat-kafka-0 kafka  sasl.kerberos.ticket.renew.jitter = 0.05
mothy-numbat-kafka-0 kafka  sasl.kerberos.ticket.renew.window.factor = 0.8
mothy-numbat-kafka-0 kafka  sasl.login.callback.handler.class = null
mothy-numbat-kafka-0 kafka  sasl.login.class = null
mothy-numbat-kafka-0 kafka  sasl.login.refresh.buffer.seconds = 300
mothy-numbat-kafka-0 kafka  sasl.login.refresh.min.period.seconds = 60
mothy-numbat-kafka-0 kafka  sasl.login.refresh.window.factor = 0.8
mothy-numbat-kafka-0 kafka  sasl.login.refresh.window.jitter = 0.05
mothy-numbat-kafka-0 kafka  sasl.mechanism.inter.broker.protocol = GSSAPI
mothy-numbat-kafka-0 kafka  sasl.server.callback.handler.class = null
mothy-numbat-kafka-0 kafka  security.inter.broker.protocol = PLAINTEXT
mothy-numbat-kafka-0 kafka  socket.receive.buffer.bytes = 102400
mothy-numbat-kafka-0 kafka  socket.request.max.bytes = 104857600
mothy-numbat-kafka-0 kafka  socket.send.buffer.bytes = 102400
mothy-numbat-kafka-0 kafka  ssl.cipher.suites = []
mothy-numbat-kafka-0 kafka  ssl.client.auth = none
mothy-numbat-kafka-0 kafka  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
mothy-numbat-kafka-0 kafka  ssl.endpoint.identification.algorithm = https
mothy-numbat-kafka-0 kafka  ssl.key.password = null
mothy-numbat-kafka-0 kafka  ssl.keymanager.algorithm = SunX509
mothy-numbat-kafka-0 kafka  ssl.keystore.location = null
mothy-numbat-kafka-0 kafka  ssl.keystore.password = null
mothy-numbat-kafka-0 kafka  ssl.keystore.type = JKS
mothy-numbat-kafka-0 kafka  ssl.principal.mapping.rules = [DEFAULT]
mothy-numbat-kafka-0 kafka  ssl.protocol = TLS
mothy-numbat-kafka-0 kafka  ssl.provider = null
mothy-numbat-kafka-0 kafka  ssl.secure.random.implementation = null
mothy-numbat-kafka-0 kafka  ssl.trustmanager.algorithm = PKIX
mothy-numbat-kafka-0 kafka  ssl.truststore.location = null
mothy-numbat-kafka-0 kafka  ssl.truststore.password = null
mothy-numbat-kafka-0 kafka  ssl.truststore.type = JKS
mothy-numbat-kafka-0 kafka  transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
mothy-numbat-kafka-0 kafka  transaction.max.timeout.ms = 900000
mothy-numbat-kafka-0 kafka  transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
mothy-numbat-kafka-0 kafka  transaction.state.log.load.buffer.size = 5242880
mothy-numbat-kafka-0 kafka  transaction.state.log.min.isr = 1
mothy-numbat-kafka-0 kafka  transaction.state.log.num.partitions = 50
mothy-numbat-kafka-0 kafka  transaction.state.log.replication.factor = 1
mothy-numbat-kafka-0 kafka  transaction.state.log.segment.bytes = 104857600
mothy-numbat-kafka-0 kafka  transactional.id.expiration.ms = 604800000
mothy-numbat-kafka-0 kafka  unclean.leader.election.enable = false
mothy-numbat-kafka-0 kafka  zookeeper.connect = mothy-numbat-zookeeper
mothy-numbat-kafka-0 kafka  zookeeper.connection.timeout.ms = 6000
mothy-numbat-kafka-0 kafka  zookeeper.max.in.flight.requests = 10
mothy-numbat-kafka-0 kafka  zookeeper.session.timeout.ms = 6000
mothy-numbat-kafka-0 kafka  zookeeper.set.acl = false
mothy-numbat-kafka-0 kafka  zookeeper.sync.time.ms = 2000
mothy-numbat-kafka-0 kafka  (kafka.server.KafkaConfig)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,477] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,477] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,480] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,509] INFO Loading logs. (kafka.log.LogManager)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,515] INFO Logs loading complete in 6 ms. (kafka.log.LogManager)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,527] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,528] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,902] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,942] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,944] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,965] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,967] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,967] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,969] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:20,983] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,014] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,038] INFO Stat of the created znode at /brokers/ids/1001 is: 25,25,1571991441029,1571991441029,1,0,0,72061600989249536,320,0,25
mothy-numbat-kafka-0 kafka  (kafka.zk.KafkaZkClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,039] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(mothy-numbat-kafka-0.mothy-numbat-kafka-headless.default.svc.cluster.local,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,040] WARN No meta.properties file under dir /bitnami/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,089] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,091] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,093] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,107] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,109] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,110] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,112] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,129] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,153] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,155] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,156] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,186] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,204] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,237] INFO Kafka version: 2.3.0 (org.apache.kafka.common.utils.AppInfoParser)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,239] INFO Kafka commitId: fc1aaa116b661c8a (org.apache.kafka.common.utils.AppInfoParser)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,240] INFO Kafka startTimeMs: 1571991441206 (org.apache.kafka.common.utils.AppInfoParser)
mothy-numbat-kafka-0 kafka [2019-10-25 08:17:21,244] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
ghost commented 5 years ago

My issue has nothing to do with whether the Kafka pods become ready or not. Did you run through my steps to reproduce? I have a helm install command that sets the variable I am talking about (you may need to add --version 6.1.2 to be precise), and I have listed the commands to exec into the pod and navigate to the directory containing the console commands. It looks like all you've done here is dump the Kafka logs?
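
For clarity, pinning the chart version would make the install step something like:

helm install --name kafka bitnami/kafka --version 6.1.2 --set metrics.jmx.enabled=true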

javsalgar commented 5 years ago

Sorry, I thought that the readiness probe of the container would fail. I was able to reproduce the issue. I think we should default JMX_PORT to a value that does not cause conflicts. In the meantime, I was able to make it work with the following:

I have no name!@kafka-0:/$ export JMX_PORT=5557
I have no name!@kafka-0:/$ kafka-topics.sh --bootstrap-server localhost:9092 --list
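
For context on why this happens: the Kafka CLI scripts read the same JMX_PORT environment variable that the chart exports for the broker JVM, so the tool's JVM tries to bind the port the broker already owns. The logic in kafka-run-class.sh is roughly the following (a paraphrase from memory; the exact snippet may vary between Kafka versions):

# kafka-run-class.sh (sketch): when JMX_PORT is set, every tool launched
# through this script also opens a remote JMX/RMI listener on that port
if [ $JMX_PORT ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
fi

Exporting a different, unused port (as above), or unsetting the variable for the command, avoids the clash.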
ghost commented 5 years ago

Ah gotcha. Interestingly, setting the jmxPort value for the chart to a non-default value doesn't work.

We have also attempted to change jmxPort to something other that the default port, but no dice.

For example:

helm install -n kafka --namespace kafka bitnami/kafka --set metrics.jmx.enabled=true --set metrics.jmx.jmxPort=5557

Noted that you can modify the port within the container in a number of ways as a workaround, though none of them is ideal. I believe you can also do

unset JMX_PORT

or

JMX_PORT= ./kafka-topics.sh --bootstrap-server ......
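
From outside the pod, the same workaround can be wrapped into a single kubectl exec call, e.g. (a sketch; the pod name is a placeholder, adjust to your release):

kubectl exec -ti <kafka-pod> -- /bin/bash -c 'unset JMX_PORT; /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list'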
benedikt-haug commented 4 years ago

Just had the same issue.

javsalgar commented 4 years ago

Hi @gna582. Did the workaround shown here work for you?

benedikt-haug commented 4 years ago

Yes, it worked:

kubectl exec -ti kafka-6 -n kafka -c kafka-broker -- /bin/bash -c 'unset JMX_PORT; kafka-configs ...
javsalgar commented 4 years ago

Ok! Good to know! We will update the ticket when we have more news about the fix.

navneet066 commented 4 years ago

We are also facing the same issue.

Error: JMX connector server communication error: service:jmx:rmi://kafka-jmx-2:5556
jdk.internal.agent.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 5556; nested exception is: 
    java.net.BindException: Address already in use (Bind failed)
    at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:820)
    at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:479)
    at jdk.management.agent/jdk.internal.agent.Agent.startAgent(Agent.java:447)
    at jdk.management.agent/jdk.internal.agent.Agent.startAgent(Agent.java:599)
Caused by: java.rmi.server.ExportException: Port already in use: 5556; nested exception is: 
    java.net.BindException: Address already in use (Bind failed)
    at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
    at java.rmi/sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:243)
    at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:412)
    at java.rmi/sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
    at java.rmi/sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:234)
    at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap$PermanentExporter.exportObject(ConnectorBootstrap.java:203)
    at java.management.rmi/javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:153)
    at java.management.rmi/javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:138)
    at java.management.rmi/javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:473)
    at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:816)
    ... 3 more
Caused by: java.net.BindException: Address already in use (Bind failed)
    at java.base/java.net.PlainSocketImpl.socketBind(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:436)
    at java.base/java.net.ServerSocket.bind(ServerSocket.java:395)
    at java.base/java.net.ServerSocket.<init>(ServerSocket.java:257)
    at java.base/java.net.ServerSocket.<init>(ServerSocket.java:149)
    at java.rmi/sun.rmi.transport.tcp.TCPDirectSocketFactory.createServerSocket(TCPDirectSocketFactory.java:45)
    at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:670)
    at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:324)

Any resolution for this?

javsalgar commented 4 years ago

Hi,

I'm afraid it is still in our backlog. Did the workaround shown here work for you? https://github.com/bitnami/charts/issues/1522#issuecomment-547315624

domq commented 2 years ago

I have the same problem with the KAFKA_OPTS environment variable, which in my rig is set to -javaagent:/usr/app/jmx_prometheus_javaagent.jar=9000:/dev/null. The workaround that works for me is

env KAFKA_OPTS= kafka-acls

To summarize, there appear to be a number of ways to set environment variables intended for the “main” Kafka process, and they end up causing trouble when running shell commands inside the same container. Perhaps the docs should recommend a more targeted way to get options into the Kafka broker, and nowhere else?
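
In the meantime, clearing both variables just for the invocation covers all the conflicts mentioned in this thread, e.g. (a sketch combining the workarounds above; swap in whichever tool and arguments you need):

env KAFKA_OPTS= JMX_PORT= kafka-topics.sh --bootstrap-server localhost:9092 --list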

rafariossaa commented 2 years ago

Hi, I am happy the workaround worked for you.

Do you mean the chart documentation? If so, please feel free to send a PR adding whatever instructions you consider appropriate to the README.

carrodher commented 2 years ago

Unfortunately, this issue was created a long time ago and, although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. This is not for a technical reason but rather a matter of capacity, since we're a small team.

That being said, contributions via PRs are more than welcome in both repositories (containers and charts), in case you would like to contribute.

Since then, there have been several releases of this asset, and it's possible the issue has been resolved as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate it.