bitnami / charts

Bitnami Helm Charts
https://bitnami.com

No kafka-server-start.sh file #9370

Closed: UmfintechWtc closed this issue 2 years ago

UmfintechWtc commented 2 years ago

Name and Version

bitnami/kafka:2.8.1-debian-10-r0

What steps will reproduce the bug?

Install the chart with Helm 3 (helm3 install kafka). Startup then fails with the following error:

kafka 10:44:53.30 INFO ==> Kafka setup finished!
kafka 10:44:53.32 INFO ==> Starting Kafka
error: exec: "/opt/bitnami/kafka/bin/kafka-server-start.sh": stat /opt/bitnami/kafka/bin/kafka-server-start.sh: no such file or directory

What should I do?
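For reference, a minimal command that would pin this exact image tag is sketched below (the reporter's exact command was not shared, so this is an assumption based on the chart's image.tag parameter used later in this thread):

    helm install kafka bitnami/kafka --set image.tag=2.8.1-debian-10-r0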

Are you using any custom parameters or values?

No response

What is the expected behavior?

No response

What do you see instead?

kafka 10:44:53.30 INFO ==> Kafka setup finished!
kafka 10:44:53.32 INFO ==> Starting Kafka
error: exec: "/opt/bitnami/kafka/bin/kafka-server-start.sh": stat /opt/bitnami/kafka/bin/kafka-server-start.sh: no such file or directory

Additional information

No response

javsalgar commented 2 years ago

Hi,

I was unable to reproduce the issue:

❯ helm install kafka bitnami/kafka --set image.tag=2
NAME: kafka
LAST DEPLOYED: Fri Mar 11 11:53:34 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 15.3.6
APP VERSION: 3.1.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.default.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.default.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:2 --namespace default --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace default -- bash

    PRODUCER:
        kafka-console-producer.sh \

            --broker-list kafka-0.kafka-headless.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \

            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
WARNING: Rolling tag detected (bitnami/kafka:2), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
 ~                                                                                11:53:34  jsalmeron
❯ kubectl get pods -w
NAME                                         READY   STATUS              RESTARTS   AGE
kafka-0                                      0/1     ContainerCreating   0          5s
kafka-zookeeper-0                            0/1     ContainerCreating   0          5s
psql-postgresql-ha-pgpool-54db9569b4-gjjxf   1/1     Running             0          57m
psql-postgresql-ha-postgresql-0              1/1     Running             0          57m
psql-postgresql-ha-postgresql-1              1/1     Running             0          57m
psql-postgresql-ha-postgresql-2              1/1     Running             0          57m
^C ~                                                                              11:53:40  jsalmeron
 1 ❯ kubectl get pods -w
NAME                                         READY   STATUS              RESTARTS   AGE
kafka-0                                      0/1     ContainerCreating   0          17s
kafka-zookeeper-0                            0/1     ContainerCreating   0          17s
psql-postgresql-ha-pgpool-54db9569b4-gjjxf   1/1     Running             0          57m
psql-postgresql-ha-postgresql-0              1/1     Running             0          57m
psql-postgresql-ha-postgresql-1              1/1     Running             0          57m
psql-postgresql-ha-postgresql-2              1/1     Running             0          57m
kafka-zookeeper-0                            0/1     Running             0          21s
^C ~                                                                              11:53:58  jsalmeron
 1 ❯ kubectl get pods -w
NAME                                         READY   STATUS              RESTARTS   AGE
kafka-0                                      0/1     ContainerCreating   0          25s
kafka-zookeeper-0                            0/1     Running             0          25s
psql-postgresql-ha-pgpool-54db9569b4-gjjxf   1/1     Running             0          57m
psql-postgresql-ha-postgresql-0              1/1     Running             0          57m
psql-postgresql-ha-postgresql-1              1/1     Running             0          57m
psql-postgresql-ha-postgresql-2              1/1     Running             0          57m
^C ~                                                                              11:54:02  jsalmeron
 1 ❯ kubectl get pods -w
NAME                                         READY   STATUS              RESTARTS   AGE
kafka-0                                      0/1     ContainerCreating   0          29s
kafka-zookeeper-0                            0/1     Running             0          29s
psql-postgresql-ha-pgpool-54db9569b4-gjjxf   1/1     Running             0          57m
psql-postgresql-ha-postgresql-0              1/1     Running             0          57m
psql-postgresql-ha-postgresql-1              1/1     Running             0          57m
psql-postgresql-ha-postgresql-2              1/1     Running             0          57m
kafka-zookeeper-0                            1/1     Running             0          32s
kafka-0                                      0/1     Running             0          42s
kafka-0                                      1/1     Running             0          52s
^C ~                                                                              11:55:42  jsalmeron
 1 ❯ kubectl get pods
NAME                                         READY   STATUS    RESTARTS   AGE
kafka-0                                      1/1     Running   0          2m11s
kafka-zookeeper-0                            1/1     Running   0          2m11s
psql-postgresql-ha-pgpool-54db9569b4-gjjxf   1/1     Running   0          59m
psql-postgresql-ha-postgresql-0              1/1     Running   0          59m
psql-postgresql-ha-postgresql-1              1/1     Running   0          59m
psql-postgresql-ha-postgresql-2              1/1     Running   0          59m
 ~                                                                                11:55:45  jsalmeron
❯ kubectl logs kafka-0
kafka 10:54:15.21
kafka 10:54:15.21 Welcome to the Bitnami kafka container
kafka 10:54:15.21 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
kafka 10:54:15.21 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
kafka 10:54:15.21
kafka 10:54:15.21 INFO  ==> ** Starting Kafka setup **
kafka 10:54:15.29 WARN  ==> You set the environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a production environment.
kafka 10:54:15.30 INFO  ==> Initializing Kafka...
kafka 10:54:15.31 INFO  ==> No injected configuration files found, creating default config files
kafka 10:54:15.54 INFO  ==> Configuring Kafka for inter-broker communications with PLAINTEXT authentication.
kafka 10:54:15.54 WARN  ==> Inter-broker communications are configured as PLAINTEXT. This is not safe for production environments.
kafka 10:54:15.55 INFO  ==> Configuring Kafka for client communications with PLAINTEXT authentication.
kafka 10:54:15.55 WARN  ==> Client communications are configured using PLAINTEXT listeners. For safety reasons, do not use this in a production environment.
kafka 10:54:15.56 INFO  ==> ** Kafka setup finished! **

kafka 10:54:15.58 INFO  ==> ** Starting Kafka **
[2022-03-11 10:54:16,358] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2022-03-11 10:54:16,848] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2022-03-11 10:54:16,941] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2022-03-11 10:54:16,944] INFO starting (kafka.server.KafkaServer)
[2022-03-11 10:54:16,945] INFO Connecting to zookeeper on kafka-zookeeper (kafka.server.KafkaServer)
[2022-03-11 10:54:16,967] INFO [ZooKeeperClient Kafka server] Initializing a new session to kafka-zookeeper. (kafka.zookeeper.ZooKeeperClient)
[2022-03-11 10:54:16,972] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,972] INFO Client environment:host.name=kafka-0.kafka-headless.default.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,972] INFO Client environment:java.version=11.0.14 (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,972] INFO Client environment:java.vendor=BellSoft (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,972] INFO Client environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-cli-1.4.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-basic-auth-extension-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-file-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-json-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-client-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-transforms-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/hk2-locator-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/hk2-utils-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-databind-2.10.5.1.jar:/opt/bitnami/kafka/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-scala_2.12-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bin/../libs/jakarta.annotation-api-1.3.5.jar:/opt/bitnami/kafka/bin/../libs/jakarta.inject-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/jakarta.validation-api-2.0.2.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.27.0-GA.jar:/opt/bitnami/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/jersey-client-2.34.jar:/opt/bitnami/kafka/bin/../libs/jersey-common-2.34.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.34.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.34.jar:/opt/bitnami/kafka/bin/../libs/jersey-hk2-2.34.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.34.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-continuation-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-http-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-io-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-security-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-ajax-9.4.43.v20210629.jar:/opt/bitnami/kafka/bin/../libs/jline-3.12.1.jar:/opt/bitnami/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-log4j-appender-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-metadata-2.8.1.jar:/opt/bitnami/kafka
/bin/../libs/kafka-raft-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-shell-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-examples-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_2.12-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-tools-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.8.1-sources.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.8.1.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/bitnami/kafka/bin/../libs/maven-artifact-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bitnami/kafka/bin/../libs/netty-buffer-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-codec-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-common-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-handler-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-resolver-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-epoll-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-unix-common-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/osgi-resource-locator-1.0.3.jar:/opt/bitnami/kafka/bin/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.12.jar:/opt/bitnami/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/bitnami/kafka/bin/../libs/scala-collection-compat_2.12-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/opt/bitnami/kafka/bin/../libs/scala-library-2.12.13.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.12.13.jar:/opt/bitnami/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.8.1.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.5.9.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-jute-3.5.9.jar:/opt/bitnami/kafka/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:os.version=4.19.182 (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:user.name=? (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:user.home=? (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,973] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,974] INFO Client environment:os.memory.free=1011MB (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,974] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,974] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,976] INFO Initiating client connection, connectString=kafka-zookeeper sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@56f0cc85 (org.apache.zookeeper.ZooKeeper)
[2022-03-11 10:54:16,980] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2022-03-11 10:54:16,985] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2022-03-11 10:54:16,987] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2022-03-11 10:54:16,994] INFO Opening socket connection to server kafka-zookeeper/10.108.227.173:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2022-03-11 10:54:17,001] INFO Socket connection established, initiating session, client: /10.0.0.107:54890, server: kafka-zookeeper/10.108.227.173:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-11 10:54:17,018] INFO Session establishment complete on server kafka-zookeeper/10.108.227.173:2181, sessionid = 0x100003f7fd10000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2022-03-11 10:54:17,024] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-03-11 10:54:17,137] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2022-03-11 10:54:17,147] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2022-03-11 10:54:17,147] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2022-03-11 10:54:17,279] INFO Cluster ID = ryRIlZJsSGuCYvQhZAZHKQ (kafka.server.KafkaServer)
[2022-03-11 10:54:17,281] WARN No meta.properties file under dir /bitnami/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2022-03-11 10:54:17,361] INFO KafkaConfig values:
    advertised.host.name = null
    advertised.listeners = INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9093,CLIENT://kafka-0.kafka-headless.default.svc.cluster.local:9092
    advertised.port = null
    alter.config.policy.class.name = null
    alter.log.dirs.replication.quota.window.num = 11
    alter.log.dirs.replication.quota.window.size.seconds = 1
    authorizer.class.name =
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.heartbeat.interval.ms = 2000
    broker.id = 0
    broker.id.generation.enable = true
    broker.rack = null
    broker.session.timeout.ms = 9000
    client.quota.callback.class = null
    compression.type = producer
    connection.failed.authentication.delay.ms = 100
    connections.max.idle.ms = 600000
    connections.max.reauth.ms = 0
    control.plane.listener.name = null
    controlled.shutdown.enable = true
    controlled.shutdown.max.retries = 3
    controlled.shutdown.retry.backoff.ms = 5000
    controller.listener.names = null
    controller.quorum.append.linger.ms = 25
    controller.quorum.election.backoff.max.ms = 1000
    controller.quorum.election.timeout.ms = 1000
    controller.quorum.fetch.timeout.ms = 2000
    controller.quorum.request.timeout.ms = 2000
    controller.quorum.retry.backoff.ms = 20
    controller.quorum.voters = []
    controller.quota.window.num = 11
    controller.quota.window.size.seconds = 1
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delegation.token.expiry.check.interval.ms = 3600000
    delegation.token.expiry.time.ms = 86400000
    delegation.token.master.key = null
    delegation.token.max.lifetime.ms = 604800000
    delegation.token.secret.key = null
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = false
    fetch.max.bytes = 57671680
    fetch.purgatory.purge.interval.requests = 1000
    group.initial.rebalance.delay.ms = 0
    group.max.session.timeout.ms = 1800000
    group.max.size = 2147483647
    group.min.session.timeout.ms = 6000
    host.name =
    initial.broker.registration.timeout.ms = 60000
    inter.broker.listener.name = INTERNAL
    inter.broker.protocol.version = 2.8-IV1
    kafka.metrics.polling.interval.secs = 10
    kafka.metrics.reporters = []
    leader.imbalance.check.interval.seconds = 300
    leader.imbalance.per.broker.percentage = 10
    listener.security.protocol.map = INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT
    listeners = INTERNAL://:9093,CLIENT://:9092
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 134217728
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.max.compaction.lag.ms = 9223372036854775807
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dirs = /bitnami/kafka/data
    log.flush.interval.messages = 10000
    log.flush.interval.ms = 1000
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.message.downconversion.enable = true
    log.message.format.version = 2.8-IV1
    log.message.timestamp.difference.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = 1073741824
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connection.creation.rate = 2147483647
    max.connections = 2147483647
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides =
    max.incremental.fetch.session.cache.slots = 1000
    message.max.bytes = 1000012
    metadata.log.dir = null
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    node.id = -1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.alter.log.dirs.threads = null
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.required.acks = -1
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 10080
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
    password.encoder.iterations = 4096
    password.encoder.key.length = 128
    password.encoder.keyfactory.algorithm = null
    password.encoder.old.secret = null
    password.encoder.secret = null
    port = 9092
    principal.builder.class = null
    process.roles = []
    producer.purgatory.purge.interval.requests = 1000
    queued.max.request.bytes = -1
    queued.max.requests = 500
    quota.consumer.default = 9223372036854775807
    quota.producer.default = 9223372036854775807
    quota.window.num = 11
    quota.window.size.seconds = 1
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 30000
    replica.selector.class = null
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    sasl.client.callback.handler.class = null
    sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism.controller.protocol = GSSAPI
    sasl.mechanism.inter.broker.protocol =
    sasl.server.callback.handler.class = null
    security.inter.broker.protocol = PLAINTEXT
    security.providers = null
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.cipher.suites = []
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.principal.mapping.rules = DEFAULT
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
    transaction.max.timeout.ms = 900000
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 1
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 1
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    zookeeper.clientCnxnSocket = null
    zookeeper.connect = kafka-zookeeper
    zookeeper.connection.timeout.ms = 6000
    zookeeper.max.in.flight.requests = 10
    zookeeper.session.timeout.ms = 18000
    zookeeper.set.acl = false
    zookeeper.ssl.cipher.suites = null
    zookeeper.ssl.client.enable = false
    zookeeper.ssl.crl.enable = false
    zookeeper.ssl.enabled.protocols = null
    zookeeper.ssl.endpoint.identification.algorithm = HTTPS
    zookeeper.ssl.keystore.location = null
    zookeeper.ssl.keystore.password = null
    zookeeper.ssl.keystore.type = null
    zookeeper.ssl.ocsp.enable = false
    zookeeper.ssl.protocol = TLSv1.2
    zookeeper.ssl.truststore.location = null
    zookeeper.ssl.truststore.password = null
    zookeeper.ssl.truststore.type = null
    zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2022-03-11 10:54:17,406] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2022-03-11 10:54:17,407] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2022-03-11 10:54:17,409] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2022-03-11 10:54:17,410] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2022-03-11 10:54:17,446] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
[2022-03-11 10:54:17,449] INFO Attempting recovery for all logs in /bitnami/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
[2022-03-11 10:54:17,461] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager)
[2022-03-11 10:54:17,462] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2022-03-11 10:54:17,464] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2022-03-11 10:54:17,474] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2022-03-11 10:54:17,653] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2022-03-11 10:54:18,117] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2022-03-11 10:54:18,123] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
[2022-03-11 10:54:18,181] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(INTERNAL) (kafka.network.SocketServer)
[2022-03-11 10:54:18,182] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2022-03-11 10:54:18,182] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2022-03-11 10:54:18,190] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CLIENT) (kafka.network.SocketServer)
[2022-03-11 10:54:18,217] INFO [broker-0-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
[2022-03-11 10:54:18,236] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,238] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,241] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,244] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,258] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2022-03-11 10:54:18,278] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2022-03-11 10:54:18,300] INFO Stat of the created znode at /brokers/ids/0 is: 25,25,1646996058293,1646996058293,1,0,0,72057866765271040,364,0,25
 (kafka.zk.KafkaZkClient)
[2022-03-11 10:54:18,301] INFO Registered broker 0 at path /brokers/ids/0 with addresses: INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9093,CLIENT://kafka-0.kafka-headless.default.svc.cluster.local:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
[2022-03-11 10:54:18,345] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2022-03-11 10:54:18,355] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,362] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,363] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,365] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2022-03-11 10:54:18,373] INFO [Controller id=0] 0 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
[2022-03-11 10:54:18,376] INFO [Controller id=0] Creating FeatureZNode at path: /feature with contents: FeatureZNode(Enabled,Features{}) (kafka.controller.KafkaController)
[2022-03-11 10:54:18,379] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
[2022-03-11 10:54:18,383] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2022-03-11 10:54:18,391] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2022-03-11 10:54:18,403] INFO Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2022-03-11 10:54:18,403] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController)
[2022-03-11 10:54:18,407] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController)
[2022-03-11 10:54:18,410] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController)
[2022-03-11 10:54:18,413] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController)
[2022-03-11 10:54:18,425] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2022-03-11 10:54:18,426] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2022-03-11 10:54:18,430] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2022-03-11 10:54:18,433] INFO [Controller id=0] Initialized broker epochs cache: Map(0 -> 25) (kafka.controller.KafkaController)
[2022-03-11 10:54:18,438] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2022-03-11 10:54:18,441] DEBUG [Controller id=0] Register BrokerModifications handler for Set(0) (kafka.controller.KafkaController)
[2022-03-11 10:54:18,453] DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager)
[2022-03-11 10:54:18,469] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-03-11 10:54:18,470] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController)
[2022-03-11 10:54:18,471] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread)
[2022-03-11 10:54:18,472] INFO [Controller id=0] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController)
[2022-03-11 10:54:18,476] INFO [Controller id=0] Current list of topics in the cluster: Set() (kafka.controller.KafkaController)
[2022-03-11 10:54:18,477] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController)
[2022-03-11 10:54:18,482] INFO [Controller id=0] List of topics to be deleted:  (kafka.controller.KafkaController)
[2022-03-11 10:54:18,483] INFO [Controller id=0] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
[2022-03-11 10:54:18,483] INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController)
[2022-03-11 10:54:18,484] INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: Set() (kafka.controller.TopicDeletionManager)
[2022-03-11 10:54:18,484] INFO [Topic Deletion Manager 0] Removing Set() since delete topic is disabled (kafka.controller.TopicDeletionManager)
[2022-03-11 10:54:18,485] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController)
[2022-03-11 10:54:18,489] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers Set(0) for 0 partitions (state.change.logger)
[2022-03-11 10:54:18,490] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2022-03-11 10:54:18,503] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
[2022-03-11 10:54:18,503] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2022-03-11 10:54:18,503] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
[2022-03-11 10:54:18,507] INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
[2022-03-11 10:54:18,507] DEBUG [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> Map() (kafka.controller.ZkReplicaStateMachine)
[2022-03-11 10:54:18,508] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(INTERNAL) (kafka.network.SocketServer)
[2022-03-11 10:54:18,510] INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
[2022-03-11 10:54:18,512] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(CLIENT) (kafka.network.SocketServer)
[2022-03-11 10:54:18,513] INFO [RequestSendThread controllerId=0] Controller 0 connected to kafka-0.kafka-headless.default.svc.cluster.local:9093 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
[2022-03-11 10:54:18,515] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2022-03-11 10:54:18,528] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
[2022-03-11 10:54:18,532] INFO Kafka version: 2.8.1 (org.apache.kafka.common.utils.AppInfoParser)
[2022-03-11 10:54:18,533] INFO Kafka commitId: 839b886f9b732b15 (org.apache.kafka.common.utils.AppInfoParser)
[2022-03-11 10:54:18,533] INFO Kafka startTimeMs: 1646996058515 (org.apache.kafka.common.utils.AppInfoParser)
[2022-03-11 10:54:18,534] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2022-03-11 10:54:18,539] DEBUG [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> Map() (kafka.controller.ZkPartitionStateMachine)
[2022-03-11 10:54:18,540] INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
[2022-03-11 10:54:18,551] INFO [Controller id=0] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
[2022-03-11 10:54:18,552] INFO [Controller id=0] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
[2022-03-11 10:54:18,552] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
[2022-03-11 10:54:18,553] INFO [Controller id=0] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
[2022-03-11 10:54:18,554] INFO [Controller id=0] Starting replica leader election (PREFERRED) for partitions  triggered by ZkTriggered (kafka.controller.KafkaController)
[2022-03-11 10:54:18,572] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController)
[2022-03-11 10:54:18,654] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker kafka-0.kafka-headless.default.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2022-03-11 10:54:23,573] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2022-03-11 10:54:23,574] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
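If the error still shows up on your side, one way to confirm whether the script actually ships in the image being pulled is to start a throwaway pod from the same image and list the bin directory, reusing the client-pod pattern from the chart notes above (the pod name kafka-check and the reported tag are only for illustration):

    kubectl run kafka-check --rm -it --restart='Never' --image docker.io/bitnami/kafka:2.8.1-debian-10-r0 --namespace default --command -- ls -l /opt/bitnami/kafka/bin/

If kafka-server-start.sh appears in that listing but the broker pod still fails, the node is probably running a different (for example, cached) image than the one just tested.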
github-actions[bot] commented 2 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 2 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.