wurstmeister / kafka-docker

Dockerfile for Apache Kafka
http://wurstmeister.github.io/kafka-docker/
Apache License 2.0

Starting multiple Kafka containers in docker-compose simultaneously loops in "waiting for kafka to be ready" #567

Closed: raginjason closed this issue 4 years ago

raginjason commented 4 years ago

When executing docker-compose up --no-recreate -d --scale kafka=3, the Kafka containers appear to get stuck, repeatedly printing "waiting for kafka to be ready" to the log. There is a lot going on in the log, but I did notice something that may be a hint:

kafka_3      | 2020-01-29T00:06:13.117511100Z java.lang.IllegalStateException: Epoch 62 larger than current broker epoch 61
kafka_3      | 2020-01-29T00:06:13.119874000Z java.lang.IllegalStateException: Epoch 63 larger than current broker epoch 61

I suspect this is a race condition, but I'm really not sure. Here are my docker-compose.yml and log files:
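For anyone triaging the same symptom: the IllegalStateException lines can be counted straight out of the captured compose output. A minimal sketch, assuming the logs were saved to a local file; the compose.log name and the two-line excerpt below are placeholders, not output from my actual run:

```shell
# Stand-in excerpt of the captured output (in practice something like:
#   docker-compose logs --no-color > compose.log).
cat > compose.log <<'EOF'
kafka_3 | java.lang.IllegalStateException: Epoch 62 larger than current broker epoch 61
kafka_3 | java.lang.IllegalStateException: Epoch 63 larger than current broker epoch 61
EOF

# Each match is a request carrying a newer broker epoch than the one currently
# cached, which is consistent with brokers (re)registering in ZooKeeper while
# others are still starting up.
grep -c 'larger than current broker epoch' compose.log
```

For the excerpt above this prints 2; on a real capture the count shows how often the mismatch recurs.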

docker-compose.yml:

version: "3.3"
services:

  zookeeper:
    image: wurstmeister/zookeeper:latest

  kafka:
    image: wurstmeister/kafka:2.12-2.4.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT

      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE

      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_CREATE_TOPICS: "Quotes:1:1,Options:1:1,Futures:1:1,FuturesOptions:1:1,CollectSymbols:1:1"
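One thing worth checking when scaling: every replica in the compose file above advertises the identical INSIDE://kafka:9093. The image's README describes deriving a per-container name with HOSTNAME_COMMAND and the _{HOSTNAME_COMMAND} placeholder; a sketch of that variant is below. This is untested against this exact setup, "hostname" is just one way to obtain a unique name, and it assumes each container's hostname is resolvable by the other brokers on the compose network:

```yaml
  kafka:
    image: wurstmeister/kafka:2.12-2.4.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Advertise a per-container hostname instead of the shared "kafka"
      # service name. HOSTNAME_COMMAND and the _{HOSTNAME_COMMAND} placeholder
      # come from the image's README.
      HOSTNAME_COMMAND: "hostname"
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: INSIDE://_{HOSTNAME_COMMAND}:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

I'm not claiming this resolves the epoch errors, only that distinct advertised listeners rule out one source of confusion between the scaled brokers.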

Docker log file:

kafka_2      | 2020-01-29T00:06:09.468016200Z   log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka_2      | 2020-01-29T00:06:09.468033100Z   log.cleaner.min.cleanable.ratio = 0.5
kafka_2      | 2020-01-29T00:06:09.468053800Z   log.cleaner.min.compaction.lag.ms = 0
kafka_2      | 2020-01-29T00:06:09.468074400Z   log.cleaner.threads = 1
kafka_2      | 2020-01-29T00:06:09.468251400Z   log.cleanup.policy = [delete]
kafka_1      | 2020-01-29T00:06:08.118209900Z [2020-01-29 00:06:08,117] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.118245900Z [2020-01-29 00:06:08,117] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.118271800Z [2020-01-29 00:06:08,117] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.118602900Z [2020-01-29 00:06:08,118] INFO Client environment:os.version=4.19.76-linuxkit (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.118951000Z [2020-01-29 00:06:08,118] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.119300400Z [2020-01-29 00:06:08,118] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.119836400Z [2020-01-29 00:06:08,119] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.120344200Z [2020-01-29 00:06:08,119] INFO Client environment:os.memory.free=979MB (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.120760900Z [2020-01-29 00:06:08,120] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.121062500Z [2020-01-29 00:06:08,120] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.135323900Z [2020-01-29 00:06:08,134] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@7ed7259e (org.apache.zookeeper.ZooKeeper)
kafka_1      | 2020-01-29T00:06:08.162031600Z [2020-01-29 00:06:08,161] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka_1      | 2020-01-29T00:06:08.180803100Z [2020-01-29 00:06:08,180] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka_1      | 2020-01-29T00:06:08.197882300Z [2020-01-29 00:06:08,197] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
kafka_1      | 2020-01-29T00:06:08.204311100Z [2020-01-29 00:06:08,204] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka_1      | 2020-01-29T00:06:08.231701100Z [2020-01-29 00:06:08,230] INFO Opening socket connection to server zookeeper/192.168.80.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1      | 2020-01-29T00:06:08.252229200Z [2020-01-29 00:06:08,251] INFO Socket connection established, initiating session, client: /192.168.80.4:44358, server: zookeeper/192.168.80.2:2181 (org.apache.zookeeper.ClientCnxn)
kafka_1      | 2020-01-29T00:06:08.312153000Z [2020-01-29 00:06:08,311] INFO Session establishment complete on server zookeeper/192.168.80.2:2181, sessionid = 0x1000212ddb90000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1      | 2020-01-29T00:06:08.323559700Z [2020-01-29 00:06:08,323] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka_1      | 2020-01-29T00:06:09.237490300Z [2020-01-29 00:06:09,237] INFO Cluster ID = LsbkrrwdQc2jPfF0WjxCgw (kafka.server.KafkaServer)
kafka_1      | 2020-01-29T00:06:09.245884900Z [2020-01-29 00:06:09,245] WARN No meta.properties file under dir /kafka/kafka-logs-792b391a7b33/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1      | 2020-01-29T00:06:09.415514800Z [2020-01-29 00:06:09,414] INFO KafkaConfig values:
kafka_1      | 2020-01-29T00:06:09.415620600Z   advertised.host.name = null
kafka_1      | 2020-01-29T00:06:09.415640100Z   advertised.listeners = INSIDE://kafka:9093
kafka_1      | 2020-01-29T00:06:09.415661300Z   advertised.port = null
kafka_1      | 2020-01-29T00:06:09.415682000Z   alter.config.policy.class.name = null
kafka_1      | 2020-01-29T00:06:09.415702700Z   alter.log.dirs.replication.quota.window.num = 11
kafka_1      | 2020-01-29T00:06:09.415723400Z   alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1      | 2020-01-29T00:06:09.415740100Z   authorizer.class.name =
kafka_1      | 2020-01-29T00:06:09.415761500Z   auto.create.topics.enable = false
kafka_1      | 2020-01-29T00:06:09.415782100Z   auto.leader.rebalance.enable = true
kafka_1      | 2020-01-29T00:06:09.415803300Z   background.threads = 10
kafka_1      | 2020-01-29T00:06:09.415819100Z   broker.id = -1
kafka_1      | 2020-01-29T00:06:09.415839800Z   broker.id.generation.enable = true
kafka_1      | 2020-01-29T00:06:09.415856100Z   broker.rack = null
kafka_1      | 2020-01-29T00:06:09.415876800Z   client.quota.callback.class = null
kafka_1      | 2020-01-29T00:06:09.415897500Z   compression.type = producer
kafka_1      | 2020-01-29T00:06:09.415918200Z   connection.failed.authentication.delay.ms = 100
kafka_1      | 2020-01-29T00:06:09.415938800Z   connections.max.idle.ms = 600000
kafka_1      | 2020-01-29T00:06:09.416078400Z   connections.max.reauth.ms = 0
kafka_1      | 2020-01-29T00:06:09.416101200Z   control.plane.listener.name = null
kafka_1      | 2020-01-29T00:06:09.416118600Z   controlled.shutdown.enable = true
kafka_1      | 2020-01-29T00:06:09.416134300Z   controlled.shutdown.max.retries = 3
kafka_1      | 2020-01-29T00:06:09.416149000Z   controlled.shutdown.retry.backoff.ms = 5000
kafka_1      | 2020-01-29T00:06:09.416168500Z   controller.socket.timeout.ms = 30000
kafka_1      | 2020-01-29T00:06:09.416189700Z   create.topic.policy.class.name = null
kafka_1      | 2020-01-29T00:06:09.416204700Z   default.replication.factor = 1
kafka_1      | 2020-01-29T00:06:09.416226100Z   delegation.token.expiry.check.interval.ms = 3600000
kafka_1      | 2020-01-29T00:06:09.416245700Z   delegation.token.expiry.time.ms = 86400000
kafka_1      | 2020-01-29T00:06:09.416263600Z   delegation.token.master.key = null
kafka_1      | 2020-01-29T00:06:09.416285100Z   delegation.token.max.lifetime.ms = 604800000
kafka_1      | 2020-01-29T00:06:09.416301700Z   delete.records.purgatory.purge.interval.requests = 1
kafka_1      | 2020-01-29T00:06:09.416333100Z   delete.topic.enable = true
kafka_1      | 2020-01-29T00:06:09.416356500Z   fetch.purgatory.purge.interval.requests = 1000
kafka_1      | 2020-01-29T00:06:09.416377300Z   group.initial.rebalance.delay.ms = 0
kafka_1      | 2020-01-29T00:06:09.416398000Z   group.max.session.timeout.ms = 1800000
kafka_1      | 2020-01-29T00:06:09.416414100Z   group.max.size = 2147483647
kafka_1      | 2020-01-29T00:06:09.416435700Z   group.min.session.timeout.ms = 6000
kafka_1      | 2020-01-29T00:06:09.416456400Z   host.name =
kafka_1      | 2020-01-29T00:06:09.416472400Z   inter.broker.listener.name = INSIDE
kafka_1      | 2020-01-29T00:06:09.416493200Z   inter.broker.protocol.version = 2.4-IV1
kafka_1      | 2020-01-29T00:06:09.416514000Z   kafka.metrics.polling.interval.secs = 10
kafka_1      | 2020-01-29T00:06:09.416534700Z   kafka.metrics.reporters = []
kafka_1      | 2020-01-29T00:06:09.416864600Z   leader.imbalance.check.interval.seconds = 300
kafka_1      | 2020-01-29T00:06:09.416880600Z   leader.imbalance.per.broker.percentage = 10
kafka_1      | 2020-01-29T00:06:09.416897400Z   listener.security.protocol.map = INSIDE:PLAINTEXT
kafka_1      | 2020-01-29T00:06:09.417014100Z   listeners = INSIDE://0.0.0.0:9093
kafka_1      | 2020-01-29T00:06:09.417035200Z   log.cleaner.backoff.ms = 15000
kafka_1      | 2020-01-29T00:06:09.417056000Z   log.cleaner.dedupe.buffer.size = 134217728
kafka_1      | 2020-01-29T00:06:09.417142600Z   log.cleaner.delete.retention.ms = 86400000
kafka_1      | 2020-01-29T00:06:09.417164800Z   log.cleaner.enable = true
kafka_1      | 2020-01-29T00:06:09.417185600Z   log.cleaner.io.buffer.load.factor = 0.9
kafka_1      | 2020-01-29T00:06:09.417206300Z   log.cleaner.io.buffer.size = 524288
kafka_1      | 2020-01-29T00:06:09.417307900Z   log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1      | 2020-01-29T00:06:09.417500600Z   log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.417522200Z   log.cleaner.min.cleanable.ratio = 0.5
kafka_1      | 2020-01-29T00:06:09.417545300Z   log.cleaner.min.compaction.lag.ms = 0
kafka_1      | 2020-01-29T00:06:09.417669300Z   log.cleaner.threads = 1
kafka_1      | 2020-01-29T00:06:09.417684800Z   log.cleanup.policy = [delete]
kafka_1      | 2020-01-29T00:06:09.417699900Z   log.dir = /tmp/kafka-logs
kafka_1      | 2020-01-29T00:06:09.417721100Z   log.dirs = /kafka/kafka-logs-792b391a7b33
kafka_1      | 2020-01-29T00:06:09.417811400Z   log.flush.interval.messages = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.417833100Z   log.flush.interval.ms = null
kafka_1      | 2020-01-29T00:06:09.417854600Z   log.flush.offset.checkpoint.interval.ms = 60000
kafka_1      | 2020-01-29T00:06:09.417876100Z   log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.417962000Z   log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1      | 2020-01-29T00:06:09.417983000Z   log.index.interval.bytes = 4096
kafka_1      | 2020-01-29T00:06:09.418001500Z   log.index.size.max.bytes = 10485760
kafka_1      | 2020-01-29T00:06:09.418016800Z   log.message.downconversion.enable = true
kafka_1      | 2020-01-29T00:06:09.418144900Z   log.message.format.version = 2.4-IV1
kafka_1      | 2020-01-29T00:06:09.418166500Z   log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.418184700Z   log.message.timestamp.type = CreateTime
kafka_1      | 2020-01-29T00:06:09.418273800Z   log.preallocate = false
kafka_1      | 2020-01-29T00:06:09.418289300Z   log.retention.bytes = -1
kafka_1      | 2020-01-29T00:06:09.418304600Z   log.retention.check.interval.ms = 300000
kafka_1      | 2020-01-29T00:06:09.418321700Z   log.retention.hours = 168
kafka_1      | 2020-01-29T00:06:09.418336600Z   log.retention.minutes = null
kafka_1      | 2020-01-29T00:06:09.418429200Z   log.retention.ms = null
kafka_1      | 2020-01-29T00:06:09.418445000Z   log.roll.hours = 168
kafka_1      | 2020-01-29T00:06:09.418461000Z   log.roll.jitter.hours = 0
kafka_1      | 2020-01-29T00:06:09.418475600Z   log.roll.jitter.ms = null
kafka_1      | 2020-01-29T00:06:09.418498000Z   log.roll.ms = null
kafka_1      | 2020-01-29T00:06:09.418567600Z   log.segment.bytes = 1073741824
kafka_1      | 2020-01-29T00:06:09.418672600Z   log.segment.delete.delay.ms = 60000
kafka_1      | 2020-01-29T00:06:09.418688800Z   max.connections = 2147483647
kafka_1      | 2020-01-29T00:06:09.418705200Z   max.connections.per.ip = 2147483647
kafka_1      | 2020-01-29T00:06:09.418726600Z   max.connections.per.ip.overrides =
kafka_1      | 2020-01-29T00:06:09.418808000Z   max.incremental.fetch.session.cache.slots = 1000
kafka_1      | 2020-01-29T00:06:09.418831500Z   message.max.bytes = 1000012
kafka_1      | 2020-01-29T00:06:09.418852400Z   metric.reporters = []
kafka_1      | 2020-01-29T00:06:09.418873000Z   metrics.num.samples = 2
kafka_3      | 2020-01-29T00:06:09.390070100Z   offsets.topic.num.partitions = 50
kafka_3      | 2020-01-29T00:06:09.390094000Z   offsets.topic.replication.factor = 1
kafka_3      | 2020-01-29T00:06:09.390109000Z   offsets.topic.segment.bytes = 104857600
kafka_3      | 2020-01-29T00:06:09.390133600Z   password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_3      | 2020-01-29T00:06:09.390155300Z   password.encoder.iterations = 4096
kafka_3      | 2020-01-29T00:06:09.390177000Z   password.encoder.key.length = 128
kafka_3      | 2020-01-29T00:06:09.390198600Z   password.encoder.keyfactory.algorithm = null
kafka_3      | 2020-01-29T00:06:09.390219300Z   password.encoder.old.secret = null
kafka_3      | 2020-01-29T00:06:09.390237000Z   password.encoder.secret = null
kafka_3      | 2020-01-29T00:06:09.390260300Z   port = 9092
kafka_3      | 2020-01-29T00:06:09.390287500Z   principal.builder.class = null
kafka_3      | 2020-01-29T00:06:09.390315400Z   producer.purgatory.purge.interval.requests = 1000
kafka_3      | 2020-01-29T00:06:09.390340600Z   queued.max.request.bytes = -1
kafka_3      | 2020-01-29T00:06:09.390360800Z   queued.max.requests = 500
kafka_3      | 2020-01-29T00:06:09.390387500Z   quota.consumer.default = 9223372036854775807
kafka_3      | 2020-01-29T00:06:09.390417300Z   quota.producer.default = 9223372036854775807
kafka_3      | 2020-01-29T00:06:09.390450400Z   quota.window.num = 11
kafka_3      | 2020-01-29T00:06:09.390477200Z   quota.window.size.seconds = 1
kafka_3      | 2020-01-29T00:06:09.390505300Z   replica.fetch.backoff.ms = 1000
kafka_3      | 2020-01-29T00:06:09.390529500Z   replica.fetch.max.bytes = 1048576
kafka_3      | 2020-01-29T00:06:09.390556000Z   replica.fetch.min.bytes = 1
kafka_3      | 2020-01-29T00:06:09.390628400Z   replica.fetch.response.max.bytes = 10485760
kafka_3      | 2020-01-29T00:06:09.390644200Z   replica.fetch.wait.max.ms = 500
kafka_3      | 2020-01-29T00:06:09.390666600Z   replica.high.watermark.checkpoint.interval.ms = 5000
kafka_3      | 2020-01-29T00:06:09.390687600Z   replica.lag.time.max.ms = 10000
kafka_3      | 2020-01-29T00:06:09.390703800Z   replica.selector.class = null
kafka_3      | 2020-01-29T00:06:09.390725300Z   replica.socket.receive.buffer.bytes = 65536
kafka_3      | 2020-01-29T00:06:09.390749000Z   replica.socket.timeout.ms = 30000
kafka_3      | 2020-01-29T00:06:09.390782000Z   replication.quota.window.num = 11
kafka_3      | 2020-01-29T00:06:09.390807100Z   replication.quota.window.size.seconds = 1
kafka_3      | 2020-01-29T00:06:09.390843100Z   request.timeout.ms = 30000
kafka_3      | 2020-01-29T00:06:09.390869500Z   reserved.broker.max.id = 1000
kafka_3      | 2020-01-29T00:06:09.390888300Z   sasl.client.callback.handler.class = null
kafka_3      | 2020-01-29T00:06:09.390944800Z   sasl.enabled.mechanisms = [GSSAPI]
kafka_3      | 2020-01-29T00:06:09.390970000Z   sasl.jaas.config = null
kafka_3      | 2020-01-29T00:06:09.391001100Z   sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_3      | 2020-01-29T00:06:09.391110300Z   sasl.kerberos.min.time.before.relogin = 60000
kafka_3      | 2020-01-29T00:06:09.391137300Z   sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_3      | 2020-01-29T00:06:09.391173700Z   sasl.kerberos.service.name = null
kafka_3      | 2020-01-29T00:06:09.391195000Z   sasl.kerberos.ticket.renew.jitter = 0.05
kafka_3      | 2020-01-29T00:06:09.391229700Z   sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_3      | 2020-01-29T00:06:09.391252100Z   sasl.login.callback.handler.class = null
kafka_3      | 2020-01-29T00:06:09.391267500Z   sasl.login.class = null
kafka_3      | 2020-01-29T00:06:09.391290400Z   sasl.login.refresh.buffer.seconds = 300
kafka_3      | 2020-01-29T00:06:09.391306100Z   sasl.login.refresh.min.period.seconds = 60
kafka_3      | 2020-01-29T00:06:09.391325700Z   sasl.login.refresh.window.factor = 0.8
kafka_3      | 2020-01-29T00:06:09.391346000Z   sasl.login.refresh.window.jitter = 0.05
kafka_3      | 2020-01-29T00:06:09.391384300Z   sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_3      | 2020-01-29T00:06:09.391408800Z   sasl.server.callback.handler.class = null
kafka_3      | 2020-01-29T00:06:09.391436500Z   security.inter.broker.protocol = PLAINTEXT
kafka_3      | 2020-01-29T00:06:09.391461900Z   security.providers = null
kafka_3      | 2020-01-29T00:06:09.391488500Z   socket.receive.buffer.bytes = 102400
kafka_3      | 2020-01-29T00:06:09.391518100Z   socket.request.max.bytes = 104857600
kafka_3      | 2020-01-29T00:06:09.391548100Z   socket.send.buffer.bytes = 102400
kafka_3      | 2020-01-29T00:06:09.391569400Z   ssl.cipher.suites = []
kafka_3      | 2020-01-29T00:06:09.391659900Z   ssl.client.auth = none
kafka_3      | 2020-01-29T00:06:09.391688200Z   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_3      | 2020-01-29T00:06:09.391714900Z   ssl.endpoint.identification.algorithm = https
kafka_3      | 2020-01-29T00:06:09.391742500Z   ssl.key.password = null
kafka_3      | 2020-01-29T00:06:09.391761500Z   ssl.keymanager.algorithm = SunX509
kafka_3      | 2020-01-29T00:06:09.391791600Z   ssl.keystore.location = null
kafka_3      | 2020-01-29T00:06:09.391814700Z   ssl.keystore.password = null
kafka_3      | 2020-01-29T00:06:09.391840500Z   ssl.keystore.type = JKS
kafka_3      | 2020-01-29T00:06:09.391859700Z   ssl.principal.mapping.rules = DEFAULT
kafka_3      | 2020-01-29T00:06:09.391884300Z   ssl.protocol = TLS
kafka_3      | 2020-01-29T00:06:09.391908800Z   ssl.provider = null
kafka_3      | 2020-01-29T00:06:09.391938000Z   ssl.secure.random.implementation = null
kafka_3      | 2020-01-29T00:06:09.391965800Z   ssl.trustmanager.algorithm = PKIX
kafka_3      | 2020-01-29T00:06:09.391994400Z   ssl.truststore.location = null
kafka_3      | 2020-01-29T00:06:09.392016100Z   ssl.truststore.password = null
kafka_3      | 2020-01-29T00:06:09.392042900Z   ssl.truststore.type = JKS
kafka_3      | 2020-01-29T00:06:09.392070000Z   transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_3      | 2020-01-29T00:06:09.392098800Z   transaction.max.timeout.ms = 900000
kafka_3      | 2020-01-29T00:06:09.392126800Z   transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_3      | 2020-01-29T00:06:09.392148600Z   transaction.state.log.load.buffer.size = 5242880
kafka_3      | 2020-01-29T00:06:09.392180200Z   transaction.state.log.min.isr = 1
kafka_3      | 2020-01-29T00:06:09.392207900Z   transaction.state.log.num.partitions = 50
kafka_3      | 2020-01-29T00:06:09.392235000Z   transaction.state.log.replication.factor = 1
kafka_3      | 2020-01-29T00:06:09.392270700Z   transaction.state.log.segment.bytes = 104857600
kafka_3      | 2020-01-29T00:06:09.392296900Z   transactional.id.expiration.ms = 604800000
kafka_3      | 2020-01-29T00:06:09.392321800Z   unclean.leader.election.enable = false
kafka_3      | 2020-01-29T00:06:09.392348500Z   zookeeper.connect = zookeeper:2181
kafka_3      | 2020-01-29T00:06:09.392374900Z   zookeeper.connection.timeout.ms = 6000
kafka_3      | 2020-01-29T00:06:09.392401400Z   zookeeper.max.in.flight.requests = 10
kafka_3      | 2020-01-29T00:06:09.392482500Z   zookeeper.session.timeout.ms = 6000
kafka_2      | 2020-01-29T00:06:09.468281200Z   log.dir = /tmp/kafka-logs
kafka_2      | 2020-01-29T00:06:09.468302100Z   log.dirs = /kafka/kafka-logs-ef86f02f5d49
kafka_2      | 2020-01-29T00:06:09.468319300Z   log.flush.interval.messages = 9223372036854775807
kafka_2      | 2020-01-29T00:06:09.468347400Z   log.flush.interval.ms = null
kafka_2      | 2020-01-29T00:06:09.468368400Z   log.flush.offset.checkpoint.interval.ms = 60000
kafka_2      | 2020-01-29T00:06:09.468390600Z   log.flush.scheduler.interval.ms = 9223372036854775807
kafka_2      | 2020-01-29T00:06:09.468411300Z   log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_2      | 2020-01-29T00:06:09.468432000Z   log.index.interval.bytes = 4096
kafka_2      | 2020-01-29T00:06:09.468452700Z   log.index.size.max.bytes = 10485760
kafka_2      | 2020-01-29T00:06:09.468470300Z   log.message.downconversion.enable = true
kafka_2      | 2020-01-29T00:06:09.468559000Z   log.message.format.version = 2.4-IV1
kafka_2      | 2020-01-29T00:06:09.468587100Z   log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.418967400Z   metrics.recording.level = INFO
kafka_1      | 2020-01-29T00:06:09.418989700Z   metrics.sample.window.ms = 30000
kafka_1      | 2020-01-29T00:06:09.419011200Z   min.insync.replicas = 1
kafka_1      | 2020-01-29T00:06:09.419107000Z   num.io.threads = 8
kafka_1      | 2020-01-29T00:06:09.419143200Z   num.network.threads = 3
kafka_1      | 2020-01-29T00:06:09.419164000Z   num.partitions = 1
kafka_1      | 2020-01-29T00:06:09.419254900Z   num.recovery.threads.per.data.dir = 1
kafka_1      | 2020-01-29T00:06:09.419273900Z   num.replica.alter.log.dirs.threads = null
kafka_1      | 2020-01-29T00:06:09.419295400Z   num.replica.fetchers = 1
kafka_1      | 2020-01-29T00:06:09.419313700Z   offset.metadata.max.bytes = 4096
kafka_1      | 2020-01-29T00:06:09.419399000Z   offsets.commit.required.acks = -1
kafka_1      | 2020-01-29T00:06:09.419419900Z   offsets.commit.timeout.ms = 5000
kafka_1      | 2020-01-29T00:06:09.419440600Z   offsets.load.buffer.size = 5242880
kafka_1      | 2020-01-29T00:06:09.419457000Z   offsets.retention.check.interval.ms = 600000
kafka_1      | 2020-01-29T00:06:09.419535400Z   offsets.retention.minutes = 10080
kafka_1      | 2020-01-29T00:06:09.419550900Z   offsets.topic.compression.codec = 0
kafka_1      | 2020-01-29T00:06:09.419566000Z   offsets.topic.num.partitions = 50
kafka_1      | 2020-01-29T00:06:09.419677100Z   offsets.topic.replication.factor = 1
kafka_1      | 2020-01-29T00:06:09.419700500Z   offsets.topic.segment.bytes = 104857600
kafka_1      | 2020-01-29T00:06:09.419721300Z   password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1      | 2020-01-29T00:06:09.419768600Z   password.encoder.iterations = 4096
kafka_1      | 2020-01-29T00:06:09.419835500Z   password.encoder.key.length = 128
kafka_1      | 2020-01-29T00:06:09.419856800Z   password.encoder.keyfactory.algorithm = null
kafka_1      | 2020-01-29T00:06:09.419878300Z   password.encoder.old.secret = null
kafka_1      | 2020-01-29T00:06:09.419965900Z   password.encoder.secret = null
kafka_1      | 2020-01-29T00:06:09.419988100Z   port = 9092
kafka_1      | 2020-01-29T00:06:09.420008800Z   principal.builder.class = null
kafka_1      | 2020-01-29T00:06:09.420029900Z   producer.purgatory.purge.interval.requests = 1000
kafka_1      | 2020-01-29T00:06:09.420124200Z   queued.max.request.bytes = -1
kafka_1      | 2020-01-29T00:06:09.420139700Z   queued.max.requests = 500
kafka_1      | 2020-01-29T00:06:09.420161000Z   quota.consumer.default = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.420181700Z   quota.producer.default = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.420274600Z   quota.window.num = 11
kafka_1      | 2020-01-29T00:06:09.420295500Z   quota.window.size.seconds = 1
kafka_1      | 2020-01-29T00:06:09.420316200Z   replica.fetch.backoff.ms = 1000
kafka_1      | 2020-01-29T00:06:09.420336900Z   replica.fetch.max.bytes = 1048576
kafka_1      | 2020-01-29T00:06:09.420427200Z   replica.fetch.min.bytes = 1
kafka_1      | 2020-01-29T00:06:09.420448200Z   replica.fetch.response.max.bytes = 10485760
kafka_1      | 2020-01-29T00:06:09.420473900Z   replica.fetch.wait.max.ms = 500
kafka_1      | 2020-01-29T00:06:09.420550500Z   replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1      | 2020-01-29T00:06:09.420572000Z   replica.lag.time.max.ms = 10000
kafka_1      | 2020-01-29T00:06:09.420684200Z   replica.selector.class = null
kafka_1      | 2020-01-29T00:06:09.420705200Z   replica.socket.receive.buffer.bytes = 65536
kafka_1      | 2020-01-29T00:06:09.420726000Z   replica.socket.timeout.ms = 30000
kafka_1      | 2020-01-29T00:06:09.420741000Z   replication.quota.window.num = 11
kafka_1      | 2020-01-29T00:06:09.420821300Z   replication.quota.window.size.seconds = 1
kafka_1      | 2020-01-29T00:06:09.420843200Z   request.timeout.ms = 30000
kafka_1      | 2020-01-29T00:06:09.420859700Z   reserved.broker.max.id = 1000
kafka_1      | 2020-01-29T00:06:09.420874400Z   sasl.client.callback.handler.class = null
kafka_1      | 2020-01-29T00:06:09.420962100Z   sasl.enabled.mechanisms = [GSSAPI]
kafka_1      | 2020-01-29T00:06:09.420977900Z   sasl.jaas.config = null
kafka_1      | 2020-01-29T00:06:09.420995400Z   sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1      | 2020-01-29T00:06:09.421010100Z   sasl.kerberos.min.time.before.relogin = 60000
kafka_1      | 2020-01-29T00:06:09.421025000Z   sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1      | 2020-01-29T00:06:09.421117000Z   sasl.kerberos.service.name = null
kafka_1      | 2020-01-29T00:06:09.421138300Z   sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1      | 2020-01-29T00:06:09.421159000Z   sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1      | 2020-01-29T00:06:09.421239300Z   sasl.login.callback.handler.class = null
kafka_1      | 2020-01-29T00:06:09.421255000Z   sasl.login.class = null
kafka_1      | 2020-01-29T00:06:09.421272200Z   sasl.login.refresh.buffer.seconds = 300
kafka_1      | 2020-01-29T00:06:09.421293500Z   sasl.login.refresh.min.period.seconds = 60
kafka_1      | 2020-01-29T00:06:09.421335300Z   sasl.login.refresh.window.factor = 0.8
kafka_1      | 2020-01-29T00:06:09.421385300Z   sasl.login.refresh.window.jitter = 0.05
kafka_1      | 2020-01-29T00:06:09.421402600Z   sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1      | 2020-01-29T00:06:09.421423800Z   sasl.server.callback.handler.class = null
kafka_1      | 2020-01-29T00:06:09.421444500Z   security.inter.broker.protocol = PLAINTEXT
kafka_1      | 2020-01-29T00:06:09.421518000Z   security.providers = null
kafka_1      | 2020-01-29T00:06:09.421538900Z   socket.receive.buffer.bytes = 102400
kafka_1      | 2020-01-29T00:06:09.421564100Z   socket.request.max.bytes = 104857600
kafka_1      | 2020-01-29T00:06:09.421672800Z   socket.send.buffer.bytes = 102400
kafka_1      | 2020-01-29T00:06:09.421693800Z   ssl.cipher.suites = []
kafka_1      | 2020-01-29T00:06:09.421709400Z   ssl.client.auth = none
kafka_1      | 2020-01-29T00:06:09.421728900Z   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_1      | 2020-01-29T00:06:09.421817500Z   ssl.endpoint.identification.algorithm = https
kafka_3      | 2020-01-29T00:06:09.392505200Z   zookeeper.set.acl = false
kafka_3      | 2020-01-29T00:06:09.392528900Z   zookeeper.sync.time.ms = 2000
kafka_3      | 2020-01-29T00:06:09.392553800Z  (kafka.server.KafkaConfig)
kafka_3      | 2020-01-29T00:06:09.515654000Z [2020-01-29 00:06:09,515] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_3      | 2020-01-29T00:06:09.522084400Z [2020-01-29 00:06:09,516] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_3      | 2020-01-29T00:06:09.523019600Z [2020-01-29 00:06:09,515] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_3      | 2020-01-29T00:06:09.598837900Z [2020-01-29 00:06:09,598] INFO Log directory /kafka/kafka-logs-dc7f7d8b4460 not found, creating it. (kafka.log.LogManager)
kafka_3      | 2020-01-29T00:06:09.630852600Z [2020-01-29 00:06:09,630] INFO Loading logs. (kafka.log.LogManager)
kafka_3      | 2020-01-29T00:06:09.653447200Z [2020-01-29 00:06:09,653] INFO Logs loading complete in 23 ms. (kafka.log.LogManager)
kafka_3      | 2020-01-29T00:06:09.705357100Z [2020-01-29 00:06:09,704] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_3      | 2020-01-29T00:06:09.737906000Z [2020-01-29 00:06:09,737] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_3      | 2020-01-29T00:06:11.148021900Z [2020-01-29 00:06:11,147] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
kafka_3      | 2020-01-29T00:06:11.256074200Z [2020-01-29 00:06:11,255] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9093,ListenerName(INSIDE),PLAINTEXT) (kafka.network.SocketServer)
kafka_3      | 2020-01-29T00:06:11.259291600Z [2020-01-29 00:06:11,258] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
kafka_3      | 2020-01-29T00:06:11.310346900Z [2020-01-29 00:06:11,310] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:11.314083600Z [2020-01-29 00:06:11,311] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:11.329893000Z [2020-01-29 00:06:11,329] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:11.333120400Z [2020-01-29 00:06:11,332] INFO [ExpirationReaper-1001-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:11.399292100Z [2020-01-29 00:06:11,398] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_3      | 2020-01-29T00:06:11.467047400Z [2020-01-29 00:06:11,466] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_3      | 2020-01-29T00:06:11.642707800Z [2020-01-29 00:06:11,642] INFO Stat of the created znode at /brokers/ids/1001 is: 61,61,1580256371509,1580256371509,1,0,0,72059874090483713,174,0,61
kafka_3      | 2020-01-29T00:06:11.642745800Z  (kafka.zk.KafkaZkClient)
kafka_3      | 2020-01-29T00:06:11.643987300Z [2020-01-29 00:06:11,643] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9093,ListenerName(INSIDE),PLAINTEXT)), czxid (broker epoch): 61 (kafka.zk.KafkaZkClient)
kafka_3      | 2020-01-29T00:06:11.881854500Z [2020-01-29 00:06:11,881] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:11.900744900Z [2020-01-29 00:06:11,900] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:11.907200300Z [2020-01-29 00:06:11,906] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:12.056335900Z [2020-01-29 00:06:12,055] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_3      | 2020-01-29T00:06:12.058729600Z [2020-01-29 00:06:12,058] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_3      | 2020-01-29T00:06:12.098029800Z [2020-01-29 00:06:12,097] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 28 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_3      | 2020-01-29T00:06:12.222536400Z [2020-01-29 00:06:12,222] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:2000,blockEndProducerId:2999) by writing to Zk with path version 3 (kafka.coordinator.transaction.ProducerIdManager)
kafka_3      | 2020-01-29T00:06:12.282183800Z [2020-01-29 00:06:12,281] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_3      | 2020-01-29T00:06:12.293031700Z [2020-01-29 00:06:12,292] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_3      | 2020-01-29T00:06:12.299970000Z [2020-01-29 00:06:12,299] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_3      | 2020-01-29T00:06:12.378076300Z [2020-01-29 00:06:12,377] INFO [ExpirationReaper-1001-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_3      | 2020-01-29T00:06:12.475375700Z [2020-01-29 00:06:12,475] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_3      | 2020-01-29T00:06:12.539920900Z [2020-01-29 00:06:12,539] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
kafka_3      | 2020-01-29T00:06:12.577095400Z [2020-01-29 00:06:12,576] INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka_3      | 2020-01-29T00:06:12.577417000Z [2020-01-29 00:06:12,577] INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser)
kafka_3      | 2020-01-29T00:06:12.577484300Z [2020-01-29 00:06:12,577] INFO Kafka startTimeMs: 1580256372540 (org.apache.kafka.common.utils.AppInfoParser)
kafka_3      | 2020-01-29T00:06:12.597367000Z [2020-01-29 00:06:12,596] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
kafka_2      | 2020-01-29T00:06:09.468607800Z   log.message.timestamp.type = CreateTime
kafka_2      | 2020-01-29T00:06:09.468628500Z   log.preallocate = false
kafka_2      | 2020-01-29T00:06:09.468649200Z   log.retention.bytes = -1
kafka_2      | 2020-01-29T00:06:09.468669900Z   log.retention.check.interval.ms = 300000
kafka_2      | 2020-01-29T00:06:09.468688400Z   log.retention.hours = 168
kafka_2      | 2020-01-29T00:06:09.468767800Z   log.retention.minutes = null
kafka_2      | 2020-01-29T00:06:09.468783100Z   log.retention.ms = null
kafka_2      | 2020-01-29T00:06:09.468808500Z   log.roll.hours = 168
kafka_2      | 2020-01-29T00:06:09.468830000Z   log.roll.jitter.hours = 0
kafka_2      | 2020-01-29T00:06:09.468850700Z   log.roll.jitter.ms = null
kafka_2      | 2020-01-29T00:06:09.468871300Z   log.roll.ms = null
kafka_2      | 2020-01-29T00:06:09.468891800Z   log.segment.bytes = 1073741824
kafka_2      | 2020-01-29T00:06:09.468910100Z   log.segment.delete.delay.ms = 60000
kafka_2      | 2020-01-29T00:06:09.468925300Z   max.connections = 2147483647
kafka_2      | 2020-01-29T00:06:09.468946400Z   max.connections.per.ip = 2147483647
kafka_2      | 2020-01-29T00:06:09.468966900Z   max.connections.per.ip.overrides =
kafka_2      | 2020-01-29T00:06:09.468987500Z   max.incremental.fetch.session.cache.slots = 1000
kafka_2      | 2020-01-29T00:06:09.469008200Z   message.max.bytes = 1000012
kafka_2      | 2020-01-29T00:06:09.469028800Z   metric.reporters = []
kafka_2      | 2020-01-29T00:06:09.469049300Z   metrics.num.samples = 2
kafka_2      | 2020-01-29T00:06:09.469063700Z   metrics.recording.level = INFO
kafka_2      | 2020-01-29T00:06:09.469080900Z   metrics.sample.window.ms = 30000
kafka_2      | 2020-01-29T00:06:09.469101600Z   min.insync.replicas = 1
kafka_2      | 2020-01-29T00:06:09.469121200Z   num.io.threads = 8
kafka_2      | 2020-01-29T00:06:09.469142000Z   num.network.threads = 3
kafka_2      | 2020-01-29T00:06:09.469162500Z   num.partitions = 1
kafka_2      | 2020-01-29T00:06:09.469178900Z   num.recovery.threads.per.data.dir = 1
kafka_2      | 2020-01-29T00:06:09.469199800Z   num.replica.alter.log.dirs.threads = null
kafka_2      | 2020-01-29T00:06:09.469319100Z   num.replica.fetchers = 1
kafka_2      | 2020-01-29T00:06:09.469396100Z   offset.metadata.max.bytes = 4096
kafka_2      | 2020-01-29T00:06:09.469435100Z   offsets.commit.required.acks = -1
kafka_2      | 2020-01-29T00:06:09.469465500Z   offsets.commit.timeout.ms = 5000
kafka_2      | 2020-01-29T00:06:09.469494100Z   offsets.load.buffer.size = 5242880
kafka_2      | 2020-01-29T00:06:09.469520600Z   offsets.retention.check.interval.ms = 600000
kafka_2      | 2020-01-29T00:06:09.469546400Z   offsets.retention.minutes = 10080
kafka_2      | 2020-01-29T00:06:09.469571800Z   offsets.topic.compression.codec = 0
kafka_2      | 2020-01-29T00:06:09.469597100Z   offsets.topic.num.partitions = 50
kafka_2      | 2020-01-29T00:06:09.469624200Z   offsets.topic.replication.factor = 1
kafka_2      | 2020-01-29T00:06:09.469710900Z   offsets.topic.segment.bytes = 104857600
kafka_2      | 2020-01-29T00:06:09.469739100Z   password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_2      | 2020-01-29T00:06:09.469768700Z   password.encoder.iterations = 4096
kafka_2      | 2020-01-29T00:06:09.469821800Z   password.encoder.key.length = 128
kafka_2      | 2020-01-29T00:06:09.469848200Z   password.encoder.keyfactory.algorithm = null
kafka_2      | 2020-01-29T00:06:09.469873800Z   password.encoder.old.secret = null
kafka_2      | 2020-01-29T00:06:09.469899700Z   password.encoder.secret = null
kafka_2      | 2020-01-29T00:06:09.469924900Z   port = 9092
kafka_2      | 2020-01-29T00:06:09.469950200Z   principal.builder.class = null
kafka_2      | 2020-01-29T00:06:09.469975000Z   producer.purgatory.purge.interval.requests = 1000
kafka_2      | 2020-01-29T00:06:09.470000700Z   queued.max.request.bytes = -1
kafka_2      | 2020-01-29T00:06:09.470026600Z   queued.max.requests = 500
kafka_2      | 2020-01-29T00:06:09.470052600Z   quota.consumer.default = 9223372036854775807
kafka_2      | 2020-01-29T00:06:09.470078800Z   quota.producer.default = 9223372036854775807
kafka_2      | 2020-01-29T00:06:09.470104900Z   quota.window.num = 11
kafka_2      | 2020-01-29T00:06:09.470131300Z   quota.window.size.seconds = 1
kafka_2      | 2020-01-29T00:06:09.470157800Z   replica.fetch.backoff.ms = 1000
kafka_2      | 2020-01-29T00:06:09.470183900Z   replica.fetch.max.bytes = 1048576
kafka_2      | 2020-01-29T00:06:09.470210000Z   replica.fetch.min.bytes = 1
kafka_2      | 2020-01-29T00:06:09.470236700Z   replica.fetch.response.max.bytes = 10485760
kafka_2      | 2020-01-29T00:06:09.470262900Z   replica.fetch.wait.max.ms = 500
kafka_2      | 2020-01-29T00:06:09.470291600Z   replica.high.watermark.checkpoint.interval.ms = 5000
kafka_2      | 2020-01-29T00:06:09.470343900Z   replica.lag.time.max.ms = 10000
kafka_2      | 2020-01-29T00:06:09.470371300Z   replica.selector.class = null
kafka_2      | 2020-01-29T00:06:09.470397400Z   replica.socket.receive.buffer.bytes = 65536
kafka_2      | 2020-01-29T00:06:09.470424100Z   replica.socket.timeout.ms = 30000
kafka_2      | 2020-01-29T00:06:09.470450600Z   replication.quota.window.num = 11
kafka_2      | 2020-01-29T00:06:09.470475800Z   replication.quota.window.size.seconds = 1
kafka_2      | 2020-01-29T00:06:09.470511500Z   request.timeout.ms = 30000
kafka_2      | 2020-01-29T00:06:09.470538300Z   reserved.broker.max.id = 1000
kafka_2      | 2020-01-29T00:06:09.470560800Z   sasl.client.callback.handler.class = null
kafka_2      | 2020-01-29T00:06:09.470583800Z   sasl.enabled.mechanisms = [GSSAPI]
kafka_2      | 2020-01-29T00:06:09.470605700Z   sasl.jaas.config = null
kafka_2      | 2020-01-29T00:06:09.470631100Z   sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_2      | 2020-01-29T00:06:09.470657100Z   sasl.kerberos.min.time.before.relogin = 60000
kafka_2      | 2020-01-29T00:06:09.470683000Z   sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_2      | 2020-01-29T00:06:09.470709000Z   sasl.kerberos.service.name = null
kafka_2      | 2020-01-29T00:06:09.470728100Z   sasl.kerberos.ticket.renew.jitter = 0.05
kafka_2      | 2020-01-29T00:06:09.470756900Z   sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_2      | 2020-01-29T00:06:09.470783900Z   sasl.login.callback.handler.class = null
kafka_2      | 2020-01-29T00:06:09.470868000Z   sasl.login.class = null
kafka_2      | 2020-01-29T00:06:09.470895100Z   sasl.login.refresh.buffer.seconds = 300
kafka_2      | 2020-01-29T00:06:09.470930500Z   sasl.login.refresh.min.period.seconds = 60
kafka_2      | 2020-01-29T00:06:09.470957600Z   sasl.login.refresh.window.factor = 0.8
kafka_2      | 2020-01-29T00:06:09.470984000Z   sasl.login.refresh.window.jitter = 0.05
kafka_2      | 2020-01-29T00:06:09.471006800Z   sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_2      | 2020-01-29T00:06:09.471032400Z   sasl.server.callback.handler.class = null
kafka_2      | 2020-01-29T00:06:09.471058200Z   security.inter.broker.protocol = PLAINTEXT
kafka_2      | 2020-01-29T00:06:09.471084800Z   security.providers = null
kafka_2      | 2020-01-29T00:06:09.471115200Z   socket.receive.buffer.bytes = 102400
kafka_2      | 2020-01-29T00:06:09.471141600Z   socket.request.max.bytes = 104857600
kafka_2      | 2020-01-29T00:06:09.471167800Z   socket.send.buffer.bytes = 102400
kafka_2      | 2020-01-29T00:06:09.471188100Z   ssl.cipher.suites = []
kafka_2      | 2020-01-29T00:06:09.471213700Z   ssl.client.auth = none
kafka_2      | 2020-01-29T00:06:09.471236500Z   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_2      | 2020-01-29T00:06:09.471256900Z   ssl.endpoint.identification.algorithm = https
kafka_2      | 2020-01-29T00:06:09.471286700Z   ssl.key.password = null
kafka_2      | 2020-01-29T00:06:09.471312900Z   ssl.keymanager.algorithm = SunX509
kafka_2      | 2020-01-29T00:06:09.471337700Z   ssl.keystore.location = null
kafka_1      | 2020-01-29T00:06:09.421838400Z   ssl.key.password = null
kafka_1      | 2020-01-29T00:06:09.421859100Z   ssl.keymanager.algorithm = SunX509
kafka_1      | 2020-01-29T00:06:09.421874200Z   ssl.keystore.location = null
kafka_1      | 2020-01-29T00:06:09.421965800Z   ssl.keystore.password = null
kafka_1      | 2020-01-29T00:06:09.421986900Z   ssl.keystore.type = JKS
kafka_1      | 2020-01-29T00:06:09.422001700Z   ssl.principal.mapping.rules = DEFAULT
kafka_1      | 2020-01-29T00:06:09.422023000Z   ssl.protocol = TLS
kafka_1      | 2020-01-29T00:06:09.422107600Z   ssl.provider = null
kafka_1      | 2020-01-29T00:06:09.422128500Z   ssl.secure.random.implementation = null
kafka_1      | 2020-01-29T00:06:09.422145100Z   ssl.trustmanager.algorithm = PKIX
kafka_1      | 2020-01-29T00:06:09.422165500Z   ssl.truststore.location = null
kafka_1      | 2020-01-29T00:06:09.422253900Z   ssl.truststore.password = null
kafka_1      | 2020-01-29T00:06:09.422279800Z   ssl.truststore.type = JKS
kafka_1      | 2020-01-29T00:06:09.422307600Z   transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_1      | 2020-01-29T00:06:09.422409700Z   transaction.max.timeout.ms = 900000
kafka_1      | 2020-01-29T00:06:09.422428600Z   transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1      | 2020-01-29T00:06:09.422454800Z   transaction.state.log.load.buffer.size = 5242880
kafka_1      | 2020-01-29T00:06:09.422547000Z   transaction.state.log.min.isr = 1
kafka_1      | 2020-01-29T00:06:09.422569900Z   transaction.state.log.num.partitions = 50
kafka_1      | 2020-01-29T00:06:09.422590700Z   transaction.state.log.replication.factor = 1
kafka_1      | 2020-01-29T00:06:09.422699200Z   transaction.state.log.segment.bytes = 104857600
kafka_1      | 2020-01-29T00:06:09.422720200Z   transactional.id.expiration.ms = 604800000
kafka_1      | 2020-01-29T00:06:09.422736000Z   unclean.leader.election.enable = false
kafka_1      | 2020-01-29T00:06:09.422758200Z   zookeeper.connect = zookeeper:2181
kafka_1      | 2020-01-29T00:06:09.422845300Z   zookeeper.connection.timeout.ms = 6000
kafka_1      | 2020-01-29T00:06:09.422866300Z   zookeeper.max.in.flight.requests = 10
kafka_1      | 2020-01-29T00:06:09.422888500Z   zookeeper.session.timeout.ms = 6000
kafka_1      | 2020-01-29T00:06:09.422905600Z   zookeeper.set.acl = false
kafka_1      | 2020-01-29T00:06:09.422993400Z   zookeeper.sync.time.ms = 2000
kafka_1      | 2020-01-29T00:06:09.423009800Z  (kafka.server.KafkaConfig)
kafka_1      | 2020-01-29T00:06:09.457955200Z [2020-01-29 00:06:09,457] INFO KafkaConfig values:
kafka_1      | 2020-01-29T00:06:09.457997900Z   advertised.host.name = null
kafka_1      | 2020-01-29T00:06:09.458019900Z   advertised.listeners = INSIDE://kafka:9093
kafka_1      | 2020-01-29T00:06:09.458106800Z   advertised.port = null
kafka_1      | 2020-01-29T00:06:09.458128200Z   alter.config.policy.class.name = null
kafka_1      | 2020-01-29T00:06:09.458148300Z   alter.log.dirs.replication.quota.window.num = 11
kafka_1      | 2020-01-29T00:06:09.458163600Z   alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1      | 2020-01-29T00:06:09.458253900Z   authorizer.class.name =
kafka_1      | 2020-01-29T00:06:09.458276100Z   auto.create.topics.enable = false
kafka_1      | 2020-01-29T00:06:09.458297000Z   auto.leader.rebalance.enable = true
kafka_1      | 2020-01-29T00:06:09.458317900Z   background.threads = 10
kafka_1      | 2020-01-29T00:06:09.458408000Z   broker.id = -1
kafka_1      | 2020-01-29T00:06:09.458429300Z   broker.id.generation.enable = true
kafka_1      | 2020-01-29T00:06:09.458449900Z   broker.rack = null
kafka_1      | 2020-01-29T00:06:09.458465600Z   client.quota.callback.class = null
kafka_1      | 2020-01-29T00:06:09.458547700Z   compression.type = producer
kafka_1      | 2020-01-29T00:06:09.458570500Z   connection.failed.authentication.delay.ms = 100
kafka_1      | 2020-01-29T00:06:09.458691100Z   connections.max.idle.ms = 600000
kafka_1      | 2020-01-29T00:06:09.458720500Z   connections.max.reauth.ms = 0
kafka_1      | 2020-01-29T00:06:09.458741400Z   control.plane.listener.name = null
kafka_1      | 2020-01-29T00:06:09.458834300Z   controlled.shutdown.enable = true
kafka_1      | 2020-01-29T00:06:09.458857500Z   controlled.shutdown.max.retries = 3
kafka_1      | 2020-01-29T00:06:09.458878200Z   controlled.shutdown.retry.backoff.ms = 5000
kafka_1      | 2020-01-29T00:06:09.458898900Z   controller.socket.timeout.ms = 30000
kafka_1      | 2020-01-29T00:06:09.458999200Z   create.topic.policy.class.name = null
kafka_1      | 2020-01-29T00:06:09.459021100Z   default.replication.factor = 1
kafka_1      | 2020-01-29T00:06:09.459037300Z   delegation.token.expiry.check.interval.ms = 3600000
kafka_1      | 2020-01-29T00:06:09.459061400Z   delegation.token.expiry.time.ms = 86400000
kafka_1      | 2020-01-29T00:06:09.459168600Z   delegation.token.master.key = null
kafka_1      | 2020-01-29T00:06:09.459190000Z   delegation.token.max.lifetime.ms = 604800000
kafka_1      | 2020-01-29T00:06:09.459206700Z   delete.records.purgatory.purge.interval.requests = 1
kafka_1      | 2020-01-29T00:06:09.459293200Z   delete.topic.enable = true
kafka_1      | 2020-01-29T00:06:09.459363700Z   fetch.purgatory.purge.interval.requests = 1000
kafka_1      | 2020-01-29T00:06:09.459385200Z   group.initial.rebalance.delay.ms = 0
kafka_1      | 2020-01-29T00:06:09.459401600Z   group.max.session.timeout.ms = 1800000
kafka_1      | 2020-01-29T00:06:09.459424400Z   group.max.size = 2147483647
kafka_1      | 2020-01-29T00:06:09.459511100Z   group.min.session.timeout.ms = 6000
kafka_1      | 2020-01-29T00:06:09.459532800Z   host.name =
kafka_1      | 2020-01-29T00:06:09.459553300Z   inter.broker.listener.name = INSIDE
kafka_1      | 2020-01-29T00:06:09.459569100Z   inter.broker.protocol.version = 2.4-IV1
kafka_1      | 2020-01-29T00:06:09.459713200Z   kafka.metrics.polling.interval.secs = 10
kafka_1      | 2020-01-29T00:06:09.459730800Z   kafka.metrics.reporters = []
kafka_1      | 2020-01-29T00:06:09.459745900Z   leader.imbalance.check.interval.seconds = 300
kafka_1      | 2020-01-29T00:06:09.459767200Z   leader.imbalance.per.broker.percentage = 10
kafka_1      | 2020-01-29T00:06:09.459868100Z   listener.security.protocol.map = INSIDE:PLAINTEXT
kafka_1      | 2020-01-29T00:06:09.459890100Z   listeners = INSIDE://0.0.0.0:9093
kafka_1      | 2020-01-29T00:06:09.459911000Z   log.cleaner.backoff.ms = 15000
kafka_1      | 2020-01-29T00:06:09.459927200Z   log.cleaner.dedupe.buffer.size = 134217728
kafka_1      | 2020-01-29T00:06:09.460019200Z   log.cleaner.delete.retention.ms = 86400000
kafka_1      | 2020-01-29T00:06:09.460034900Z   log.cleaner.enable = true
kafka_1      | 2020-01-29T00:06:09.460055700Z   log.cleaner.io.buffer.load.factor = 0.9
kafka_1      | 2020-01-29T00:06:09.460076900Z   log.cleaner.io.buffer.size = 524288
kafka_1      | 2020-01-29T00:06:09.460173800Z   log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1      | 2020-01-29T00:06:09.460198100Z   log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.460213000Z   log.cleaner.min.cleanable.ratio = 0.5
kafka_1      | 2020-01-29T00:06:09.460233700Z   log.cleaner.min.compaction.lag.ms = 0
kafka_1      | 2020-01-29T00:06:09.460326500Z   log.cleaner.threads = 1
kafka_1      | 2020-01-29T00:06:09.460341500Z   log.cleanup.policy = [delete]
kafka_1      | 2020-01-29T00:06:09.460362500Z   log.dir = /tmp/kafka-logs
kafka_1      | 2020-01-29T00:06:09.460383400Z   log.dirs = /kafka/kafka-logs-792b391a7b33
kafka_1      | 2020-01-29T00:06:09.460484900Z   log.flush.interval.messages = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.460512600Z   log.flush.interval.ms = null
kafka_1      | 2020-01-29T00:06:09.460533800Z   log.flush.offset.checkpoint.interval.ms = 60000
kafka_1      | 2020-01-29T00:06:09.460651000Z   log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.460670500Z   log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1      | 2020-01-29T00:06:09.460691400Z   log.index.interval.bytes = 4096
kafka_1      | 2020-01-29T00:06:09.460712000Z   log.index.size.max.bytes = 10485760
kafka_1      | 2020-01-29T00:06:09.460807600Z   log.message.downconversion.enable = true
kafka_1      | 2020-01-29T00:06:09.460828700Z   log.message.format.version = 2.4-IV1
kafka_1      | 2020-01-29T00:06:09.460849800Z   log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.460870800Z   log.message.timestamp.type = CreateTime
kafka_1      | 2020-01-29T00:06:09.460989400Z   log.preallocate = false
kafka_1      | 2020-01-29T00:06:09.461010500Z   log.retention.bytes = -1
kafka_1      | 2020-01-29T00:06:09.461031300Z   log.retention.check.interval.ms = 300000
kafka_1      | 2020-01-29T00:06:09.461053600Z   log.retention.hours = 168
kafka_1      | 2020-01-29T00:06:09.461156800Z   log.retention.minutes = null
kafka_1      | 2020-01-29T00:06:09.461177900Z   log.retention.ms = null
kafka_1      | 2020-01-29T00:06:09.461196500Z   log.roll.hours = 168
kafka_1      | 2020-01-29T00:06:09.461291600Z   log.roll.jitter.hours = 0
kafka_1      | 2020-01-29T00:06:09.461313500Z   log.roll.jitter.ms = null
kafka_1      | 2020-01-29T00:06:09.461334500Z   log.roll.ms = null
kafka_1      | 2020-01-29T00:06:09.461355600Z   log.segment.bytes = 1073741824
kafka_2      | 2020-01-29T00:06:09.471368200Z   ssl.keystore.password = null
kafka_2      | 2020-01-29T00:06:09.471394900Z   ssl.keystore.type = JKS
kafka_2      | 2020-01-29T00:06:09.471417200Z   ssl.principal.mapping.rules = DEFAULT
kafka_2      | 2020-01-29T00:06:09.471441800Z   ssl.protocol = TLS
kafka_2      | 2020-01-29T00:06:09.471462200Z   ssl.provider = null
kafka_2      | 2020-01-29T00:06:09.471487600Z   ssl.secure.random.implementation = null
kafka_2      | 2020-01-29T00:06:09.471513700Z   ssl.trustmanager.algorithm = PKIX
kafka_2      | 2020-01-29T00:06:09.471549100Z   ssl.truststore.location = null
kafka_2      | 2020-01-29T00:06:09.471572400Z   ssl.truststore.password = null
kafka_2      | 2020-01-29T00:06:09.471601500Z   ssl.truststore.type = JKS
kafka_2      | 2020-01-29T00:06:09.471628500Z   transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_2      | 2020-01-29T00:06:09.471655200Z   transaction.max.timeout.ms = 900000
kafka_2      | 2020-01-29T00:06:09.471749700Z   transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_2      | 2020-01-29T00:06:09.471780400Z   transaction.state.log.load.buffer.size = 5242880
kafka_2      | 2020-01-29T00:06:09.471806500Z   transaction.state.log.min.isr = 1
kafka_2      | 2020-01-29T00:06:09.471832700Z   transaction.state.log.num.partitions = 50
kafka_2      | 2020-01-29T00:06:09.471866400Z   transaction.state.log.replication.factor = 1
kafka_2      | 2020-01-29T00:06:09.471894200Z   transaction.state.log.segment.bytes = 104857600
kafka_2      | 2020-01-29T00:06:09.471945700Z   transactional.id.expiration.ms = 604800000
kafka_2      | 2020-01-29T00:06:09.471971500Z   unclean.leader.election.enable = false
kafka_2      | 2020-01-29T00:06:09.471991200Z   zookeeper.connect = zookeeper:2181
kafka_2      | 2020-01-29T00:06:09.472006800Z   zookeeper.connection.timeout.ms = 6000
kafka_2      | 2020-01-29T00:06:09.472027800Z   zookeeper.max.in.flight.requests = 10
kafka_2      | 2020-01-29T00:06:09.472048500Z   zookeeper.session.timeout.ms = 6000
kafka_2      | 2020-01-29T00:06:09.472069200Z   zookeeper.set.acl = false
kafka_2      | 2020-01-29T00:06:09.472091700Z   zookeeper.sync.time.ms = 2000
kafka_2      | 2020-01-29T00:06:09.472112400Z  (kafka.server.KafkaConfig)
kafka_2      | 2020-01-29T00:06:09.555418300Z [2020-01-29 00:06:09,555] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_2      | 2020-01-29T00:06:09.556253900Z [2020-01-29 00:06:09,555] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_2      | 2020-01-29T00:06:09.561572400Z [2020-01-29 00:06:09,561] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_2      | 2020-01-29T00:06:09.623553300Z [2020-01-29 00:06:09,623] INFO Log directory /kafka/kafka-logs-ef86f02f5d49 not found, creating it. (kafka.log.LogManager)
kafka_2      | 2020-01-29T00:06:09.649285200Z [2020-01-29 00:06:09,649] INFO Loading logs. (kafka.log.LogManager)
kafka_2      | 2020-01-29T00:06:09.688211900Z [2020-01-29 00:06:09,687] INFO Logs loading complete in 38 ms. (kafka.log.LogManager)
kafka_2      | 2020-01-29T00:06:09.755493200Z [2020-01-29 00:06:09,755] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_2      | 2020-01-29T00:06:09.762722900Z [2020-01-29 00:06:09,762] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_2      | 2020-01-29T00:06:11.150715500Z [2020-01-29 00:06:11,150] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
kafka_2      | 2020-01-29T00:06:11.254519600Z [2020-01-29 00:06:11,254] INFO [SocketServer brokerId=1003] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9093,ListenerName(INSIDE),PLAINTEXT) (kafka.network.SocketServer)
kafka_2      | 2020-01-29T00:06:11.258493800Z [2020-01-29 00:06:11,258] INFO [SocketServer brokerId=1003] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
kafka_2      | 2020-01-29T00:06:11.313194000Z [2020-01-29 00:06:11,312] INFO [ExpirationReaper-1003-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:11.321759000Z [2020-01-29 00:06:11,320] INFO [ExpirationReaper-1003-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:11.334731100Z [2020-01-29 00:06:11,334] INFO [ExpirationReaper-1003-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:11.349273300Z [2020-01-29 00:06:11,347] INFO [ExpirationReaper-1003-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:11.414534200Z [2020-01-29 00:06:11,414] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_2      | 2020-01-29T00:06:11.482459100Z [2020-01-29 00:06:11,482] INFO Creating /brokers/ids/1003 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_2      | 2020-01-29T00:06:11.649463400Z [2020-01-29 00:06:11,649] INFO Stat of the created znode at /brokers/ids/1003 is: 63,63,1580256371511,1580256371511,1,0,0,72059874090483714,174,0,63
kafka_2      | 2020-01-29T00:06:11.649500700Z  (kafka.zk.KafkaZkClient)
kafka_2      | 2020-01-29T00:06:11.651101900Z [2020-01-29 00:06:11,650] INFO Registered broker 1003 at path /brokers/ids/1003 with addresses: ArrayBuffer(EndPoint(kafka,9093,ListenerName(INSIDE),PLAINTEXT)), czxid (broker epoch): 63 (kafka.zk.KafkaZkClient)
kafka_2      | 2020-01-29T00:06:11.907704900Z [2020-01-29 00:06:11,907] INFO [ExpirationReaper-1003-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:11.932880700Z [2020-01-29 00:06:11,924] INFO [ExpirationReaper-1003-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:11.932932200Z [2020-01-29 00:06:11,920] INFO [ExpirationReaper-1003-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:12.028100100Z [2020-01-29 00:06:12,027] INFO [GroupCoordinator 1003]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_2      | 2020-01-29T00:06:12.032303700Z [2020-01-29 00:06:12,032] INFO [GroupCoordinator 1003]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_2      | 2020-01-29T00:06:12.055107900Z [2020-01-29 00:06:12,048] INFO [GroupMetadataManager brokerId=1003] Removed 0 expired offsets in 17 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_2      | 2020-01-29T00:06:12.103128200Z [2020-01-29 00:06:12,102] INFO [ProducerId Manager 1003]: Acquired new producerId block (brokerId:1003,blockStartProducerId:1000,blockEndProducerId:1999) by writing to Zk with path version 2 (kafka.coordinator.transaction.ProducerIdManager)
kafka_2      | 2020-01-29T00:06:12.189252300Z [2020-01-29 00:06:12,188] INFO [TransactionCoordinator id=1003] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_2      | 2020-01-29T00:06:12.197179800Z [2020-01-29 00:06:12,196] INFO [TransactionCoordinator id=1003] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_2      | 2020-01-29T00:06:12.209801800Z [2020-01-29 00:06:12,209] INFO [Transaction Marker Channel Manager 1003]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_2      | 2020-01-29T00:06:12.278322500Z [2020-01-29 00:06:12,278] INFO [ExpirationReaper-1003-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_2      | 2020-01-29T00:06:12.346776000Z [2020-01-29 00:06:12,346] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_2      | 2020-01-29T00:06:12.384549500Z [2020-01-29 00:06:12,384] INFO [SocketServer brokerId=1003] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
kafka_2      | 2020-01-29T00:06:12.407264700Z [2020-01-29 00:06:12,406] INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka_2      | 2020-01-29T00:06:12.407631300Z [2020-01-29 00:06:12,407] INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser)
kafka_2      | 2020-01-29T00:06:12.409392600Z [2020-01-29 00:06:12,409] INFO Kafka startTimeMs: 1580256372385 (org.apache.kafka.common.utils.AppInfoParser)
kafka_2      | 2020-01-29T00:06:12.416555100Z [2020-01-29 00:06:12,416] INFO [KafkaServer id=1003] started (kafka.server.KafkaServer)
kafka_2      | 2020-01-29T00:06:14.683284800Z waiting for kafka to be ready
kafka_2      | 2020-01-29T00:06:24.687910700Z waiting for kafka to be ready
kafka_2      | 2020-01-29T00:06:34.692666000Z waiting for kafka to be ready
kafka_2      | [... "waiting for kafka to be ready" repeats every ~10 seconds ...]
kafka_2      | 2020-01-29T00:09:54.522952000Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.461495900Z   log.segment.delete.delay.ms = 60000
kafka_1      | 2020-01-29T00:06:09.461523900Z   max.connections = 2147483647
kafka_1      | 2020-01-29T00:06:09.461687100Z   max.connections.per.ip = 2147483647
kafka_1      | 2020-01-29T00:06:09.461717000Z   max.connections.per.ip.overrides =
kafka_1      | 2020-01-29T00:06:09.461738600Z   max.incremental.fetch.session.cache.slots = 1000
kafka_1      | 2020-01-29T00:06:09.461759600Z   message.max.bytes = 1000012
kafka_1      | 2020-01-29T00:06:09.461855400Z   metric.reporters = []
kafka_1      | 2020-01-29T00:06:09.461876700Z   metrics.num.samples = 2
kafka_1      | 2020-01-29T00:06:09.461897500Z   metrics.recording.level = INFO
kafka_1      | 2020-01-29T00:06:09.461920000Z   metrics.sample.window.ms = 30000
kafka_1      | 2020-01-29T00:06:09.462016900Z   min.insync.replicas = 1
kafka_1      | 2020-01-29T00:06:09.462037700Z   num.io.threads = 8
kafka_1      | 2020-01-29T00:06:09.462054800Z   num.network.threads = 3
kafka_1      | 2020-01-29T00:06:09.462093000Z   num.partitions = 1
kafka_1      | 2020-01-29T00:06:09.462239900Z   num.recovery.threads.per.data.dir = 1
kafka_1      | 2020-01-29T00:06:09.462261200Z   num.replica.alter.log.dirs.threads = null
kafka_1      | 2020-01-29T00:06:09.462282000Z   num.replica.fetchers = 1
kafka_1      | 2020-01-29T00:06:09.462394100Z   offset.metadata.max.bytes = 4096
kafka_1      | 2020-01-29T00:06:09.462419400Z   offsets.commit.required.acks = -1
kafka_1      | 2020-01-29T00:06:09.462439300Z   offsets.commit.timeout.ms = 5000
kafka_1      | 2020-01-29T00:06:09.462510800Z   offsets.load.buffer.size = 5242880
kafka_1      | 2020-01-29T00:06:09.462526400Z   offsets.retention.check.interval.ms = 600000
kafka_1      | 2020-01-29T00:06:09.462547100Z   offsets.retention.minutes = 10080
kafka_1      | 2020-01-29T00:06:09.462567800Z   offsets.topic.compression.codec = 0
kafka_1      | 2020-01-29T00:06:09.462587000Z   offsets.topic.num.partitions = 50
kafka_1      | 2020-01-29T00:06:09.462730800Z   offsets.topic.replication.factor = 1
kafka_1      | 2020-01-29T00:06:09.462865600Z   offsets.topic.segment.bytes = 104857600
kafka_1      | 2020-01-29T00:06:09.462889300Z   password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1      | 2020-01-29T00:06:09.462903700Z   password.encoder.iterations = 4096
kafka_1      | 2020-01-29T00:06:09.462924500Z   password.encoder.key.length = 128
kafka_1      | 2020-01-29T00:06:09.463013700Z   password.encoder.keyfactory.algorithm = null
kafka_1      | 2020-01-29T00:06:09.463035600Z   password.encoder.old.secret = null
kafka_1      | 2020-01-29T00:06:09.463056300Z   password.encoder.secret = null
kafka_1      | 2020-01-29T00:06:09.463077100Z   port = 9092
kafka_1      | 2020-01-29T00:06:09.463166600Z   principal.builder.class = null
kafka_1      | 2020-01-29T00:06:09.463188500Z   producer.purgatory.purge.interval.requests = 1000
kafka_1      | 2020-01-29T00:06:09.463209200Z   queued.max.request.bytes = -1
kafka_1      | 2020-01-29T00:06:09.463223900Z   queued.max.requests = 500
kafka_1      | 2020-01-29T00:06:09.463257600Z   quota.consumer.default = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.463347200Z   quota.producer.default = 9223372036854775807
kafka_1      | 2020-01-29T00:06:09.463368100Z   quota.window.num = 11
kafka_1      | 2020-01-29T00:06:09.463388800Z   quota.window.size.seconds = 1
kafka_1      | 2020-01-29T00:06:09.463438700Z   replica.fetch.backoff.ms = 1000
kafka_1      | 2020-01-29T00:06:09.463493300Z   replica.fetch.max.bytes = 1048576
kafka_1      | 2020-01-29T00:06:09.463513700Z   replica.fetch.min.bytes = 1
kafka_1      | 2020-01-29T00:06:09.463534400Z   replica.fetch.response.max.bytes = 10485760
kafka_1      | 2020-01-29T00:06:09.463613300Z   replica.fetch.wait.max.ms = 500
kafka_1      | 2020-01-29T00:06:09.463662400Z   replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1      | 2020-01-29T00:06:09.463683600Z   replica.lag.time.max.ms = 10000
kafka_1      | 2020-01-29T00:06:09.463704500Z   replica.selector.class = null
kafka_1      | 2020-01-29T00:06:09.463719800Z   replica.socket.receive.buffer.bytes = 65536
kafka_1      | 2020-01-29T00:06:09.463838000Z   replica.socket.timeout.ms = 30000
kafka_1      | 2020-01-29T00:06:09.463866600Z   replication.quota.window.num = 11
kafka_1      | 2020-01-29T00:06:09.463976900Z   replication.quota.window.size.seconds = 1
kafka_1      | 2020-01-29T00:06:09.464007100Z   request.timeout.ms = 30000
kafka_1      | 2020-01-29T00:06:09.464101400Z   reserved.broker.max.id = 1000
kafka_1      | 2020-01-29T00:06:09.464149000Z   sasl.client.callback.handler.class = null
kafka_1      | 2020-01-29T00:06:09.464172000Z   sasl.enabled.mechanisms = [GSSAPI]
kafka_1      | 2020-01-29T00:06:09.464281000Z   sasl.jaas.config = null
kafka_1      | 2020-01-29T00:06:09.464310400Z   sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1      | 2020-01-29T00:06:09.464441900Z   sasl.kerberos.min.time.before.relogin = 60000
kafka_1      | 2020-01-29T00:06:09.464470500Z   sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1      | 2020-01-29T00:06:09.464497100Z   sasl.kerberos.service.name = null
kafka_1      | 2020-01-29T00:06:09.464687500Z   sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1      | 2020-01-29T00:06:09.464711700Z   sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1      | 2020-01-29T00:06:09.464737600Z   sasl.login.callback.handler.class = null
kafka_1      | 2020-01-29T00:06:09.464853900Z   sasl.login.class = null
kafka_1      | 2020-01-29T00:06:09.464884100Z   sasl.login.refresh.buffer.seconds = 300
kafka_3      | 2020-01-29T00:06:13.117472300Z [2020-01-29 00:06:13,116] ERROR [KafkaApi-1001] Error when handling request: clientId=1002, correlationId=0, api=UPDATE_METADATA, version=6, body={controller_id=1002,controller_epoch=1,broker_epoch=62,topic_states=[],live_brokers=[{id=1001,endpoints=[{port=9093,host=kafka,listener=INSIDE,security_protocol=0,_tagged_fields={}}],rack=null,_tagged_fields={}},{id=1002,endpoints=[{port=9093,host=kafka,listener=INSIDE,security_protocol=0,_tagged_fields={}}],rack=null,_tagged_fields={}},{id=1003,endpoints=[{port=9093,host=kafka,listener=INSIDE,security_protocol=0,_tagged_fields={}}],rack=null,_tagged_fields={}}],_tagged_fields={}} (kafka.server.KafkaApis)
kafka_1      | 2020-01-29T00:06:09.464911400Z   sasl.login.refresh.min.period.seconds = 60
kafka_3      | 2020-01-29T00:06:13.117511100Z java.lang.IllegalStateException: Epoch 62 larger than current broker epoch 61
kafka_1      | 2020-01-29T00:06:09.465023000Z   sasl.login.refresh.window.factor = 0.8
kafka_3      | 2020-01-29T00:06:13.117536000Z   at kafka.server.KafkaApis.isBrokerEpochStale(KafkaApis.scala:2915)
kafka_1      | 2020-01-29T00:06:09.465052200Z   sasl.login.refresh.window.jitter = 0.05
kafka_3      | 2020-01-29T00:06:13.117557500Z   at kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:267)
kafka_1      | 2020-01-29T00:06:09.465113000Z   sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_3      | 2020-01-29T00:06:13.117654900Z   at kafka.server.KafkaApis.handle(KafkaApis.scala:132)
kafka_1      | 2020-01-29T00:06:09.465187300Z   sasl.server.callback.handler.class = null
kafka_3      | 2020-01-29T00:06:13.117672500Z   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
kafka_1      | 2020-01-29T00:06:09.465212900Z   security.inter.broker.protocol = PLAINTEXT
kafka_3      | 2020-01-29T00:06:13.117691800Z   at java.lang.Thread.run(Thread.java:748)
kafka_1      | 2020-01-29T00:06:09.465242400Z   security.providers = null
kafka_3      | 2020-01-29T00:06:13.119845200Z [2020-01-29 00:06:13,117] ERROR [KafkaApi-1001] Error when handling request: clientId=1002, correlationId=0, api=UPDATE_METADATA, version=6, body={controller_id=1002,controller_epoch=1,broker_epoch=63,topic_states=[],live_brokers=[{id=1001,endpoints=[{port=9093,host=kafka,listener=INSIDE,security_protocol=0,_tagged_fields={}}],rack=null,_tagged_fields={}},{id=1002,endpoints=[{port=9093,host=kafka,listener=INSIDE,security_protocol=0,_tagged_fields={}}],rack=null,_tagged_fields={}},{id=1003,endpoints=[{port=9093,host=kafka,listener=INSIDE,security_protocol=0,_tagged_fields={}}],rack=null,_tagged_fields={}}],_tagged_fields={}} (kafka.server.KafkaApis)
kafka_1      | 2020-01-29T00:06:09.465344200Z   socket.receive.buffer.bytes = 102400
kafka_3      | 2020-01-29T00:06:13.119874000Z java.lang.IllegalStateException: Epoch 63 larger than current broker epoch 61
kafka_1      | 2020-01-29T00:06:09.465371700Z   socket.request.max.bytes = 104857600
kafka_3      | 2020-01-29T00:06:13.119894300Z   at kafka.server.KafkaApis.isBrokerEpochStale(KafkaApis.scala:2915)
kafka_1      | 2020-01-29T00:06:09.465400600Z   socket.send.buffer.bytes = 102400
kafka_3      | 2020-01-29T00:06:13.119921200Z   at kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:267)
kafka_1      | 2020-01-29T00:06:09.465504800Z   ssl.cipher.suites = []
kafka_3      | 2020-01-29T00:06:13.119951100Z   at kafka.server.KafkaApis.handle(KafkaApis.scala:132)
kafka_1      | 2020-01-29T00:06:09.465530400Z   ssl.client.auth = none
kafka_3      | 2020-01-29T00:06:13.119968100Z   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
kafka_1      | 2020-01-29T00:06:09.465553200Z   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_3      | 2020-01-29T00:06:13.119992700Z   at java.lang.Thread.run(Thread.java:748)
kafka_1      | 2020-01-29T00:06:09.465683400Z   ssl.endpoint.identification.algorithm = https
kafka_3      | 2020-01-29T00:06:14.845291000Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.465706700Z   ssl.key.password = null
kafka_3      | 2020-01-29T00:06:24.850223200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.465732100Z   ssl.keymanager.algorithm = SunX509
kafka_3      | 2020-01-29T00:06:34.821999700Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.465841600Z   ssl.keystore.location = null
kafka_3      | 2020-01-29T00:06:44.825740200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.465866900Z   ssl.keystore.password = null
kafka_3      | 2020-01-29T00:06:54.830494100Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.465889900Z   ssl.keystore.type = JKS
kafka_3      | 2020-01-29T00:07:04.800529500Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.465913700Z   ssl.principal.mapping.rules = DEFAULT
kafka_3      | 2020-01-29T00:07:14.806684300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466028600Z   ssl.protocol = TLS
kafka_3      | 2020-01-29T00:07:24.810211400Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466055500Z   ssl.provider = null
kafka_3      | 2020-01-29T00:07:34.780129700Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466085900Z   ssl.secure.random.implementation = null
kafka_3      | 2020-01-29T00:07:44.785379700Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466111700Z   ssl.trustmanager.algorithm = PKIX
kafka_3      | 2020-01-29T00:07:54.791632600Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466143800Z   ssl.truststore.location = null
kafka_3      | 2020-01-29T00:08:04.762520200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466250200Z   ssl.truststore.password = null
kafka_3      | 2020-01-29T00:08:14.767767000Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466276500Z   ssl.truststore.type = JKS
kafka_3      | 2020-01-29T00:08:24.772290200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466303000Z   transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_3      | 2020-01-29T00:08:34.740634800Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466339900Z   transaction.max.timeout.ms = 900000
kafka_3      | 2020-01-29T00:08:44.743723500Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466366600Z   transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_3      | 2020-01-29T00:08:54.747530200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466398700Z   transaction.state.log.load.buffer.size = 5242880
kafka_3      | 2020-01-29T00:09:04.716281100Z waiting for kafka to be ready
kafka_3      | 2020-01-29T00:09:14.722164800Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466424700Z   transaction.state.log.min.isr = 1
kafka_3      | 2020-01-29T00:09:24.727023300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466446600Z   transaction.state.log.num.partitions = 50
kafka_3      | 2020-01-29T00:09:34.697298400Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466472800Z   transaction.state.log.replication.factor = 1
kafka_3      | 2020-01-29T00:09:44.704472600Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466665100Z   transaction.state.log.segment.bytes = 104857600
kafka_3      | 2020-01-29T00:09:54.710342200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:09.466691700Z   transactional.id.expiration.ms = 604800000
kafka_1      | 2020-01-29T00:06:09.466717200Z   unclean.leader.election.enable = false
kafka_1      | 2020-01-29T00:06:09.466736800Z   zookeeper.connect = zookeeper:2181
kafka_1      | 2020-01-29T00:06:09.466761700Z   zookeeper.connection.timeout.ms = 6000
kafka_1      | 2020-01-29T00:06:09.466784300Z   zookeeper.max.in.flight.requests = 10
kafka_1      | 2020-01-29T00:06:09.466809000Z   zookeeper.session.timeout.ms = 6000
kafka_1      | 2020-01-29T00:06:09.466834600Z   zookeeper.set.acl = false
kafka_1      | 2020-01-29T00:06:09.466860000Z   zookeeper.sync.time.ms = 2000
kafka_1      | 2020-01-29T00:06:09.466885700Z  (kafka.server.KafkaConfig)
kafka_1      | 2020-01-29T00:06:09.559072700Z [2020-01-29 00:06:09,558] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1      | 2020-01-29T00:06:09.561060700Z [2020-01-29 00:06:09,560] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1      | 2020-01-29T00:06:09.569569200Z [2020-01-29 00:06:09,569] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1      | 2020-01-29T00:06:09.635860100Z [2020-01-29 00:06:09,635] INFO Log directory /kafka/kafka-logs-792b391a7b33 not found, creating it. (kafka.log.LogManager)
kafka_1      | 2020-01-29T00:06:09.656310500Z [2020-01-29 00:06:09,655] INFO Loading logs. (kafka.log.LogManager)
kafka_1      | 2020-01-29T00:06:09.678651500Z [2020-01-29 00:06:09,678] INFO Logs loading complete in 22 ms. (kafka.log.LogManager)
kafka_1      | 2020-01-29T00:06:09.717345800Z [2020-01-29 00:06:09,717] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1      | 2020-01-29T00:06:09.725253800Z [2020-01-29 00:06:09,724] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1      | 2020-01-29T00:06:11.149639300Z [2020-01-29 00:06:11,149] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
kafka_1      | 2020-01-29T00:06:11.240546400Z [2020-01-29 00:06:11,240] INFO [SocketServer brokerId=1002] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9093,ListenerName(INSIDE),PLAINTEXT) (kafka.network.SocketServer)
kafka_1      | 2020-01-29T00:06:11.244843200Z [2020-01-29 00:06:11,244] INFO [SocketServer brokerId=1002] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
kafka_1      | 2020-01-29T00:06:11.308478100Z [2020-01-29 00:06:11,308] INFO [ExpirationReaper-1002-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:11.316158100Z [2020-01-29 00:06:11,314] INFO [ExpirationReaper-1002-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:11.317218700Z [2020-01-29 00:06:11,317] INFO [ExpirationReaper-1002-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:11.325073700Z [2020-01-29 00:06:11,322] INFO [ExpirationReaper-1002-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:11.400150500Z [2020-01-29 00:06:11,399] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_1      | 2020-01-29T00:06:11.471218700Z [2020-01-29 00:06:11,470] INFO Creating /brokers/ids/1002 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_1      | 2020-01-29T00:06:11.644811600Z [2020-01-29 00:06:11,644] INFO Stat of the created znode at /brokers/ids/1002 is: 62,62,1580256371511,1580256371511,1,0,0,72059874090483712,174,0,62
kafka_1      | 2020-01-29T00:06:11.644842100Z  (kafka.zk.KafkaZkClient)
kafka_1      | 2020-01-29T00:06:11.646304600Z [2020-01-29 00:06:11,645] INFO Registered broker 1002 at path /brokers/ids/1002 with addresses: ArrayBuffer(EndPoint(kafka,9093,ListenerName(INSIDE),PLAINTEXT)), czxid (broker epoch): 62 (kafka.zk.KafkaZkClient)
kafka_1      | 2020-01-29T00:06:11.903513500Z [2020-01-29 00:06:11,888] INFO [ExpirationReaper-1002-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:11.905432000Z [2020-01-29 00:06:11,903] INFO [ExpirationReaper-1002-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:11.907901000Z [2020-01-29 00:06:11,907] INFO [ExpirationReaper-1002-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:11.958011600Z [2020-01-29 00:06:11,954] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka_1      | 2020-01-29T00:06:11.989958900Z [2020-01-29 00:06:11,989] INFO [GroupCoordinator 1002]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1      | 2020-01-29T00:06:11.992038700Z [2020-01-29 00:06:11,991] INFO [GroupCoordinator 1002]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1      | 2020-01-29T00:06:12.054399000Z [2020-01-29 00:06:12,053] INFO [GroupMetadataManager brokerId=1002] Removed 0 expired offsets in 62 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1      | 2020-01-29T00:06:12.071769800Z [2020-01-29 00:06:12,071] INFO [ProducerId Manager 1002]: Acquired new producerId block (brokerId:1002,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
kafka_1      | 2020-01-29T00:06:12.252256000Z [2020-01-29 00:06:12,251] INFO [TransactionCoordinator id=1002] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1      | 2020-01-29T00:06:12.294172100Z [2020-01-29 00:06:12,293] INFO [TransactionCoordinator id=1002] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1      | 2020-01-29T00:06:12.310518100Z [2020-01-29 00:06:12,310] INFO [Transaction Marker Channel Manager 1002]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1      | 2020-01-29T00:06:12.426854800Z [2020-01-29 00:06:12,426] INFO [ExpirationReaper-1002-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | 2020-01-29T00:06:12.643056700Z [2020-01-29 00:06:12,642] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_1      | 2020-01-29T00:06:12.725927900Z [2020-01-29 00:06:12,725] INFO [SocketServer brokerId=1002] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
kafka_1      | 2020-01-29T00:06:12.742102600Z [2020-01-29 00:06:12,741] INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1      | 2020-01-29T00:06:12.752161100Z [2020-01-29 00:06:12,751] INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1      | 2020-01-29T00:06:12.752715100Z [2020-01-29 00:06:12,752] INFO Kafka startTimeMs: 1580256372727 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1      | 2020-01-29T00:06:12.756012600Z [2020-01-29 00:06:12,755] INFO [KafkaServer id=1002] started (kafka.server.KafkaServer)
kafka_1      | 2020-01-29T00:06:14.779748300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:24.783067300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:34.750255700Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:44.753860600Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:06:54.756409800Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:07:04.723552300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:07:14.726078200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:07:24.732014600Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:07:34.702656200Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:07:44.706572300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:07:54.711261700Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:08:04.678730000Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:08:14.683392000Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:08:24.686529400Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:08:34.657812800Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:08:44.661607100Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:08:54.664847800Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:09:04.633774400Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:09:14.637704100Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:09:24.640596300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:09:34.610047600Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:09:44.614563300Z waiting for kafka to be ready
kafka_1      | 2020-01-29T00:09:54.619804700Z waiting for kafka to be ready
OneCricketeer commented 4 years ago

You have three containers each exposing the same ports and the same name. If you're using a single machine, what's the point of running multiple brokers?

raginjason commented 4 years ago

You have three containers each exposing the same ports and the same name. If you're using a single machine, what's the point of running multiple brokers?

I'm not exposing any ports at all actually. Am I missing something here?

OneCricketeer commented 4 years ago

How are you expecting to actually send data to Kafka without exposing ports? Similarly, Zookeeper needs ports and environment variables itself to function well

raginjason commented 4 years ago

How are you expecting to actually send data to Kafka without exposing ports? Similarly, Zookeeper needs ports and environment variables itself to function well

By starting the consumers in other Docker Compose services alongside Kafka and ZooKeeper

OneCricketeer commented 4 years ago

That makes sense.

Okay, so the other thing is that KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093 tells every broker to advertise itself as kafka:9093, so clients and other brokers will all try to connect there... but your scaled containers are actually named kafka_1 through kafka_3
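One common way around this (a sketch only, not something proposed verbatim in this thread: the service names kafka1/kafka2 and the KAFKA_BROKER_ID values are illustrative assumptions) is to drop --scale and define each broker as its own Compose service, so that each broker can advertise its own resolvable DNS name:

```yaml
version: "3.3"
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest

  kafka1:
    image: wurstmeister/kafka:2.12-2.4.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1                       # fixed ID so restarts reuse the same broker
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka1:9093   # matches this service's DNS name
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE

  kafka2:
    image: wurstmeister/kafka:2.12-2.4.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka2:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

With distinct advertised names, inter-broker UPDATE_METADATA requests no longer get routed to whichever container the shared `kafka` name happens to resolve to, which is what produces the "Epoch N larger than current broker epoch" errors above.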

sscaling commented 4 years ago

This looks to have been discussed and the issue identified. I don't think the --scale parameter will be your friend here, because of the way the container names (and subsequently the routable hostnames on the bridge network) get created. In the tutorial this works because the README uses your public IP (on the LAN) as the advertised host, so clients both inside and outside the Docker network can route to the brokers. This is explained in the second use case in the Connectivity Guide. If you don't need or want to expose the Kafka port, I've typically seen people just define multiple Kafka services with different names. There are various examples in the closed/open issues.
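For reference, the README's scaled-broker use case mentioned above looks roughly like the following sketch (the LAN IP 192.168.1.10 is a placeholder assumption; substitute your host's address, and note that `- "9092"` publishes an ephemeral host port per scaled container):

```yaml
version: "3.3"
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"

  kafka:
    image: wurstmeister/kafka:2.12-2.4.0
    depends_on:
      - zookeeper
    ports:
      - "9092"                                  # ephemeral host port, unique per replica
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.10  # placeholder: your machine's LAN IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

Because every broker advertises the host's IP plus its own published port, both the other brokers and external clients can reach each replica individually, which is why --scale works in that configuration but not with a shared `kafka:9093` advertised listener.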

Closing due to staleness, as there was no further response from the OP.