Closed · mjallday closed this issue 5 years ago
I cloned the repo and ran:
export DOCKER_HOST_IP=$(docker-machine ip dev)
docker-compose -f full-stack.yml up
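(These are two separate commands. As a quick sanity check that the exported variable actually reaches the compose file — just a suggestion on my part, assuming full-stack.yml interpolates DOCKER_HOST_IP into the external listener — you can render the resolved config:)

# Show the exported IP and the interpolated advertised listeners
echo $DOCKER_HOST_IP
docker-compose -f full-stack.yml config | grep KAFKA_ADVERTISED_LISTENERS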
The stack comes up but then crashes. It looks like the Kafka log directory isn't initializing correctly.
===> ENV Variables ... ALLOW_UNSIGNED=false COMPONENT=kafka CONFLUENT_DEB_VERSION=1 CONFLUENT_MAJOR_VERSION=5 CONFLUENT_MINOR_VERSION=1 CONFLUENT_MVN_LABEL= CONFLUENT_PATCH_VERSION=0 CONFLUENT_PLATFORM_LABEL= CONFLUENT_VERSION=5.1.0 CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar HOME=/root HOSTNAME=kafka1 KAFKA_ADVERTISED_LISTENERS=LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://192.168.99.100:9092 KAFKA_BROKER_ID=1 KAFKA_INTER_BROKER_LISTENER_NAME=LISTENER_DOCKER_INTERNAL KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT KAFKA_LOG4J_LOGGERS=kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 KAFKA_VERSION=2.1.0 KAFKA_ZOOKEEPER_CONNECT=zoo1:2181 LANG=C.UTF-8 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD=/ PYTHON_PIP_VERSION=8.1.2 PYTHON_VERSION=2.7.9-1 SCALA_VERSION=2.11 SHLVL=1 ZULU_OPENJDK_VERSION=8=8.30.0.1 _=/usr/bin/env ===> User uid=0(root) gid=0(root) groups=0(root) ===> Configuring ... ===> Running preflight checks ... ===> Check if /var/lib/kafka/data is writable ... ===> Check if Zookeeper is healthy ... ===> Launching ... ===> Launching kafka ... [2019-02-16 19:28:34,619] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [2019-02-16 19:28:35,572] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://192.168.99.100:9092 advertised.port = null alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = 1 broker.id.generation.enable = true broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor broker.rack = null client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = LISTENER_DOCKER_INTERNAL inter.broker.protocol.version = 2.1-IV2 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT listeners = LISTENER_DOCKER_INTERNAL://0.0.0.0:19092,LISTENER_DOCKER_EXTERNAL://0.0.0.0:9092 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 
log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /var/lib/kafka/data log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 2.1-IV2 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1000012 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 9092 principal.builder.class = null producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 
102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = zoo1:2181 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 6000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-02-16 19:28:35,763] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig) [2019-02-16 19:28:35,831] WARN Please note that the support metrics collection feature ("Metrics") of Proactive Support is enabled. With Metrics enabled, this broker is configured to collect and report certain broker and cluster metadata ("Metadata") about your use of the Confluent Platform (including without limitation, your remote internet protocol address) to Confluent, Inc. ("Confluent") or its parent, subsidiaries, affiliates or service providers every 24hours. This Metadata may be transferred to any country in which Confluent maintains facilities. For a more in depth discussion of how Confluent processes such information, please read our Privacy Policy located at http://www.confluent.io/privacy. By proceeding with `confluent.support.metrics.enable=true`, you agree to all such collection, transfer, storage and use of Metadata by Confluent. You can turn the Metrics feature off by setting `confluent.support.metrics.enable=false` in the broker configuration and restarting the broker. See the Confluent Platform documentation for further information. (io.confluent.support.metrics.SupportedServerStartable) [2019-02-16 19:28:35,845] INFO starting (kafka.server.KafkaServer) [2019-02-16 19:28:35,847] INFO Connecting to zookeeper on zoo1:2181 (kafka.server.KafkaServer) [2019-02-16 19:28:35,898] INFO [ZooKeeperClient] Initializing a new session to zoo1:2181. 
(kafka.zookeeper.ZooKeeperClient) [2019-02-16 19:28:35,923] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,924] INFO Client environment:host.name=kafka1 (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,924] INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,924] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,924] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,924] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/javax.annotation-api-1.2.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.7.2.jar:/usr/bin/../share/java/kafka/lz4-java-1.5.0.jar:/usr/bin/../share/java/kafka/javax.inject-1.jar:/usr/bin/../share/java/kafka/jersey-server-2.27.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/httpmime-4.5.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.25.jar:/usr/bin/../share/java/kafka/common-utils-5.1.0.jar:/usr/bin/../share/java/kafka/connect-runtime-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/support-metrics-common-5.1.0.jar:/usr/bin/../share/java/kafka/netty-3.10.6.Final.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-test-sources.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-test.jar:/usr/bin/../share/java/kafka/connect-api-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/httpclient-4.5.2.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.27.jar:/usr/bin/../share/java/kafka/connect-json-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/javassist-3.22.0-CR2.jar:/usr/bin/../share/java/kafka/scala-library-2.11.12.jar:/usr/bin/../share/java/kafka/guava-20.0.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/kafka-clients-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/reflections-0.9.11.jar:/usr/bin/../share/java/kafka/commons-compress-1.8.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/connect-file-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-javadoc.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.1.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.8.3.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/commons-codec-1.9.jar:/usr/bin/../share/java/kafka/jackson-databind-2.9.7.jar:/usr/bin/../share/java/kafka/xz-1.5.jar:/usr/bin/../share/java/kafka/hk2-utils-2.5.0-b42.jar:/usr/bin/../share/java/kafka/zkclient-0.10.jar:/usr/bin/../share/java/kafka/scala-reflect-2.11.12.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/commons-validator-1.4.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.13.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-scaladoc.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/u
sr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-sources.jar:/usr/bin/../share/java/kafka/javax.inject-2.5.0-b42.jar:/usr/bin/../share/java/kafka/paranamer-2.7.jar:/usr/bin/../share/java/kafka/jline-0.9.94.jar:/usr/bin/../share/java/kafka/hk2-api-2.5.0-b42.jar:/usr/bin/../share/java/kafka/plexus-utils-3.1.0.jar:/usr/bin/../share/java/kafka/jersey-common-2.27.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/httpcore-4.4.4.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/audience-annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka-streams-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/scala-logging_2.11-3.9.0.jar:/usr/bin/../share/java/kafka/support-metrics-client-5.1.0.jar:/usr/bin/../share/java/kafka/avro-1.8.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.5.jar:/usr/bin/../share/java/kafka/jackson-core-2.9.7.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.9.7.jar:/usr/bin/../share/java/kafka/maven-artifact-3.5.4.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.27.jar:/usr/bin/../share/java/kafka/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.27.jar:/usr/bin/../share/java/kafka/kafka-tools-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/rocksdbjni-5.14.2.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.25.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.9.7.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.5.0-b42.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.9.7.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.27.jar:/usr/bin/../share/java/kafka/jersey-client-2.27.jar:/usr/bin/../share/java/kafka/hk2-locator-2.5.0-b42.jar:/usr/bin/../share/java/kafka/connect-transforms-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.9.7.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.3.5-4.jar:/usr/bin/../share/java/kafka/commons-lang3-3.1.jar:/usr/bin/../share/java/kafka/commons-digester-1.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.11-2.1.0-cp1.jar:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,927] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,928] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,928] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,928] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,929] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,929] INFO Client environment:os.version=4.4.89-boot2docker (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,929] INFO Client 
environment:user.name=root (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,929] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,929] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,931] INFO Initiating client connection, connectString=zoo1:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@dd05255 (org.apache.zookeeper.ZooKeeper) [2019-02-16 19:28:35,979] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) [2019-02-16 19:28:35,985] INFO Opening socket connection to server zoo1/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [2019-02-16 19:28:35,997] INFO Socket connection established to zoo1/172.18.0.2:2181, initiating session (org.apache.zookeeper.ClientCnxn) [2019-02-16 19:28:36,013] INFO Session establishment complete on server zoo1/172.18.0.2:2181, sessionid = 0x168f7c7d2160001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn) [2019-02-16 19:28:36,016] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient) [2019-02-16 19:28:36,880] INFO Cluster ID = c7GS8FhATlqZfshc6U3_2g (kafka.server.KafkaServer) [2019-02-16 19:28:36,886] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) [2019-02-16 19:28:37,034] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://192.168.99.100:9092 advertised.port = null alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = 1 broker.id.generation.enable = true broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor broker.rack = null client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = LISTENER_DOCKER_INTERNAL inter.broker.protocol.version = 2.1-IV2 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT listeners = LISTENER_DOCKER_INTERNAL://0.0.0.0:19092,LISTENER_DOCKER_EXTERNAL://0.0.0.0:9092 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 
1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /var/lib/kafka/data log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 2.1-IV2 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1000012 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 9092 principal.builder.class = null producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT 
socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = zoo1:2181 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 6000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-02-16 19:28:37,066] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://192.168.99.100:9092 advertised.port = null alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = 1 broker.id.generation.enable = true broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor broker.rack = null client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = LISTENER_DOCKER_INTERNAL inter.broker.protocol.version = 2.1-IV2 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT listeners = LISTENER_DOCKER_INTERNAL://0.0.0.0:19092,LISTENER_DOCKER_EXTERNAL://0.0.0.0:9092 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 
log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /var/lib/kafka/data log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 2.1-IV2 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1000012 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 9092 principal.builder.class = null producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] 
ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = zoo1:2181 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 6000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-02-16 19:28:37,145] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-02-16 19:28:37,145] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-02-16 19:28:37,154] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-02-16 19:28:37,236] INFO Loading logs. (kafka.log.LogManager) [2019-02-16 19:28:37,317] INFO Logs loading complete in 81 ms. (kafka.log.LogManager) [2019-02-16 19:28:37,411] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) [2019-02-16 19:28:37,414] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) [2019-02-16 19:28:37,421] INFO Starting the log cleaner (kafka.log.LogCleaner) [2019-02-16 19:28:38,196] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) [2019-02-16 19:28:39,219] INFO Awaiting socket connections on 0.0.0.0:19092. (kafka.network.Acceptor) [2019-02-16 19:28:39,336] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor) [2019-02-16 19:28:39,429] INFO [SocketServer brokerId=1] Started 2 acceptor threads (kafka.network.SocketServer) [2019-02-16 19:28:39,531] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2019-02-16 19:28:39,536] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2019-02-16 19:28:39,550] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2019-02-16 19:28:39,667] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) [2019-02-16 19:28:39,891] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) [2019-02-16 19:28:39,898] INFO Result of znode creation at /brokers/ids/1 is: OK (kafka.zk.KafkaZkClient) [2019-02-16 19:28:39,908] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(kafka1,19092,ListenerName(LISTENER_DOCKER_INTERNAL),PLAINTEXT), EndPoint(192.168.99.100,9092,ListenerName(LISTENER_DOCKER_EXTERNAL),PLAINTEXT)) (kafka.zk.KafkaZkClient) [2019-02-16 19:28:39,921] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) [2019-02-16 19:28:40,392] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) [2019-02-16 19:28:40,410] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2019-02-16 19:28:40,447] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2019-02-16 19:28:40,452] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2019-02-16 19:28:40,471] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) [2019-02-16 19:28:40,541] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) [2019-02-16 19:28:40,542] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) [2019-02-16 19:28:40,564] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) [2019-02-16 19:28:40,578] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) [2019-02-16 19:28:40,579] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) [2019-02-16 19:28:40,582] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) [2019-02-16 19:28:40,585] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) [2019-02-16 19:28:40,621] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 39 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [2019-02-16 19:28:40,681] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager) [2019-02-16 19:28:40,822] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) [2019-02-16 19:28:40,856] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) [2019-02-16 19:28:40,856] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator) [2019-02-16 19:28:40,928] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) [2019-02-16 19:28:40,931] INFO [Controller id=1] Partitions being reassigned: Map() (kafka.controller.KafkaController) [2019-02-16 19:28:40,936] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) [2019-02-16 19:28:40,936] INFO [Controller id=1] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController) [2019-02-16 19:28:40,936] INFO [Controller id=1] Current list of topics in the cluster: Set() (kafka.controller.KafkaController) [2019-02-16 19:28:40,937] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) [2019-02-16 19:28:40,951] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) [2019-02-16 19:28:40,954] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) [2019-02-16 19:28:40,956] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) [2019-02-16 19:28:40,957] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) [2019-02-16 19:28:40,993] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ReplicaStateMachine) [2019-02-16 19:28:41,001] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ReplicaStateMachine) [2019-02-16 19:28:41,012] INFO [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> Map() (kafka.controller.ReplicaStateMachine) [2019-02-16 19:28:41,021] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka1:19092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) [2019-02-16 19:28:41,022] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.PartitionStateMachine) [2019-02-16 19:28:41,035] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.PartitionStateMachine) [2019-02-16 19:28:41,039] INFO [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> Map() (kafka.controller.PartitionStateMachine) [2019-02-16 19:28:41,044] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) [2019-02-16 19:28:41,047] INFO [Controller id=1] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController) [2019-02-16 19:28:41,061] INFO [Controller id=1] No more partitions need to be reassigned. 
Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController) [2019-02-16 19:28:41,054] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) [2019-02-16 19:28:41,096] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) [2019-02-16 19:28:41,102] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) [2019-02-16 19:28:41,102] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) [2019-02-16 19:28:41,102] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) [2019-02-16 19:28:41,102] INFO [Controller id=1] Starting preferred replica leader election for partitions (kafka.controller.KafkaController) [2019-02-16 19:28:41,110] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) [2019-02-16 19:28:41,123] INFO [SocketServer brokerId=1] Started processors for 2 acceptors (kafka.network.SocketServer) [2019-02-16 19:28:41,126] INFO Kafka version : 2.1.0-cp1 (org.apache.kafka.common.utils.AppInfoParser) [2019-02-16 19:28:41,137] INFO Kafka commitId : bda8715f42a1a3db (org.apache.kafka.common.utils.AppInfoParser) [2019-02-16 19:28:41,163] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) [2019-02-16 19:28:41,183] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter) [2019-02-16 19:28:41,196] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter) [2019-02-16 19:28:41,201] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter) [2019-02-16 19:28:42,183] WARN The replication factor of topic __confluent.support.metrics will be set to 1, which is less than the desired replication factor of 3 (reason: this cluster contains only 1 brokers). If you happen to add more brokers to this cluster, then it is important to increase the replication factor of the topic to eventually 3 to ensure reliable and durable metrics collection. 
(io.confluent.support.metrics.common.kafka.KafkaUtilities) [2019-02-16 19:28:42,193] INFO Attempting to create topic __confluent.support.metrics with 1 replicas, assuming 1 total brokers (io.confluent.support.metrics.common.kafka.KafkaUtilities) [2019-02-16 19:28:42,359] INFO Topic creation Map(__confluent.support.metrics-0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) [2019-02-16 19:28:42,448] INFO [Controller id=1] New topics: [Set(__confluent.support.metrics)], deleted topics: [Set()], new partition replica assignment [Map(__confluent.support.metrics-0 -> Vector(1))] (kafka.controller.KafkaController) [2019-02-16 19:28:42,455] INFO [Controller id=1] New partition creation callback for __confluent.support.metrics-0 (kafka.controller.KafkaController) [2019-02-16 19:28:42,808] INFO ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [] buffer.memory = 33554432 client.dns.lookup = default client.id = compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 10000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig) [2019-02-16 19:28:42,960] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms. 
(org.apache.kafka.clients.producer.KafkaProducer) [2019-02-16 19:28:42,972] ERROR Could not submit metrics to Kafka topic __confluent.support.metrics: Failed to construct kafka producer (io.confluent.support.metrics.BaseMetricsReporter) [2019-02-16 19:28:42,978] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(__confluent.support.metrics-0) (kafka.server.ReplicaFetcherManager) [2019-02-16 19:28:43,786] ERROR Error while creating log for __confluent.support.metrics-0 in dir /var/lib/kafka/data (kafka.server.LogDirFailureChannel) java.io.IOException: Invalid argument at sun.nio.ch.FileChannelImpl.map0(Native Method) at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:926) at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126) at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54) at kafka.log.LogSegment$.open(LogSegment.scala:634) at kafka.log.Log.loadSegments(Log.scala:542) at kafka.log.Log.<init>(Log.scala:276) at kafka.log.Log$.apply(Log.scala:2071) at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:691) at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:659) at scala.Option.getOrElse(Option.scala:121) at kafka.log.LogManager.getOrCreateLog(LogManager.scala:659) at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:199) at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:195) at kafka.utils.Pool$$anon$2.apply(Pool.scala:61) at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) at kafka.utils.Pool.getAndMaybePut(Pool.scala:60) at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:194) at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373) at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:373) at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:367) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251) at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259) at kafka.cluster.Partition.makeLeader(Partition.scala:367) at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1162) at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1160) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:130) at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1160) at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1072) at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:192) at kafka.server.KafkaApis.handle(KafkaApis.scala:117) at 
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) at java.lang.Thread.run(Thread.java:748) [2019-02-16 19:28:43,852] INFO [ReplicaManager broker=1] Stopping serving replicas in dir /var/lib/kafka/data (kafka.server.ReplicaManager) [2019-02-16 19:28:43,857] ERROR [Broker id=1] Skipped the become-leader state change with correlation id 1 from controller 1 epoch 1 for partition __confluent.support.metrics-0 (last update controller epoch 1) since the replica for the partition is offline due to disk error org.apache.kafka.common.errors.KafkaStorageException: Error while creating log for __confluent.support.metrics-0 in dir /var/lib/kafka/data (state.change.logger) [2019-02-16 19:28:43,869] ERROR [ReplicaManager broker=1] Error while making broker the leader for partition Topic: __confluent.support.metrics; Partition: 0; Leader: None; AllReplicas: ; InSyncReplicas: in dir None (kafka.server.ReplicaManager) org.apache.kafka.common.errors.KafkaStorageException: Error while creating log for __confluent.support.metrics-0 in dir /var/lib/kafka/data Caused by: java.io.IOException: Invalid argument at sun.nio.ch.FileChannelImpl.map0(Native Method) at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:926) at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126) at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54) at kafka.log.LogSegment$.open(LogSegment.scala:634) at kafka.log.Log.loadSegments(Log.scala:542) at kafka.log.Log.<init>(Log.scala:276) at kafka.log.Log$.apply(Log.scala:2071) at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:691) at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:659) at scala.Option.getOrElse(Option.scala:121) at kafka.log.LogManager.getOrCreateLog(LogManager.scala:659) at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:199) at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:195) at kafka.utils.Pool$$anon$2.apply(Pool.scala:61) at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) at kafka.utils.Pool.getAndMaybePut(Pool.scala:60) at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:194) at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373) at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:373) at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:367) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251) at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259) at kafka.cluster.Partition.makeLeader(Partition.scala:367) at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1162) at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1160) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130) at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:130) at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1160) at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1072) at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:192) at kafka.server.KafkaApis.handle(KafkaApis.scala:117) at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) at java.lang.Thread.run(Thread.java:748) [2019-02-16 19:28:43,955] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set() (kafka.server.ReplicaFetcherManager) [2019-02-16 19:28:43,976] INFO [ReplicaAlterLogDirsManager on broker 1] Removed fetcher for partitions Set() (kafka.server.ReplicaAlterLogDirsManager) [2019-02-16 19:28:43,992] INFO [Controller id=1] Mark replicas __confluent.support.metrics-0 on broker 1 as offline (state.change.logger) [2019-02-16 19:28:44,059] INFO [ReplicaManager broker=1] Broker 1 stopped fetcher for partitions and stopped moving logs for partitions because they are in the failed log directory /var/lib/kafka/data. (kafka.server.ReplicaManager) [2019-02-16 19:28:44,108] INFO Stopping serving logs in dir /var/lib/kafka/data (kafka.log.LogManager) [2019-02-16 19:28:44,126] ERROR Shutdown broker because all log dirs in /var/lib/kafka/data have failed (kafka.log.LogManager)
Leaving this here while I debug the issue in case someone else has run into this.
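The key failure appears to be the java.io.IOException: Invalid argument thrown from FileChannelImpl.map when Kafka tries to mmap an offset index under /var/lib/kafka/data. One thing I plan to check (just a guess, assuming the data dir is a volume coming from the boot2docker VM) is what filesystem actually backs that directory, since some shared or remote filesystems don't support mmap. Adjust the container name to whatever docker ps shows:

# Filesystem type behind the Kafka data dir, as seen from inside the broker container
docker exec kafka1 df -hT /var/lib/kafka/data
docker exec kafka1 mount | grep -i kafka
# Same question from the docker-machine VM itself (machine name 'dev' from the command above)
docker-machine ssh dev df -hT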
➜ kafka-stack-docker-compose git:(master) ✗ docker-machine --version
docker-machine version 0.16.1, build cce350d7
➜ kafka-stack-docker-compose git:(master) ✗ docker-compose --version
docker-compose version 1.23.2, build 1110ad01
➜ kafka-stack-docker-compose git:(master) ✗ docker --version
Docker version 18.09.2, build 6247962
Closing. I'm not sure what changed, but rebooting the host solved the issue...
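In case it helps anyone else on docker-machine: I can't say for certain whether "host" here means the laptop or the boot2docker VM. If it's the VM, restarting it, re-exporting its IP, and bringing the stack back up would look roughly like this (machine name 'dev' as in the command at the top; a sketch, not a verified fix):

# Restart the VM, point the shell back at it, and bring the stack up again
docker-machine restart dev
eval $(docker-machine env dev)
export DOCKER_HOST_IP=$(docker-machine ip dev)
docker-compose -f full-stack.yml up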