This is sometimes a bit complicated, because every Kubernetes distribution works a bit differently on different platforms. But let's try.
Can you run the following command and provide the output?
kubectl get nodes -o yaml
And can you either get the pod log with kubectl logs (making sure the log is from the start of the pod), which at the beginning should contain the configuration of the broker, with something similar to this:
# Listeners
listeners=REPLICATION://0.0.0.0:9091,CLIENT://0.0.0.0:9092,CLIENTTLS://0.0.0.0:9093,EXTERNAL://0.0.0.0:9094
advertised.listeners=REPLICATION://my-cluster-kafka-0.my-cluster-kafka-brokers.icdc-amqstreams-ga.svc.cluster.local:9091,CLIENT://my-cluster-kafka-0.my-cluster-kafka-brokers.icdc-amqstreams-ga.svc.cluster.local:9092,CLIENTTLS://my-cluster-kafka-0.my-cluster-kafka-brokers.icdc-amqstreams-ga.svc.cluster.local:9093,EXTERNAL://<someIP>:<somePort>
listener.security.protocol.map=REPLICATION:SSL,CLIENT:PLAINTEXT,CLIENTTLS:SSL,EXTERNAL:SSL
inter.broker.listener.name=REPLICATION
or the content of /tmp/strimzi.properties from inside the broker container? That might help us to move forward.
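If it is easier, something like this should pull out just the relevant bits (a rough sketch only; the kafka namespace and the my-cluster-kafka-0 pod name are placeholders and need to match your deployment):

# node names and addresses only, instead of the full YAML
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses}{"\n"}{end}'

# only the broker configuration block printed at the start of the pod log
kubectl -n kafka logs my-cluster-kafka-0 -c kafka | sed -n '/Starting Kafka with configuration:/,/^OpenJDK/p'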
I could clean up some of it, but I left it intact for the sake of providing the entire log. BTW, I deleted the namespace and reinstalled everything, which is why the NodePort IPs are different from the ones in the original question.
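For reference, the NodePort that the external listener advertises can also be read directly from the external bootstrap service, e.g. (a sketch; assuming the usual Strimzi my-cluster-kafka-external-bootstrap service name in the kafka namespace):

kubectl -n kafka get svc my-cluster-kafka-external-bootstrap -o jsonpath='{.spec.ports[0].nodePort}'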
This is the output of kubectl get nodes -o yaml:
$ kubectl get nodes -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: Node
metadata:
annotations:
node.alpha.kubernetes.io/ttl: "0"
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: 2018-11-24T16:51:34Z
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/hostname: docker-for-desktop
node-role.kubernetes.io/master: ""
name: docker-for-desktop
namespace: ""
resourceVersion: "519821"
selfLink: /api/v1/nodes/docker-for-desktop
uid: 3345c11b-f009-11e8-907c-00155d2b1218
spec:
externalID: docker-for-desktop
status:
addresses:
- address: 192.168.65.3
type: InternalIP
- address: docker-for-desktop
type: Hostname
allocatable:
cpu: "3"
ephemeral-storage: "56829582857"
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 8043448Ki
pods: "110"
capacity:
cpu: "3"
ephemeral-storage: 61664044Ki
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 8145848Ki
pods: "110"
conditions:
- lastHeartbeatTime: 2018-12-12T11:17:43Z
lastTransitionTime: 2018-12-10T07:19:04Z
message: kubelet has sufficient disk space available
reason: KubeletHasSufficientDisk
status: "False"
type: OutOfDisk
- lastHeartbeatTime: 2018-12-12T11:17:43Z
lastTransitionTime: 2018-12-10T07:19:04Z
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
status: "False"
type: MemoryPressure
- lastHeartbeatTime: 2018-12-12T11:17:43Z
lastTransitionTime: 2018-12-10T07:19:04Z
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
status: "False"
type: DiskPressure
- lastHeartbeatTime: 2018-12-12T11:17:43Z
lastTransitionTime: 2018-11-24T16:51:21Z
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
status: "False"
type: PIDPressure
- lastHeartbeatTime: 2018-12-12T11:17:43Z
lastTransitionTime: 2018-12-10T07:19:04Z
message: kubelet is posting ready status
reason: KubeletReady
status: "True"
type: Ready
daemonEndpoints:
kubeletEndpoint:
Port: 10250
images:
- names:
- lonyele/strimzi-kafka-producer@sha256:6b53dcaa5d7f26fd837267d53ddb4be7cc526dca4a4b8f70b324718c7ed3845e
- lonyele/strimzi-kafka-producer:latest
sizeBytes: 985388704
- names:
- lonyele/strimzi-kafka-producer@sha256:373fde09266189f43d405739114fe912598bfdd5fe27a90d4df55013122ebdf4
sizeBytes: 985387976
- names:
- lonyele/node-rdkafka-producer@sha256:9a13626dedeceb317680148df3ad4f267026869809bffbb2831b5f17a96afdad
- lonyele/node-rdkafka-producer:latest
sizeBytes: 985376910
- names:
- lonyele/node-rdkafka-producer@sha256:468748a6e67de609ec98be6706f2e96c855f0c305027ca563440ca82686004ab
sizeBytes: 985376821
- names:
- lonyele/node-rakafka-producer:latest
sizeBytes: 985376821
- names:
- lonyele/node-rdkafka-producer@sha256:390539cf0df8fdd6f340e923eef7f167a7c98ab3fb9493ae1a4b398f5c68e72b
sizeBytes: 985376786
- names:
- lonyele/test-kafka-producer@sha256:b94fec35b73a0f2d18a1701798bc2c92cfcee90722abef944f6d4bd89298ffd0
- lonyele/test-kafka-producer:latest
sizeBytes: 985376666
- names:
- lonyele/node-kafka-consumer@sha256:5e7d022e93eb1fb9a78d69982ee50ade4f68c20193d20a398c35dc54f0b71d6f
- lonyele/node-kafka-consumer:latest
sizeBytes: 985374543
- names:
- lonyele/node-kafka-consumer@sha256:0212192dd214f453651a09fcc3bda1b66718597f9922135a43b8f06f154e8dbd
sizeBytes: 985374540
- names:
- lonyele/node-kafka-consumer@sha256:f5d7afdd47b0442e2f5beb2d1518f618578c08eee9266f2af995744c854adcc4
sizeBytes: 985374422
- names:
- lonyele/node-kafka-consumer@sha256:ea82afdb4ff8a787bc3fdfb6538342bd0e7564e394a6329c23a810e2b1a0a73a
sizeBytes: 985374407
- names:
- lonyele/node-kafka-consumer@sha256:6440f58721f00d1e28576f52ef66a4f5525bba741d3f146b9845e064aee338b5
sizeBytes: 985374405
- names:
- lonyele/node-kafka-consumer@sha256:a0419f884349ec6e5639f4654e0766270e43861342ff305bfd2f0f1408b07658
sizeBytes: 985374391
- names:
- node@sha256:262a3b968df75a8867e72b483a63cc6b62ef63f7a251770dea9fcc37d31a9877
- node:10
sizeBytes: 893531240
- names:
- landoop/fast-data-dev@sha256:9acf55c4fa6146b34c9e4e428eaf3f240af57f875e24856360e84912cbd09aee
- landoop/fast-data-dev:latest
sizeBytes: 873491470
- names:
- nodefluent/kafka-rest@sha256:dae3a9dbfa4c271e49860c6e6b5c05499d7fed4f65df93f944bffe3af80756ca
- nodefluent/kafka-rest:latest
sizeBytes: 816144568
- names:
- 4daa61dac135c2a9f5abb54347a0e1e3:latest
- gcr.io/k8s-skaffold/node-example:f1b909ef-dirty-ff9613c
sizeBytes: 672663051
- names:
- gcr.io/k8s-skaffold/nodemon@sha256:4a61e2eff28efd94e4e7b3f7c95be712c7943ab6e1083bb0ef0d4e66177400bc
- gcr.io/k8s-skaffold/nodemon:latest
sizeBytes: 672662683
- names:
- lonyele/test-nodejs:latest
sizeBytes: 616483070
- names:
- gcr.io/google_appengine/nodejs@sha256:483f3ac46c4814ab524c42c9fd0efdfbdb81f848e14a1c06ee6b74aabb30021d
- gcr.io/google_appengine/nodejs:latest
sizeBytes: 479806052
- names:
- wurstmeister/zookeeper@sha256:2d71f9e0cb3440d552ee31769a00eb813be87397bf535995aca0bd4eadc151fc
- wurstmeister/zookeeper:latest
sizeBytes: 478344977
- names:
- nodefluent/kafka-rest-ui@sha256:8a18fa6ef3ce54a367aee3fb160918fce16efc0c1430b712d39cd52d158e79d3
- nodefluent/kafka-rest-ui:latest
sizeBytes: 433193457
- names:
- bitnami/kafka@sha256:90d59bd36780103bc25b1d2f6ce0d897a09ffc8cddd7f88d11cad0b50fd3f70c
sizeBytes: 429605454
- names:
- bitnami/kafka@sha256:40b14145e09c76cbca699b7b5352e5506a2a522019157397ad6189f9e31d8e90
- bitnami/kafka:2.1.0
sizeBytes: 429605454
- names:
- bitnami/kafka@sha256:5d55754034616ba1cabfe812e82688c212769557769b97861a8c975f9ec704e3
sizeBytes: 429605454
- names:
- bitnami/kafka@sha256:5c06cc69d7b34b3cfea2171d2c2986d0cdb2b49628f0cdf2c3142c14330f1179
sizeBytes: 429553536
- names:
- bitnami/kafka@sha256:d1a4481826f6e7eaed37035643da5c88e43e5828559fa8856df4b5e06d88fb56
sizeBytes: 429553445
- names:
- bitnami/kafka@sha256:390880ce934cabae7d9bb7c1e724cf7ee38363549346a9393c24a56f35c28d05
- bitnami/kafka:2.0.1
sizeBytes: 423267771
- names:
- bitnami/zookeeper@sha256:fd8c4ff3f7999fddaf78247d80349cd990e1bd0d19b1952710a36c42cb6701e3
sizeBytes: 422574199
- names:
- bitnami/zookeeper@sha256:b85c4a82152c7181f3822c891b36f74cf548bc36d81e63e51c3795c704c43ccf
- bitnami/zookeeper:3.4.12-debian-9
sizeBytes: 422574199
- names:
- bitnami/zookeeper@sha256:70ac693a33da666808f6011b665c660cd90b4da21e625e44891534207a552c90
sizeBytes: 422522281
- names:
- bitnami/zookeeper@sha256:36ebf9249cc38c3fca46bf15528128dea34cd3992a237925093769b94255f46e
sizeBytes: 422522190
- names:
- bitnami/zookeeper@sha256:005f020c3853fa13d1934e12819caf595e0b4deae2bdae2edea3a512ddf4e5f5
sizeBytes: 422522190
- names:
- strimzi/kafka@sha256:b57783a946353f1a3328f5ab2ece3a0af61426ae9396d8a221a4fff650d2eeb4
- strimzi/kafka:latest
sizeBytes: 417050411
- names:
- strimzi/kafka@sha256:7d2c692a198a2f0a27eba471db6a9bdd4ec3f8f68157edf3ecec633b86c93593
- strimzi/kafka:0.8.2
sizeBytes: 417007383
- names:
- strimzi/zookeeper@sha256:671359d4cda9829da3a05f4618503b6489ade9daad15ad84bcd4d87feb9784ff
- strimzi/zookeeper:0.8.2
sizeBytes: 416999586
- names:
- strimzi/kafka@sha256:91e7e2efc768540a9be2c98fd912eb80b287cfca8b5eb8daf19dca7caf60065c
- strimzi/kafka:0.8.1
sizeBytes: 416917327
- names:
- strimzi/user-operator@sha256:a1779e16f55a647684607e552430bc231bd7cd63d1c02b059fc29e02331d4baa
- strimzi/user-operator:0.8.2
sizeBytes: 395305800
- names:
- strimzi/topic-operator@sha256:7a462aab2dd119f86909c50dbf2415c3f8b4070c841646de548ba026e6386def
- strimzi/topic-operator:0.8.2
sizeBytes: 382930476
- names:
- <none>@<none>
- <none>:<none>
sizeBytes: 377645817
- names:
- <none>@<none>
- <none>:<none>
sizeBytes: 377645781
- names:
- <none>@<none>
- <none>:<none>
sizeBytes: 377645760
- names:
- strimzi/cluster-operator@sha256:da4929bbb22c166f506aaff5ab8bff0a1e2f4ba861ba088e822ec31cb16d2b70
- strimzi/cluster-operator:0.8.2
sizeBytes: 377233427
- names:
- golang@sha256:356aea725be911d52e0f2f0344a17ac3d97c54c74d50b8561f58eae6cc0871bf
- golang:1.10.1-alpine3.7
sizeBytes: 375630164
- names:
- strimzi/kafka-init@sha256:9d54be433e78f71b7f365daee9c5319e21284a4b3c4ae6afe0193b27c15c3ba2
- strimzi/kafka-init:0.8.2
sizeBytes: 371840160
- names:
- strimzi/hello-world-consumer@sha256:dc5d48079831d38abc938d7d7eaaf45292eff0e2da0338a21dce794be6604c76
- strimzi/hello-world-consumer:latest
sizeBytes: 363839208
- names:
- strimzi/hello-world-producer@sha256:24ea8f644a7870118ec8e8c9923d1dda07448bd1db39c7aeb52526d9a9622421
- strimzi/hello-world-producer:latest
sizeBytes: 363838844
- names:
- solsson/kafka@sha256:7fdb326994bcde133c777d888d06863b7c1a0e80f043582816715d76643ab789
sizeBytes: 274221668
- names:
- wurstmeister/kafka@sha256:62d676b64e77f9ca63276fac126d56b68125701143c578941638159fc5b9319e
- wurstmeister/kafka:0.10.2.1
sizeBytes: 264953403
- names:
- k8s.gcr.io/kube-apiserver-amd64@sha256:a6c4b6b2429d0a15d30a546226e01b1164118e022ad40f3ece2f95126f1580f5
- k8s.gcr.io/kube-apiserver-amd64:v1.10.3
sizeBytes: 225089833
nodeInfo:
architecture: amd64
bootID: 30abe3b6-3067-4c12-9af7-8546f9bfc846
containerRuntimeVersion: docker://18.9.0
kernelVersion: 4.9.125-linuxkit
kubeProxyVersion: v1.10.3
kubeletVersion: v1.10.3
machineID: ""
operatingSystem: linux
osImage: Docker for Windows
systemUUID: 0BA3E355-F1B2-44F2-83D3-0D3E650A823D
kind: List
metadata:
resourceVersion: ""
selfLink: ""
This is the log from one of the Kafka broker pods:
$ kubectl -n kafka logs my-cluster-kafka-0 -c kafka
KAFKA_BROKER_ID=0
KAFKA_LOG_DIRS=/var/lib/kafka/kafka-log0
Preparing truststore for replication listener
Adding /opt/kafka/cluster-ca-certs/ca.crt to truststore /tmp/kafka/cluster.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for replication listener is complete
Preparing keystore for replication and clienttls listener
Preparing keystore for replication and clienttls listener is complete
Preparing truststore for clienttls listener
Adding /opt/kafka/client-ca-certs/ca.crt to truststore /tmp/kafka/clients.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for clienttls listener is complete
Starting Kafka with configuration:
broker.id=0
broker.rack=
# Listeners
listeners=REPLICATION://0.0.0.0:9091,CLIENT://0.0.0.0:9092,CLIENTTLS://0.0.0.0:9093,EXTERNAL://0.0.0.0:9094
advertised.listeners=REPLICATION://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091,CLIENT://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9092,CLIENTTLS://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9093,EXTERNAL://192.168.65.3:31227
listener.security.protocol.map=REPLICATION:SSL,CLIENT:PLAINTEXT,CLIENTTLS:SSL,EXTERNAL:PLAINTEXT
inter.broker.listener.name=REPLICATION
# Zookeeper
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
# Logs
log.dirs=/var/lib/kafka/kafka-log0
# TLS / SSL
ssl.keystore.password=Zu1QA4PBlUHoHlBrJS0D4jyfHIawjF9N
ssl.truststore.password=Zu1QA4PBlUHoHlBrJS0D4jyfHIawjF9N
ssl.keystore.type=PKCS12
ssl.truststore.type=PKCS12
ssl.endpoint.identification.algorithm=HTTPS
ssl.secure.random.implementation=SHA1PRNG
listener.name.replication.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.replication.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
listener.name.replication.ssl.client.auth=required
sasl.enabled.mechanisms=
# TLS interface configuration
listener.name.clienttls.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.clienttls.ssl.truststore.location=/tmp/kafka/clients.truststore.p12
# CLIENTTLS listener authentication
listener.name.clienttls.ssl.client.auth=none
# Authorization configuration
authorizer.class.name=
# Provided configuration
transaction.state.log.replication.factor=3
offsets.topic.replication.factor=3
transaction.state.log.min.isr=2
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
2018-12-12T10:55:28.743+0000: 0.411: [GC pause (G1 Evacuation Pause) (young), 0.0063991 secs]
[Parallel Time: 5.6 ms, GC Workers: 1]
[GC Worker Start (ms): 411.3]
[Ext Root Scanning (ms): 1.1]
[Update RS (ms): 0.0]
[Processed Buffers: 0]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 4.2]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 5.5]
[GC Worker End (ms): 416.7]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.7 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.3 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 6144.0K(6144.0K)->0.0B(5120.0K) Survivors: 0.0B->1024.0K Heap: 6144.0K(128.0M)->2151.5K(128.0M)]
[Times: user=0.01 sys=0.00, real=0.00 secs]
2018-12-12 10:55:28,885 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main]
2018-12-12T10:55:28.887+0000: 0.555: [GC pause (G1 Evacuation Pause) (young), 0.0056561 secs]
[Parallel Time: 4.8 ms, GC Workers: 1]
[GC Worker Start (ms): 555.5]
[Ext Root Scanning (ms): 0.9]
[Update RS (ms): 0.1]
[Processed Buffers: 4]
[Scan RS (ms): 1.2]
[Code Root Scanning (ms): 0.2]
[Object Copy (ms): 2.4]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 4.8]
[GC Worker End (ms): 560.2]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.7 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.3 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(5120.0K) Survivors: 1024.0K->1024.0K Heap: 7271.5K(128.0M)->4096.0K(128.0M)]
[Times: user=0.00 sys=0.00, real=0.01 secs]
2018-12-12T10:55:29.009+0000: 0.677: [GC pause (G1 Evacuation Pause) (young), 0.0030735 secs]
[Parallel Time: 2.3 ms, GC Workers: 1]
[GC Worker Start (ms): 676.8]
[Ext Root Scanning (ms): 0.8]
[Update RS (ms): 0.2]
[Processed Buffers: 7]
[Scan RS (ms): 0.1]
[Code Root Scanning (ms): 0.0]
[Object Copy (ms): 1.1]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 2.2]
[GC Worker End (ms): 679.0]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 0.8 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.6 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(5120.0K) Survivors: 1024.0K->1024.0K Heap: 9216.0K(128.0M)->5120.0K(128.0M)]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12T10:55:29.142+0000: 0.809: [GC pause (G1 Evacuation Pause) (young), 0.0028066 secs]
[Parallel Time: 2.1 ms, GC Workers: 1]
[GC Worker Start (ms): 809.5]
[Ext Root Scanning (ms): 0.6]
[Update RS (ms): 0.3]
[Processed Buffers: 6]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 1.1]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 2.1]
[GC Worker End (ms): 811.6]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.6 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.4 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(5120.0K) Survivors: 1024.0K->1024.0K Heap: 10.0M(128.0M)->6144.0K(128.0M)]
[Times: user=0.00 sys=0.00, real=0.00 secs]
2018-12-12T10:55:29.284+0000: 0.952: [GC pause (G1 Evacuation Pause) (young), 0.0031785 secs]
[Parallel Time: 2.6 ms, GC Workers: 1]
[GC Worker Start (ms): 951.8]
[Ext Root Scanning (ms): 1.2]
[Update RS (ms): 0.6]
[Processed Buffers: 7]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.0]
[Object Copy (ms): 0.7]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 2.6]
[GC Worker End (ms): 954.4]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.5 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.3 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(5120.0K) Survivors: 1024.0K->1024.0K Heap: 11.0M(128.0M)->6656.0K(128.0M)]
[Times: user=0.00 sys=0.00, real=0.00 secs]
2018-12-12T10:55:29.384+0000: 1.052: [GC pause (G1 Evacuation Pause) (young), 0.0028180 secs]
[Parallel Time: 2.1 ms, GC Workers: 1]
[GC Worker Start (ms): 1052.0]
[Ext Root Scanning (ms): 0.7]
[Update RS (ms): 0.2]
[Processed Buffers: 6]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 0.9]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 2.0]
[GC Worker End (ms): 1054.0]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.7 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.5 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(5120.0K) Survivors: 1024.0K->1024.0K Heap: 11.5M(128.0M)->7623.5K(128.0M)]
[Times: user=0.00 sys=0.00, real=0.00 secs]
2018-12-12 10:55:29,503 INFO starting (kafka.server.KafkaServer) [main]
2018-12-12 10:55:29,504 INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer) [main]
2018-12-12T10:55:29.512+0000: 1.179: [GC pause (G1 Evacuation Pause) (young), 0.0037071 secs]
[Parallel Time: 3.1 ms, GC Workers: 1]
[GC Worker Start (ms): 1179.6]
[Ext Root Scanning (ms): 0.9]
[Update RS (ms): 0.4]
[Processed Buffers: 6]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 1.5]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 2.9]
[GC Worker End (ms): 1182.5]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.2 ms]
[Other: 0.4 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.2 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(5120.0K) Survivors: 1024.0K->1024.0K Heap: 12.4M(128.0M)->8135.5K(128.0M)]
[Times: user=0.00 sys=0.00, real=0.00 secs]
2018-12-12 10:55:29,532 INFO [ZooKeeperClient] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient) [main]
2018-12-12 10:55:29,536 INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,536 INFO Client environment:host.name=my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,536 INFO Client environment:java.version=1.8.0_191 (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-2.0.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.0.0.jar:/opt/kafka/bin/../libs/connect-file-2.0.0.jar:/opt/kafka/bin/../libs/connect-json-2.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b42.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.6.jar:/opt/kafka/bin/../libs/jackson-core-2.9.6.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.6.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.6.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.6.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.6.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b42.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.27.jar:/opt/kafka/bin/../libs/jersey-common-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.27.jar:/opt/kafka/bin/../libs/jersey-hk2-2.27.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.27.jar:/opt/kafka/bin/../libs/jersey-server-2.27.jar:/opt/kafka/bin/../libs/jetty-client-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-http-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-io-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-security-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-server-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-util-9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.12-2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-2.0.0.jar:/opt/kafka/bin/../libs/kafka_2.12-2.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-2.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.1.0.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.12.6.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.9.0.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.6.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.1.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkc
lient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.13.jar (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:os.version=4.9.125-linuxkit (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:user.name=kafka (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,537 INFO Client environment:user.home=/home/kafka (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,538 INFO Client environment:user.dir=/opt/kafka (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,539 INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@433d61fb (org.apache.zookeeper.ZooKeeper) [main]
2018-12-12 10:55:29,551 INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) [main]
2018-12-12 10:55:29,551 INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [main-SendThread(localhost:2181)]
2018-12-12 10:55:29,557 INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn) [main-SendThread(localhost:2181)]
2018-12-12 10:55:30,167 INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x200055bf3660000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn) [main-SendThread(localhost:2181)]
2018-12-12 10:55:30,184 INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient) [main]
2018-12-12T10:55:30.272+0000: 1.940: [GC pause (G1 Evacuation Pause) (young), 0.0128789 secs]
[Parallel Time: 9.3 ms, GC Workers: 1]
[GC Worker Start (ms): 1939.9]
[Ext Root Scanning (ms): 1.7]
[Update RS (ms): 5.8]
[Processed Buffers: 9]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.2]
[Object Copy (ms): 1.5]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 9.2]
[GC Worker End (ms): 1949.1]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 2.1 ms]
[Other: 1.4 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 1.1 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(5120.0K) Survivors: 1024.0K->1024.0K Heap: 12.9M(128.0M)->8653.5K(128.0M)]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12T10:55:31.192+0000: 2.860: [GC pause (G1 Evacuation Pause) (young), 0.0046197 secs]
[Parallel Time: 3.4 ms, GC Workers: 1]
[GC Worker Start (ms): 2860.7]
[Ext Root Scanning (ms): 1.3]
[Update RS (ms): 0.5]
[Processed Buffers: 9]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 1.5]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 3.4]
[GC Worker End (ms): 2864.1]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 1.2 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.5 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(9216.0K) Survivors: 1024.0K->1024.0K Heap: 13.5M(128.0M)->10.4M(128.0M)]
[Times: user=0.01 sys=0.00, real=0.00 secs]
2018-12-12T10:55:31.327+0000: 2.995: [GC pause (G1 Evacuation Pause) (young), 0.0138971 secs]
[Parallel Time: 5.0 ms, GC Workers: 1]
[GC Worker Start (ms): 2995.2]
[Ext Root Scanning (ms): 1.0]
[Update RS (ms): 1.6]
[Processed Buffers: 7]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 2.2]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 5.0]
[GC Worker End (ms): 3000.2]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 8.9 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 8.3 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 9216.0K(9216.0K)->0.0B(4096.0K) Survivors: 1024.0K->2048.0K Heap: 19.4M(128.0M)->13.0M(128.0M)]
[Times: user=0.00 sys=0.00, real=0.02 secs]
2018-12-12T10:55:31.374+0000: 3.042: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0202941 secs]
[Parallel Time: 20.0 ms, GC Workers: 1]
[GC Worker Start (ms): 3042.1]
[Ext Root Scanning (ms): 16.6]
[Update RS (ms): 0.5]
[Processed Buffers: 6]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 2.7]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 19.9]
[GC Worker End (ms): 3062.0]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 0.3 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.1 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 1024.0K(4096.0K)->0.0B(5120.0K) Survivors: 2048.0K->1024.0K Heap: 13.9M(128.0M)->13.5M(128.0M)]
[Times: user=0.00 sys=0.00, real=0.02 secs]
2018-12-12T10:55:31.395+0000: 3.062: [GC concurrent-root-region-scan-start]
2018-12-12T10:55:31.512+0000: 3.180: [GC concurrent-root-region-scan-end, 0.1177488 secs]
2018-12-12T10:55:31.512+0000: 3.180: [GC concurrent-mark-start]
2018-12-12T10:55:31.522+0000: 3.190: [GC concurrent-mark-end, 0.0096387 secs]
2018-12-12T10:55:31.522+0000: 3.190: [GC remark 2018-12-12T10:55:31.522+0000: 3.190: [Finalize Marking, 0.0003021 secs] 2018-12-12T10:55:31.523+0000: 3.190: [GC ref-proc, 0.0002227 secs] 2018-12-12T10:55:31.523+0000: 3.191: [Unloading, 0.0053203 secs], 0.0060244 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
2018-12-12T10:55:31.528+0000: 3.196: [GC cleanup 16M->13M(128M), 0.0005018 secs]
[Times: user=0.00 sys=0.00, real=0.00 secs]
2018-12-12T10:55:31.529+0000: 3.197: [GC concurrent-cleanup-start]
2018-12-12T10:55:31.529+0000: 3.197: [GC concurrent-cleanup-end, 0.0000251 secs]
2018-12-12 10:55:31,601 INFO Cluster ID = 3zE9rl3NQ-yGtReLJMuPag (kafka.server.KafkaServer) [main]
2018-12-12 10:55:31,628 WARN No meta.properties file under dir /var/lib/kafka/kafka-log0/meta.properties (kafka.server.BrokerMetadataCheckpoint) [main]
2018-12-12T10:55:31.737+0000: 3.405: [GC pause (G1 Evacuation Pause) (young), 0.0052972 secs]
[Parallel Time: 4.5 ms, GC Workers: 1]
[GC Worker Start (ms): 3404.9]
[Ext Root Scanning (ms): 1.5]
[Update RS (ms): 1.9]
[Processed Buffers: 11]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.0]
[Object Copy (ms): 1.0]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 4.4]
[GC Worker End (ms): 3409.3]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.8 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.2 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(15.0M) Survivors: 1024.0K->1024.0K Heap: 15.5M(128.0M)->11.5M(128.0M)]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12 10:55:31,788 INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = REPLICATION://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091,CLIENT://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9092,CLIENTTLS://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9093,EXTERNAL://192.168.65.3:31227
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack =
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = REPLICATION
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = REPLICATION:SSL,CLIENT:PLAINTEXT,CLIENTTLS:SSL,EXTERNAL:PLAINTEXT
listeners = REPLICATION://0.0.0.0:9091,CLIENT://0.0.0.0:9092,CLIENTTLS://0.0.0.0:9093,EXTERNAL://0.0.0.0:9094
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/kafka-log0
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = []
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = HTTPS
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = [hidden]
ssl.keystore.type = PKCS12
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = SHA1PRNG
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig) [main]
2018-12-12 10:55:31,847 INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = REPLICATION://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091,CLIENT://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9092,CLIENTTLS://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9093,EXTERNAL://192.168.65.3:31227
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack =
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = REPLICATION
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = REPLICATION:SSL,CLIENT:PLAINTEXT,CLIENTTLS:SSL,EXTERNAL:PLAINTEXT
listeners = REPLICATION://0.0.0.0:9091,CLIENT://0.0.0.0:9092,CLIENTTLS://0.0.0.0:9093,EXTERNAL://0.0.0.0:9094
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/kafka-log0
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = []
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = HTTPS
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = [hidden]
ssl.keystore.type = PKCS12
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = SHA1PRNG
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig) [main]
2018-12-12 10:55:31,945 INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [ThrottledChannelReaper-Fetch]
2018-12-12 10:55:31,946 INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [ThrottledChannelReaper-Produce]
2018-12-12 10:55:31,948 INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [ThrottledChannelReaper-Request]
2018-12-12 10:55:31,977 INFO Log directory /var/lib/kafka/kafka-log0 not found, creating it. (kafka.log.LogManager) [main]
2018-12-12 10:55:32,003 INFO Loading logs. (kafka.log.LogManager) [main]
2018-12-12 10:55:32,022 INFO Logs loading complete in 19 ms. (kafka.log.LogManager) [main]
2018-12-12 10:55:32,039 INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) [main]
2018-12-12 10:55:32,045 INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) [main]
2018-12-12 10:55:32,081 INFO Starting the log cleaner (kafka.log.LogCleaner) [main]
2018-12-12T10:55:32.108+0000: 3.775: [GC pause (G1 Humongous Allocation) (young) (initial-mark), 0.0152001 secs]
[Parallel Time: 14.4 ms, GC Workers: 1]
[GC Worker Start (ms): 3785.3]
[Ext Root Scanning (ms): 1.3]
[Update RS (ms): 1.3]
[Processed Buffers: 10]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 2.1]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 4.8]
[GC Worker End (ms): 3790.0]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 0.7 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.3 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 8192.0K(15.0M)->0.0B(5120.0K) Survivors: 1024.0K->2048.0K Heap: 19.5M(128.0M)->12.5M(128.0M)]
[Times: user=0.02 sys=0.00, real=0.02 secs]
2018-12-12T10:55:32.130+0000: 3.798: [GC concurrent-root-region-scan-start]
2018-12-12T10:55:32.132+0000: 3.799: [GC concurrent-root-region-scan-end, 0.0016010 secs]
2018-12-12T10:55:32.132+0000: 3.799: [GC concurrent-mark-start]
2018-12-12T10:55:32.140+0000: 3.808: [GC concurrent-mark-end, 0.0082671 secs]
2018-12-12T10:55:32.200+0000: 3.868: [GC remark 2018-12-12T10:55:32.200+0000: 3.868: [Finalize Marking, 0.0002128 secs] 2018-12-12T10:55:32.200+0000: 3.868: [GC ref-proc, 0.0008484 secs] 2018-12-12T10:55:32.201+0000: 3.869: [Unloading, 0.0123621 secs], 0.0140533 secs]
[Times: user=0.00 sys=0.00, real=0.01 secs]
2018-12-12T10:55:32.215+0000: 3.882: [GC cleanup 140M->138M(257M), 0.0010543 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
2018-12-12T10:55:32.218+0000: 3.885: [GC concurrent-cleanup-start]
2018-12-12T10:55:32.218+0000: 3.886: [GC concurrent-cleanup-end, 0.0003613 secs]
2018-12-12 10:55:32,221 INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) [kafka-log-cleaner-thread-0]
2018-12-12T10:55:32.251+0000: 3.919: [GC pause (G1 Evacuation Pause) (young), 0.0493805 secs]
[Parallel Time: 41.4 ms, GC Workers: 1]
[GC Worker Start (ms): 3927.6]
[Ext Root Scanning (ms): 1.2]
[Update RS (ms): 0.6]
[Processed Buffers: 7]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 30.6]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 32.5]
[GC Worker End (ms): 3960.1]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 8.0 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 7.7 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 5120.0K(5120.0K)->0.0B(11.0M) Survivors: 2048.0K->1024.0K Heap: 143.0M(257.0M)->138.0M(257.0M)]
[Times: user=0.01 sys=0.00, real=0.05 secs]
2018-12-12T10:55:32.655+0000: 4.323: [GC pause (G1 Evacuation Pause) (young), 0.0102907 secs]
[Parallel Time: 9.3 ms, GC Workers: 1]
[GC Worker Start (ms): 4323.2]
[Ext Root Scanning (ms): 1.6]
[Update RS (ms): 1.5]
[Processed Buffers: 9]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 5.7]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 8.9]
[GC Worker End (ms): 4332.1]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 1.0 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.3 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.2 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 11.0M(11.0M)->0.0B(10.0M) Survivors: 1024.0K->2048.0K Heap: 149.0M(257.0M)->141.0M(257.0M)]
[Times: user=0.00 sys=0.00, real=0.01 secs]
2018-12-12 10:55:32,678 INFO Awaiting socket connections on 0.0.0.0:9091. (kafka.network.Acceptor) [main]
2018-12-12T10:55:32.969+0000: 4.637: [GC pause (G1 Evacuation Pause) (young) (initial-mark), 0.0305433 secs]
[Parallel Time: 24.7 ms, GC Workers: 1]
[GC Worker Start (ms): 4637.2]
[Ext Root Scanning (ms): 7.0]
[Update RS (ms): 2.4]
[Processed Buffers: 8]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 1.6]
[Object Copy (ms): 13.4]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 24.5]
[GC Worker End (ms): 4661.7]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 5.8 ms]
[Choose CSet: 0.1 ms]
[Ref Proc: 5.5 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 10.0M(10.0M)->0.0B(10.0M) Survivors: 2048.0K->2048.0K Heap: 151.0M(257.0M)->142.4M(257.0M)]
[Times: user=0.02 sys=0.00, real=0.04 secs]
2018-12-12T10:55:33.000+0000: 4.668: [GC concurrent-root-region-scan-start]
2018-12-12T10:55:33.002+0000: 4.669: [GC concurrent-root-region-scan-end, 0.0013884 secs]
2018-12-12T10:55:33.002+0000: 4.669: [GC concurrent-mark-start]
2018-12-12T10:55:33.028+0000: 4.696: [GC concurrent-mark-end, 0.0267907 secs]
2018-12-12T10:55:33.029+0000: 4.696: [GC remark 2018-12-12T10:55:33.029+0000: 4.697: [Finalize Marking, 0.0002299 secs] 2018-12-12T10:55:33.029+0000: 4.697: [GC ref-proc, 0.0002271 secs] 2018-12-12T10:55:33.029+0000: 4.697: [Unloading, 0.0089177 secs], 0.0111343 secs]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12T10:55:33.040+0000: 4.708: [GC cleanup 142M->141M(257M), 0.0007028 secs]
[Times: user=0.00 sys=0.00, real=0.00 secs]
2018-12-12T10:55:33.041+0000: 4.709: [GC concurrent-cleanup-start]
2018-12-12T10:55:33.041+0000: 4.709: [GC concurrent-cleanup-end, 0.0000539 secs]
2018-12-12T10:55:33.208+0000: 4.875: [GC pause (G1 Evacuation Pause) (young), 0.0059124 secs]
[Parallel Time: 5.5 ms, GC Workers: 1]
[GC Worker Start (ms): 4875.7]
[Ext Root Scanning (ms): 0.9]
[Update RS (ms): 3.3]
[Processed Buffers: 14]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.0]
[Object Copy (ms): 1.1]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 5.4]
[GC Worker End (ms): 4881.1]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 0.4 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.1 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 10.0M(10.0M)->0.0B(10.0M) Survivors: 2048.0K->2048.0K Heap: 151.4M(257.0M)->141.4M(257.0M)]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12T10:55:33.351+0000: 5.018: [GC pause (G1 Evacuation Pause) (young), 0.0102982 secs]
[Parallel Time: 9.2 ms, GC Workers: 1]
[GC Worker Start (ms): 5018.6]
[Ext Root Scanning (ms): 0.9]
[Update RS (ms): 0.4]
[Processed Buffers: 6]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 7.7]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 9.1]
[GC Worker End (ms): 5027.6]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.2 ms]
[Other: 0.9 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.6 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.1 ms]
[Eden: 10.0M(10.0M)->0.0B(10.0M) Survivors: 2048.0K->2048.0K Heap: 151.4M(257.0M)->141.9M(257.0M)]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12 10:55:33,401 INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor) [main]
2018-12-12 10:55:33,419 INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor) [main]
2018-12-12T10:55:33.513+0000: 5.181: [GC pause (G1 Evacuation Pause) (young) (initial-mark), 0.0123861 secs]
[Parallel Time: 11.9 ms, GC Workers: 1]
[GC Worker Start (ms): 5181.2]
[Ext Root Scanning (ms): 1.8]
[Update RS (ms): 0.7]
[Processed Buffers: 7]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.2]
[Object Copy (ms): 9.1]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 11.8]
[GC Worker End (ms): 5192.9]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 0.5 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.1 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 10.0M(10.0M)->0.0B(10.0M) Survivors: 2048.0K->2048.0K Heap: 151.9M(257.0M)->142.0M(257.0M)]
[Times: user=0.00 sys=0.01, real=0.01 secs]
2018-12-12T10:55:33.529+0000: 5.196: [GC concurrent-root-region-scan-start]
2018-12-12T10:55:33.531+0000: 5.199: [GC concurrent-root-region-scan-end, 0.0025715 secs]
2018-12-12T10:55:33.531+0000: 5.199: [GC concurrent-mark-start]
2018-12-12T10:55:33.622+0000: 5.290: [GC concurrent-mark-end, 0.0911547 secs]
2018-12-12T10:55:33.646+0000: 5.314: [GC remark 2018-12-12T10:55:33.646+0000: 5.314: [Finalize Marking, 0.0036105 secs] 2018-12-12T10:55:33.650+0000: 5.318: [GC ref-proc, 0.0002959 secs] 2018-12-12T10:55:33.650+0000: 5.318: [Unloading, 0.0247312 secs], 0.0350564 secs]
[Times: user=0.01 sys=0.00, real=0.04 secs]
2018-12-12T10:55:33.685+0000: 5.353: [GC cleanup 143M->143M(257M), 0.0006512 secs]
[Times: user=0.00 sys=0.00, real=0.00 secs]
2018-12-12T10:55:33.856+0000: 5.524: [GC pause (G1 Evacuation Pause) (young), 0.0040207 secs]
[Parallel Time: 3.8 ms, GC Workers: 1]
[GC Worker Start (ms): 5524.3]
[Ext Root Scanning (ms): 1.3]
[Update RS (ms): 0.4]
[Processed Buffers: 7]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 1.9]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 3.7]
[GC Worker End (ms): 5528.0]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.0 ms]
[Other: 0.2 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.0 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 10.0M(10.0M)->0.0B(16.0M) Survivors: 2048.0K->2048.0K Heap: 152.0M(257.0M)->142.6M(257.0M)]
[Times: user=0.00 sys=0.00, real=0.01 secs]
2018-12-12 10:55:34,006 INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.Acceptor) [main]
2018-12-12 10:55:34,031 INFO [SocketServer brokerId=0] Started 4 acceptor threads (kafka.network.SocketServer) [main]
2018-12-12 10:55:34,103 INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Produce]
2018-12-12 10:55:34,103 INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Fetch]
2018-12-12 10:55:34,107 INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-DeleteRecords]
2018-12-12 10:55:34,201 INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) [LogDirFailureHandler]
2018-12-12 10:55:34,295 INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient) [main]
2018-12-12 10:55:34,324 INFO Result of znode creation at /brokers/ids/0 is: OK (kafka.zk.KafkaZkClient) [main]
2018-12-12 10:55:34,326 INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local,9091,ListenerName(REPLICATION),SSL), EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local,9092,ListenerName(CLIENT),PLAINTEXT), EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local,9093,ListenerName(CLIENTTLS),SSL), EndPoint(192.168.65.3,31227,ListenerName(EXTERNAL),PLAINTEXT)) (kafka.zk.KafkaZkClient) [main]
2018-12-12 10:55:34,332 WARN No meta.properties file under dir /var/lib/kafka/kafka-log0/meta.properties (kafka.server.BrokerMetadataCheckpoint) [main]
2018-12-12T10:55:34.473+0000: 6.140: [GC pause (G1 Evacuation Pause) (young), 0.0156278 secs]
[Parallel Time: 14.2 ms, GC Workers: 1]
[GC Worker Start (ms): 6140.6]
[Ext Root Scanning (ms): 1.3]
[Update RS (ms): 1.8]
[Processed Buffers: 19]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.1]
[Object Copy (ms): 10.8]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 14.1]
[GC Worker End (ms): 6154.7]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.6 ms]
[Other: 0.8 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.5 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.0 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 16.0M(16.0M)->0.0B(83.0M) Survivors: 2048.0K->3072.0K Heap: 158.6M(257.0M)->143.9M(257.0M)]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12 10:55:34,505 INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) [controller-event-thread]
2018-12-12 10:55:34,523 INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-topic]
2018-12-12 10:55:34,531 INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Rebalance]
2018-12-12 10:55:34,531 INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Heartbeat]
2018-12-12 10:55:34,536 INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient) [controller-event-thread]
2018-12-12 10:55:34,553 INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient) [controller-event-thread]
2018-12-12 10:55:34,563 INFO [Controller id=0] 0 successfully elected as the controller (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,564 INFO [Controller id=0] Reading controller epoch from ZooKeeper (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,601 INFO [Controller id=0] Incrementing controller epoch in ZooKeeper (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,618 INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator) [main]
2018-12-12 10:55:34,639 INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator) [main]
2018-12-12 10:55:34,653 INFO [Controller id=0] Epoch incremented to 1 (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,653 INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,661 INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 16 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2018-12-12 10:55:34,669 INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,678 INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,697 INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1(kafka.coordinator.transaction.ProducerIdManager) [main]
2018-12-12 10:55:34,702 INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,813 DEBUG [Controller id=0] Register BrokerModifications handler for Set(0) (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:34,831 DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager) [controller-event-thread]
2018-12-12 10:55:34,904 INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) [main]
2018-12-12 10:55:34,930 INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) [main]
2018-12-12 10:55:34,932 INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) [TxnMarkerSenderThread-0]
2018-12-12 10:55:35,041 INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) [Controller-0-to-broker-0-send-thread]
2018-12-12 10:55:35,044 INFO [Controller id=0] Partitions being reassigned: Map() (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,045 INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,047 INFO [Controller id=0] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,047 INFO [Controller id=0] Current list of topics in the cluster: Set() (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,050 INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,072 INFO [Controller id=0] List of topics to be deleted: (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,076 INFO [Controller id=0] List of topics ineligible for deletion: (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,076 INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,080 INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,130 INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ReplicaStateMachine) [controller-event-thread]
2018-12-12 10:55:35,140 INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) [/config/changes-event-process-thread]
2018-12-12 10:55:35,188 INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ReplicaStateMachine) [controller-event-thread]
2018-12-12 10:55:35,275 INFO [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> Map() (kafka.controller.ReplicaStateMachine) [controller-event-thread]
2018-12-12 10:55:35,278 INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.PartitionStateMachine) [controller-event-thread]
2018-12-12 10:55:35,280 INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.PartitionStateMachine) [controller-event-thread]
2018-12-12 10:55:35,285 INFO [SocketServer brokerId=0] Started processors for 4 acceptors (kafka.network.SocketServer) [main]
2018-12-12 10:55:35,287 INFO [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> Map() (kafka.controller.PartitionStateMachine) [controller-event-thread]
2018-12-12 10:55:35,287 INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,295 INFO Kafka version : 2.0.0 (org.apache.kafka.common.utils.AppInfoParser) [main]
2018-12-12 10:55:35,295 INFO Kafka commitId : 3402a8361b734732 (org.apache.kafka.common.utils.AppInfoParser) [main]
2018-12-12 10:55:35,299 INFO [KafkaServer id=0] started (kafka.server.KafkaServer) [main]
2018-12-12 10:55:35,307 INFO [Controller id=0] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,312 INFO [Controller id=0] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,366 INFO [Controller id=0] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,369 INFO [Controller id=0] Partitions that completed preferred replica election: (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,370 INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,370 INFO [Controller id=0] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,373 INFO [Controller id=0] Starting preferred replica leader election for partitions (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,406 INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:35,750 INFO [RequestSendThread controllerId=0] Controller 0 connected to my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 0 rack: ) for sending state change requests (kafka.controller.RequestSendThread) [Controller-0-to-broker-0-send-thread]
2018-12-12 10:55:35,800 TRACE [Controller id=0 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 0 sent to broker my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 0 rack: ) (state.change.logger) [Controller-0-to-broker-0-send-thread]
2018-12-12 10:55:37,237 INFO [Controller id=0] Newly added brokers: 2, deleted brokers: , all live brokers: 0,2 (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:37,238 DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 2 (kafka.controller.ControllerChannelManager) [controller-event-thread]
2018-12-12 10:55:37,357 INFO [Controller id=0] New broker startup callback for 2 (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:37,359 DEBUG [Controller id=0] Register BrokerModifications handler for Vector(2) (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:37,379 INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) [Controller-0-to-broker-2-send-thread]
2018-12-12 10:55:37,385 TRACE [Controller id=0 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 1 sent to broker my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 0 rack: ) (state.change.logger) [Controller-0-to-broker-0-send-thread]
2018-12-12 10:55:37,958 INFO [RequestSendThread controllerId=0] Controller 0 connected to my-cluster-kafka-2.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 2 rack: ) for sending state change requests (kafka.controller.RequestSendThread) [Controller-0-to-broker-2-send-thread]
2018-12-12 10:55:38,005 TRACE [Controller id=0 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 0 sent to broker my-cluster-kafka-2.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 2 rack: ) (state.change.logger) [Controller-0-to-broker-2-send-thread]
2018-12-12 10:55:39,158 INFO [Controller id=0] Newly added brokers: 1, deleted brokers: , all live brokers: 0,1,2 (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:39,158 DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) [controller-event-thread]
2018-12-12 10:55:39,253 INFO [Controller id=0] New broker startup callback for 1 (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:39,254 DEBUG [Controller id=0] Register BrokerModifications handler for Vector(1) (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:39,258 TRACE [Controller id=0 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 2 sent to broker my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 0 rack: ) (state.change.logger) [Controller-0-to-broker-0-send-thread]
2018-12-12 10:55:39,281 TRACE [Controller id=0 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 1 sent to broker my-cluster-kafka-2.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 2 rack: ) (state.change.logger) [Controller-0-to-broker-2-send-thread]
2018-12-12 10:55:39,282 INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) [Controller-0-to-broker-1-send-thread]
2018-12-12 10:55:39,676 INFO [RequestSendThread controllerId=0] Controller 0 connected to my-cluster-kafka-1.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 1 rack: ) for sending state change requests (kafka.controller.RequestSendThread) [Controller-0-to-broker-1-send-thread]
2018-12-12 10:55:39,701 TRACE [Controller id=0 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 0 sent to broker my-cluster-kafka-1.my-cluster-kafka-brokers.kafka.svc.cluster.local:9091 (id: 1 rack: ) (state.change.logger) [Controller-0-to-broker-1-send-thread]
2018-12-12 10:55:40,425 TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 10:55:40,427 DEBUG [Controller id=0] Preferred replicas by broker Map() (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12T10:58:13.361+0000: 165.029: [GC pause (G1 Evacuation Pause) (young) (initial-mark), 0.0143418 secs]
[Parallel Time: 12.5 ms, GC Workers: 1]
[GC Worker Start (ms): 165029.0]
[Ext Root Scanning (ms): 2.9]
[Update RS (ms): 1.4]
[Processed Buffers: 37]
[Scan RS (ms): 0.1]
[Code Root Scanning (ms): 0.4]
[Object Copy (ms): 7.5]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 12.2]
[GC Worker End (ms): 165041.2]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 1.6 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 1.2 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.1 ms]
[Eden: 83.0M(83.0M)->0.0B(26.0M) Survivors: 3072.0K->8192.0K Heap: 226.9M(257.0M)->148.4M(257.0M)]
[Times: user=0.02 sys=0.00, real=0.01 secs]
2018-12-12T10:58:13.376+0000: 165.043: [GC concurrent-root-region-scan-start]
2018-12-12T10:58:13.380+0000: 165.047: [GC concurrent-root-region-scan-end, 0.0040551 secs]
2018-12-12T10:58:13.380+0000: 165.047: [GC concurrent-mark-start]
2018-12-12T10:58:13.393+0000: 165.061: [GC concurrent-mark-end, 0.0130916 secs]
2018-12-12T10:58:13.393+0000: 165.061: [GC remark 2018-12-12T10:58:13.393+0000: 165.061: [Finalize Marking, 0.0001133 secs] 2018-12-12T10:58:13.393+0000: 165.061: [GC ref-proc, 0.0003198 secs] 2018-12-12T10:58:13.394+0000: 165.061: [Unloading, 0.0131368 secs], 0.0153351 secs]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12T10:58:13.409+0000: 165.077: [GC cleanup 148M->148M(257M), 0.0009851 secs]
[Times: user=0.00 sys=0.00, real=0.01 secs]
2018-12-12 11:00:40,428 TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 11:00:40,428 DEBUG [Controller id=0] Preferred replicas by broker Map() (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12T11:03:32.419+0000: 484.087: [GC pause (G1 Evacuation Pause) (young), 0.0107972 secs]
[Parallel Time: 10.2 ms, GC Workers: 1]
[GC Worker Start (ms): 484086.7]
[Ext Root Scanning (ms): 1.9]
[Update RS (ms): 1.2]
[Processed Buffers: 20]
[Scan RS (ms): 0.0]
[Code Root Scanning (ms): 0.4]
[Object Copy (ms): 6.5]
[Termination (ms): 0.0]
[Termination Attempts: 1]
[GC Worker Other (ms): 0.0]
[GC Worker Total (ms): 10.1]
[GC Worker End (ms): 484096.8]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 0.5 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 0.2 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 26.0M(26.0M)->0.0B(76.0M) Survivors: 8192.0K->1024.0K Heap: 174.4M(257.0M)->146.5M(257.0M)]
[Times: user=0.01 sys=0.00, real=0.01 secs]
2018-12-12 11:05:34,640 INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2018-12-12 11:05:40,429 TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 11:05:40,430 DEBUG [Controller id=0] Preferred replicas by broker Map() (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 11:10:40,430 TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) [controller-event-thread]
2018-12-12 11:10:40,431 DEBUG [Controller id=0] Preferred replicas by broker Map() (kafka.controller.KafkaController) [controller-event-thread]
OK, so the node and the brokers are configured for the IP 192.168.65.3. So we need to find out whether this IP and these ports are accessible or not. What error do you get when you try to connect to 192.168.65.3:32423 with a Kafka client? Can you open at least a telnet connection to it?
From the Node.js client, it just says [publisher] buffer disabled
From kafka-console-producer,
[2018-12-12 21:33:22,267] WARN [Producer clientId=console-producer] Connection to node -1 (/192.168.65.3:31286) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Errors are mostly "Broker may not be available"
Can you telnet to it? Just something like telnet 192.168.65.3 32423
or telnet 192.168.65.3 31286
? Sorry, I'm not sure whether telnet still exists on Windows :-o
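If the telnet client isn't installed, a rough equivalent on Windows would be PowerShell's built-in Test-NetConnection cmdlet; just a sketch of the same reachability check, using the ports from above:
# PowerShell: check whether the NodePorts are reachable from the host
Test-NetConnection -ComputerName 192.168.65.3 -Port 32423
Test-NetConnection -ComputerName 192.168.65.3 -Port 31286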
hm.... I don't know.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-cluster-kafka-0 NodePort 10.99.229.52 <none> 9094:31227/TCP 1d
service/my-cluster-kafka-1 NodePort 10.98.240.60 <none> 9094:31141/TCP 1d
service/my-cluster-kafka-2 NodePort 10.109.88.222 <none> 9094:31390/TCP 1d
service/my-cluster-kafka-bootstrap ClusterIP 10.105.103.73 <none> 9091/TCP,9092/TCP,9093/TCP 1d
service/my-cluster-kafka-brokers ClusterIP None <none> 9091/TCP,9092/TCP,9093/TCP 1d
service/my-cluster-kafka-external-bootstrap NodePort 10.111.173.225 <none> 9094:31286/TCP 1d
service/my-cluster-zookeeper-client ClusterIP 10.102.194.217 <none> 2181/TCP 1d
service/my-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 1d
In this setup, if I do telnet 192.168.65.3 31286 or telnet 192.168.65.3 31227, I get the error message "Cannot connect to ... and port ...".
If I do telnet localhost 31286 or telnet localhost 31227, it connects.
From my Node.js client, kafka-console-producer.bat, or kafka-console-consumer.bat I can access it via localhost:31286 or localhost:31227, but if I send some messages I get this from the Node.js client: Unhandled rejection TimeoutError: Request timed out after 30000ms
and this from kafka-console-producer.bat:
[2018-12-14 15:12:48,056] WARN [Producer clientId=console-producer] Connection to node 1 (/192.168.65.3:31141) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-12-14 15:13:09,057] WARN [Producer clientId=console-producer] Connection to node 0 (/192.168.65.3:31227) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-12-14 15:13:30,057] WARN [Producer clientId=console-producer] Connection to node 2 (/192.168.65.3:31390) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
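If I understand these warnings correctly, the bootstrap connection over localhost works, but the metadata the cluster returns still advertises the brokers as 192.168.65.3:<nodePort>, which my Windows host can't reach, so the actual sends time out. A metadata listing should show exactly which addresses the brokers advertise; a sketch with kafkacat (assuming it is installed, and using the external bootstrap NodePort from above):
# -L prints cluster metadata; the broker lines show the advertised host:port clients get redirected to
kafkacat -b localhost:31286 -L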
Maybe it's time for me to dual-boot Ubuntu on my laptop or buy a MacBook. I first started with Minikube, but I had trouble getting any stable basic setup working, which is why I switched to Docker's Kubernetes. This problem of exposing Kafka happened not only with Strimzi but also with bitnami/kafka and Yolean/kubernetes-kafka: I could see all the NodePorts, but they weren't working... Anyway, sorry to bother you with this problem, and thanks for helping.
I'm sorry I couldn't help more. This is really a bit tricky, and especially in environments like Minikube and other local Kubernetes alternatives it sometimes doesn't work perfectly.
Not at all, you helped me figure out how to track down the problem here. I don't have time right now, but I will test it on another OS with a different k8s setup and post a follow-up comment if necessary.
Can I ask you a quick question? It relates to one of the reasons why I'm digging into exposing Kafka outside of the k8s cluster.
Do you think other application layers should be deployed together with the Kafka stack (Kafka, ZooKeeper, Kafka Manager, logging, monitoring, or whatever else manages Kafka)?
Application layers such as the gateway server, auth server, producer/consumer clients, stream-processing server, etc.
I feel like deploying everything together sounds OK since we are using k8s, but in some sense stateful workloads like Kafka and databases look like they should be deployed separately so that they are easier to manage (affinity/anti-affinity, assigning appropriate machines?).
I'm a newbie to both Kafka and DevOps, which is why I have a lot of questions about how it should be done in production.
I think it depends. If you have stuff already running outside of Kubernetes, it doesn't always make sense to move them into Kubernetes just for the sake of it. For new stuff, it might make sense to start directly on Kubernetes.
I know this is an old thread but I'm doing something similar and have some info to add in case it helps anyone.
I was digging through this, trying to set up Strimzi on a local Docker Desktop hosted k8s cluster.
I noted that the docker subnet is 192.168.65.0/28
The port discovered from
kubectl get service {cluster-name}-kafka-external-bootstrap -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}' -n kafka
was 32157, and the address was 192.168.65.4.
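That address is the node's InternalIP; a sketch of the kubectl query that returns it, assuming a single-node Docker Desktop cluster:
# InternalIP of the (only) node
kubectl get nodes -o=jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}{"\n"}'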
I can't connect via 192.168.65.4:32157
But I can connect to the bootstrap server via localhost:32157
And I can also connect via my {local network ip address}:32157 (as in NOT the docker subnet node address),
much the same way I can connect to other web applications running in my local k8s via localhost:<port>.
The problem I'm having now is that as soon as the first node's address came back, it was back at '192.168.65.4':
Connection to node 0 (/192.168.65.4:30053) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Going to see if I can force the advertised address to be 'localhost' in the strimzi node port config.
Via something like
- name: external
  port: 9094
  type: nodeport
  tls: false
  configuration:
    brokers:
      - broker: 0
        advertisedHost: localhost
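For all three brokers the override can simply be repeated per broker; a sketch of the whole external listener block, assuming the newer Strimzi listener schema and a 3-broker cluster:
listeners:
  - name: external
    port: 9094
    type: nodeport
    tls: false
    configuration:
      brokers:
        - broker: 0
          advertisedHost: localhost
          # advertisedPort: 30026   # optionally pin the advertised port as well
        - broker: 1
          advertisedHost: localhost
        - broker: 2
          advertisedHost: localhost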
UPDATE:
From broker startup logs:
EXTERNAL-9094://localhost:30026
kafka-console-consumer.bat --bootstrap-server localhost:30026 --topic whatev --from-beginning
Connected :)
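For the producer side, the matching check would be something along these lines (the flag is --broker-list on older client versions and --bootstrap-server on newer ones):
kafka-console-producer.bat --broker-list localhost:30026 --topic whatev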
Hi, I'm new to both Kafka and Kubernetes. I've read the docs and found some issues about exposing Kafka, but I couldn't make it happen. Connecting from inside the k8s cluster works with this:
my-cluster-kafka-bootstrap:9092
I'm on Windows, using Docker's Kubernetes. I applied this as the document said.
This is the result of
kubectl get all -n kafka
I tried to access Kafka from my own client via localhost:9092, localhost:32423, localhost:30341, 127.0.0.1, or 192.168.65.3 (the Docker for Desktop IP).
I feel like I'm almost there... need some help