lensesio / fast-data-dev

Kafka Docker for development. Kafka, Zookeeper, Schema Registry, Kafka-Connect, 20+ connectors
https://lenses.io
Apache License 2.0

Producer Timeout #83

Open lagorsse opened 6 years ago

lagorsse commented 6 years ago

Hi,

I am trying to use the Kafka Docker image on Windows directly in Docker (without a VM):

docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 landoop/fast-data-dev:latest

I don't have this issue when running with the VM, so I believe my code is fine.

It used to work fine, but now my producer times out. The client connects, becomes ready, and the topics are created correctly; there is just no data, only timeouts. I checked the logs but didn't find any useful information, though I may have missed something important.
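For context, here is a minimal sketch of the producer side. This is not my exact code; it assumes the kafka-python client and uses topic5 (one of the topics from the broker log below) purely as an illustration:

```python
# Minimal reproduction sketch (assumes kafka-python; not the exact code in use).
from kafka import KafkaProducer
from kafka.errors import KafkaTimeoutError

producer = KafkaProducer(bootstrap_servers="localhost:9092")

try:
    # send() returns a future; get() blocks until the broker acknowledges
    # the record or the timeout below expires.
    metadata = producer.send("topic5", b"hello").get(timeout=30)
    print(f"written to {metadata.topic}-{metadata.partition} @ offset {metadata.offset}")
except KafkaTimeoutError:
    # This is the failure seen here: the topic gets created,
    # but the produce request never completes.
    print("producer timed out waiting for the broker")
finally:
    producer.close()
```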

Here is the end of broker.log from topic creation. Do you have any clues?

[2018-10-26 10:15:58,094] INFO [Log partition=topic5-0, dir=/data/kafka/logdir] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2018-10-26 10:15:58,094] INFO [Log partition=topic5-0, dir=/data/kafka/logdir] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-10-26 10:15:58,094] INFO Created log for partition topic5-0 in /data/kafka/logdir with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-26 10:15:58,094] INFO [Partition topic5-0 broker=0] No checkpointed highwatermark is found for partition topic5-0 (kafka.cluster.Partition)
[2018-10-26 10:15:58,094] INFO Replica loaded for partition topic5-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-10-26 10:15:58,095] INFO [Partition topic5-0 broker=0] topic5-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-10-26 10:15:58,095] INFO [ReplicaAlterLogDirsManager on broker 0] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)

Services Logs:

Starting services. This is Landoop’s fast-data-dev. Kafka 1.1.1-L0 (Landoop's Kafka Distribution). You may visit http://localhost:3030 in about a minute.
2018-10-26 09:50:13,180 INFO Included extra file "/etc/supervisord.d/01-zookeeper.conf" during parsing
2018-10-26 09:50:13,180 INFO Included extra file "/etc/supervisord.d/02-broker.conf" during parsing
2018-10-26 09:50:13,180 INFO Included extra file "/etc/supervisord.d/03-schema-registry.conf" during parsing
2018-10-26 09:50:13,180 INFO Included extra file "/etc/supervisord.d/04-rest-proxy.conf" during parsing
2018-10-26 09:50:13,180 INFO Included extra file "/etc/supervisord.d/05-connect-distributed.conf" during parsing
2018-10-26 09:50:13,180 INFO Included extra file "/etc/supervisord.d/06-caddy.conf" during parsing
2018-10-26 09:50:13,180 INFO Included extra file "/etc/supervisord.d/07-smoke-tests.conf" during parsing
2018-10-26 09:50:13,181 INFO Included extra file "/etc/supervisord.d/08-logs-to-kafka.conf" during parsing
2018-10-26 09:50:13,181 INFO Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
2018-10-26 09:50:13,181 INFO Set uid to user 0 succeeded
2018-10-26 09:50:13,192 INFO RPC interface 'supervisor' initialized
2018-10-26 09:50:13,192 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-10-26 09:50:13,192 INFO supervisord started with pid 6
2018-10-26 09:50:14,196 INFO spawned: 'sample-data' with pid 162
2018-10-26 09:50:14,199 INFO spawned: 'zookeeper' with pid 163
2018-10-26 09:50:14,203 INFO spawned: 'caddy' with pid 165
2018-10-26 09:50:14,207 INFO spawned: 'broker' with pid 166
2018-10-26 09:50:14,212 INFO spawned: 'smoke-tests' with pid 168
2018-10-26 09:50:14,220 INFO spawned: 'connect-distributed' with pid 174
2018-10-26 09:50:14,222 INFO spawned: 'logs-to-kafka' with pid 177
2018-10-26 09:50:14,231 INFO spawned: 'schema-registry' with pid 188
2018-10-26 09:50:14,234 INFO spawned: 'rest-proxy' with pid 190
2018-10-26 09:50:15,225 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,225 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,226 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,226 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,226 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,226 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,226 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,247 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:50:15,247 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-26 09:51:45,687 INFO exited: logs-to-kafka (exit status 0; expected)
2018-10-26 09:52:17,906 INFO exited: sample-data (exit status 0; expected)
2018-10-26 09:53:23,649 INFO exited: smoke-tests (exit status 0; expected)
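Since topic creation works but produce requests time out, I can separate the metadata path from the data path on the Windows host. A sketch of that check (again assuming kafka-python, with localhost:9092 as in the run command above):

```python
# Connectivity check sketch (assumes kafka-python on the Windows host).
from kafka import KafkaConsumer

# A consumer only needs cluster metadata to list topics, so this succeeding
# while produce requests time out would point at the data path (for example,
# the address the broker advertises back to clients) rather than at basic
# connectivity to localhost:9092.
consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
print(sorted(consumer.topics()))
consumer.close()
```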