alagiz opened this issue 4 years ago
Alright, I know why it happens: with a single partition, only one consumer in the group can receive messages. When I increased the number of partitions to 2, it works well. However, if I make the number of partitions 3 while having 2 consumers, the following occurs:
```
consumer-1_1 | Got assigned: [ queueing.job.test[0:#], queueing.job.test[1:#] ]
consumer-0_1 | Got assigned: [ queueing.job.test[2:#] ]
# send message
consumer-0_1 | {"userId":"jimmy","jobId":"jimmy_46d74824-27fb-4249-ab66-2dc42e1d455c","jobStep":0,"isJobDone":false}
# send message
consumer-1_1 | {"userId":"jimmy","jobId":"jimmy_83b0561a-05d0-499b-abf8-87fd1067ff8c","jobStep":0,"isJobDone":false}
# send message
consumer-1_1 | {"userId":"jimmy","jobId":"jimmy_b5816ece-34e2-4e26-a821-d2aa8e8a9d53","jobStep":0,"isJobDone":false}
```
I will be looking into how to manage a dynamic number of consumers while keeping round-robin message distribution.
Should I adjust the number of partitions when a consumer is added/removed?
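In the meantime, one workaround I can think of is pinning the partition explicitly on the producer side instead of relying on the default partitioner. A minimal sketch, assuming cppkafka, a broker on localhost:9092, and the 3-partition topic from above:

```cpp
#include <cppkafka/cppkafka.h>
#include <string>

int main() {
    cppkafka::Configuration config = {
        { "metadata.broker.list", "localhost:9092" } // assumed broker address
    };
    cppkafka::Producer producer(config);

    const int num_partitions = 3; // must match the topic's real partition count

    // Cycle over partitions explicitly; each consumer then receives the
    // share of messages matching the partitions it owns.
    for (int i = 0; i < 6; ++i) {
        std::string payload = "{\"jobStep\":" + std::to_string(i) + "}";
        producer.produce(cppkafka::MessageBuilder("queueing.job.test")
                             .partition(i % num_partitions)
                             .payload(payload));
    }
    producer.flush();
}
```

Still, letting subscribe() handle the assignment and simply keeping the partition count at or above the consumer count seems like the intended approach.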
Hello!
I have the following desired situation: several consumers in the same group consume from one topic, and produced messages are distributed round-robin between them.
I couldn't achieve that, hence this issue.
details:
code

consumer.cpp

```cpp
#include
```
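In short, the consumer does roughly this (a minimal sketch, assuming cppkafka, a broker on localhost:9092, and the group.id testGroup that shows up in the kafka logs below):

```cpp
#include <cppkafka/cppkafka.h>
#include <iostream>

int main() {
    cppkafka::Configuration config = {
        { "metadata.broker.list", "localhost:9092" }, // assumed broker address
        { "group.id", "testGroup" }
    };
    cppkafka::Consumer consumer(config);

    // Prints the "Got assigned:" lines seen above whenever the broker
    // rebalances partitions across the group.
    consumer.set_assignment_callback([](cppkafka::TopicPartitionList& partitions) {
        std::cout << "Got assigned: " << partitions << std::endl;
    });

    // subscribe() joins the consumer group and lets the broker assign
    // partitions; the static alternative would be
    // consumer.assign({ cppkafka::TopicPartition("queueing.job.test", 0) });
    consumer.subscribe({ "queueing.job.test" });

    while (true) {
        cppkafka::Message msg = consumer.poll();
        if (msg && !msg.get_error()) {
            std::cout << msg.get_payload() << std::endl;
        }
    }
}
```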
logs

In case I use consumer.assign(): no message seems to be consumed when I produce one, and Kafka doesn't seem to see the consumers either.
kafka logs
```
[2020-06-16 11:48:32,280] INFO Creating topic queueing.job.test with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
[2020-06-16 11:48:32,291] INFO [KafkaApi-1001] Auto creation of topic queueing.job.test with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2020-06-16 11:48:32,304] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(queueing.job.test-0) (kafka.server.ReplicaFetcherManager)
[2020-06-16 11:48:32,307] INFO [Log partition=queueing.job.test-0, dir=/kafka/kafka-logs-614330c7acc4] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-06-16 11:48:32,307] INFO [Log partition=queueing.job.test-0, dir=/kafka/kafka-logs-614330c7acc4] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-06-16 11:48:32,308] INFO Created log for partition queueing.job.test-0 in /kafka/kafka-logs-614330c7acc4/queueing.job.test-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.5-IV0, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2020-06-16 11:48:32,310] INFO [Partition queueing.job.test-0 broker=1001] No checkpointed highwatermark is found for partition queueing.job.test-0 (kafka.cluster.Partition)
[2020-06-16 11:48:32,310] INFO [Partition queueing.job.test-0 broker=1001] Log loaded for partition queueing.job.test-0 with initial high watermark 0 (kafka.cluster.Partition)
[2020-06-16 11:48:32,310] INFO [Partition queueing.job.test-0 broker=1001] queueing.job.test-0 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
```

In case I use consumer.subscribe(): a message is consumed, but only by one of the consumers (depending on initialization order).
kafka logs
```
[2020-06-16 11:40:01,556] INFO Creating topic queueing.job.test with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
[2020-06-16 11:40:01,566] INFO [KafkaApi-1001] Auto creation of topic queueing.job.test with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2020-06-16 11:40:01,578] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(queueing.job.test-0) (kafka.server.ReplicaFetcherManager)
[2020-06-16 11:40:01,580] INFO [Log partition=queueing.job.test-0, dir=/kafka/kafka-logs-00d6455ecb18] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-06-16 11:40:01,581] INFO [Log partition=queueing.job.test-0, dir=/kafka/kafka-logs-00d6455ecb18] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-06-16 11:40:01,581] INFO Created log for partition queueing.job.test-0 in /kafka/kafka-logs-00d6455ecb18/queueing.job.test-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.5-IV0, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2020-06-16 11:40:01,583] INFO [Partition queueing.job.test-0 broker=1001] No checkpointed highwatermark is found for partition queueing.job.test-0 (kafka.cluster.Partition)
[2020-06-16 11:40:01,583] INFO [Partition queueing.job.test-0 broker=1001] Log loaded for partition queueing.job.test-0 with initial high watermark 0 (kafka.cluster.Partition)
[2020-06-16 11:40:01,583] INFO [Partition queueing.job.test-0 broker=1001] queueing.job.test-0 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2020-06-16 11:40:02,940] INFO [GroupCoordinator 1001]: Dynamic Member with unknown member id joins group testGroup in Empty state. Created a new member id rdkafka-da354c26-f61f-479c-954e-60ed69eb0a54 for this member and add to the group. (kafka.coordinator.group.GroupCoordinator)
[2020-06-16 11:40:02,941] INFO [GroupCoordinator 1001]: Preparing to rebalance group testGroup in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member rdkafka-da354c26-f61f-479c-954e-60ed69eb0a54 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2020-06-16 11:40:02,943] INFO [GroupCoordinator 1001]: Stabilized group testGroup generation 1 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
[2020-06-16 11:40:02,950] INFO [GroupCoordinator 1001]: Assignment received from leader for group testGroup for generation 1 (kafka.coordinator.group.GroupCoordinator)
[2020-06-16 11:40:03,535] INFO [GroupCoordinator 1001]: Dynamic Member with unknown member id joins group testGroup in Stable state. Created a new member id rdkafka-f0785b57-0660-4b54-a924-ae41c9cc8877 for this member and add to the group. (kafka.coordinator.group.GroupCoordinator)
[2020-06-16 11:40:03,536] INFO [GroupCoordinator 1001]: Preparing to rebalance group testGroup in state PreparingRebalance with old generation 1 (__consumer_offsets-49) (reason: Adding new member rdkafka-f0785b57-0660-4b54-a924-ae41c9cc8877 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2020-06-16 11:40:05,045] INFO [GroupCoordinator 1001]: Stabilized group testGroup generation 2 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
[2020-06-16 11:40:05,052] INFO [GroupCoordinator 1001]: Assignment received from leader for group testGroup for generation 2 (kafka.coordinator.group.GroupCoordinator)
```

@accelerated could you, perhaps, help me out with this one?
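One thing I notice in both log captures: the broker auto-created queueing.job.test with 1 partition, and a single partition can only ever be owned by one member of a group, which would explain why just one consumer gets the messages. A quick way to double-check the partition count from cppkafka (a sketch, assuming a broker on localhost:9092):

```cpp
#include <cppkafka/cppkafka.h>
#include <iostream>

int main() {
    cppkafka::Configuration config = {
        { "metadata.broker.list", "localhost:9092" }, // assumed broker address
        { "group.id", "testGroup" }
    };
    cppkafka::Consumer consumer(config);

    // Fetch cluster metadata and report how many partitions the topic has;
    // with only 1, a second consumer in the same group will sit idle.
    cppkafka::Metadata metadata = consumer.get_metadata();
    for (const cppkafka::TopicMetadata& topic : metadata.get_topics()) {
        if (topic.get_name() == "queueing.job.test") {
            std::cout << topic.get_name() << ": "
                      << topic.get_partitions().size() << " partition(s)"
                      << std::endl;
        }
    }
}
```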