baconalot opened this issue 9 years ago (status: Open)
Hey @baconalot, please check out the latest master; we got rid of using the allPartitionMap (which is the topic-partition map used by the fetcher manager) in the fetcherRoutines, so this problem should vanish now. Let us know, though, if it doesn't work as expected for you.
Thanks!
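For context on why a shared map in the fetcher routines can crash a consumer: Go maps are not safe for concurrent use, so several goroutines touching one topic-partition map is enough to bring the process down. A minimal, self-contained sketch of that class of failure (my own illustration, not go_kafka_client code):

```go
package main

// Illustration only, not go_kafka_client code: several goroutines
// mutate one shared map, the way multiple fetcher routines might
// share a topic-partition map. Go maps are not safe for concurrent
// use, so the runtime typically kills this quickly with
// "fatal error: concurrent map writes" (or a map read/write panic).
func main() {
	shared := make(map[string][]int32) // hypothetical topic -> partitions map

	for i := 0; i < 4; i++ {
		go func(id int32) {
			for {
				shared["events"] = append(shared["events"], id) // unsynchronized write
			}
		}(int32(i))
	}
	for {
		_ = len(shared["events"]) // unsynchronized read in main
	}
}
```

Removing the shared structure from the per-partition routines, as the master change described above does, sidesteps this entirely.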
Hi @serejja, that helps with the crash, which does not occur anymore. But now either one of the pids gets all the partitions, or none of them does. It is arbitrary whether the latest-started one gets the partitions or they remain with the first-started one, but they never balance like: pid1 -> part1 + part2, [enter pid2], pid1 -> part2 & pid2 -> part1.
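For reference, the end state expected here is a balanced assignment: with two group members and two partitions, each member owns one. A small sketch of an even, range-style split (an illustration with my own names, not the library's actual rebalance algorithm):

```go
package main

import (
	"fmt"
	"sort"
)

// assignPartitions spreads partitions over consumers range-style:
// each consumer gets an even share, and the first consumers (in
// sorted order) pick up any remainder. Illustration only; this is
// not go_kafka_client's rebalance code.
func assignPartitions(partitions []int32, consumers []string) map[string][]int32 {
	sort.Strings(consumers)
	assignment := make(map[string][]int32, len(consumers))

	per := len(partitions) / len(consumers)
	extra := len(partitions) % len(consumers)

	idx := 0
	for i, c := range consumers {
		n := per
		if i < extra {
			n++ // the first `extra` consumers take one more partition
		}
		assignment[c] = partitions[idx : idx+n]
		idx += n
	}
	return assignment
}

func main() {
	// With one consumer, pid1 owns both partitions:
	// map[pid1:[0 1]]
	fmt.Println(assignPartitions([]int32{0, 1}, []string{"pid1"}))
	// After pid2 joins, a rebalance should split them:
	// map[pid1:[0] pid2:[1]]
	fmt.Println(assignPartitions([]int32{0, 1}, []string{"pid1", "pid2"}))
}
```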
Hi @baconalot,
sorry for letting this get abandoned. Does this still occur? Lots of changes have been made since then, including fixes for many rebalance issues.
Thanks!
Usecase: run N Chronos/Mesos jobs for a single consumer group, where N == the number of partitions in the topic. (Let's assume a single-topic consumer group here.)
Test:
- Create a Go consumer that consumes from a large local Kafka topic with 2 partitions.
- Start once (pid 1) -> looks OK, alternates between partition 0 and 1.
- Start another (pid 2) -> looks OK, consumes only from one partition.
- But pid 1 then crashes with the following stack:
Fix: I am not sure if this has some unwanted side effects, but I was able to fix this in fetcher.go in go_kafka_client:
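The actual patch is not included above. Purely to illustrate one common shape such a change can take in Go (all names here are hypothetical, not the real fetcher.go types): guard the shared topic-partition map with a sync.RWMutex so fetcher goroutines cannot read it while it is being rewritten.

```go
package main

import "sync"

// Hypothetical sketch; the real fix is not shown in this thread.
// A sync.RWMutex around a shared topic-partition map is one
// conventional way to make concurrent access safe.
type partitionMap struct {
	mu sync.RWMutex
	m  map[string][]int32 // hypothetical topic -> partitions layout
}

func (p *partitionMap) get(topic string) []int32 {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.m[topic]
}

func (p *partitionMap) set(topic string, partitions []int32) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.m[topic] = partitions
}

func main() {
	pm := &partitionMap{m: make(map[string][]int32)}
	pm.set("events", []int32{0, 1})
	_ = pm.get("events")
}
```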