leegin opened this issue 4 years ago (status: Open)
Same problem here: Kafka 2.2.0 with Burrow 1.3.0, 1.3.2, and 1.3.3, and none of them works. I always receive a "cluster or consumer not found" message when trying to get consumer group status or lag.
After specifying kafka-version in the client-profile section, it worked:
```toml
[cluster.prd-kafka]
class-name="kafka"
client-profile="default-client"
...

[client-profile.default-client]
kafka-version="2.0.0"
```
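With the client profile fixed, an individual group's evaluated status can be checked against Burrow's `/v3/kafka/(cluster)/consumer/(group)/status` endpoint. A minimal sketch of such a check; the Burrow host/port and the helper names are assumptions, not taken from this thread:

```python
import json
from urllib.request import urlopen

# Assumed Burrow HTTP endpoint; adjust host/port to your deployment.
BURROW = "http://localhost:8000"

def group_status(cluster: str, group: str) -> dict:
    """Fetch a consumer group's evaluated status from Burrow's v3 HTTP API."""
    url = f"{BURROW}/v3/kafka/{cluster}/consumer/{group}/status"
    with urlopen(url) as resp:
        return json.load(resp)

def is_found(payload: dict) -> bool:
    """Burrow sets error=true in the response body for unknown clusters/groups."""
    return not payload.get("error", False)
```

Polling `is_found(group_status(...))` for a known-active group is a quick way to confirm whether the "not found" responses go away after the kafka-version change.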
I have a dockerized burrow setup to monitor my dev kafka environment. When I check /v3/kafka/kafka-dev/consumer, it lists only 2 consumers.
"consumers": [ "adept-tracker-atkins", "burrow-kafka-dev" ], "request": { "url": "/v3/kafka/kafka-dev/consumer", "host": "a9720a5c007a" } }
But I have 15 consumers in total, and all of them are committing offsets to Kafka. When I restart the Burrow container, it lists all the consumers, but their status is "NOT FOUND".
{"error":false,"message":"consumer list returned","consumers":["console-consumer-22451","adept-egress-processing-generic2","console-consumer-83905","adept-tracker-atkins-dev","connect-ingress-gps-location-s3","connect-egress-gps-location-s3","console-consumer-92438","burrow-kafka-dev","connect-driving-events-egress-log-s3","connect-batch-reject-s3","connect-egress-gps-location-log-s3","connect-reject-gps-location-s3","adept-tracker-atkins","connect-reject-ingress-gps-location-s3","console-consumer-24933","adept-egress-processing-generic1","connect-driving-events-validated-enriched-s3","adept-egress-processing-tomtom","connect-driving-events-ingress-raw-rejects-s3","connect-driving-events-ingress-raw-s3","adept-egress-processing-example","adept-stream-core-processing-drivingevents"],"request":{"url":"/v3/kafka/kafka-dev/consumer","host":"e0275992bad7"}}
My burrow.toml file is as follows.
```toml
[general]
pidfile="burrow.pid"
stdout-logfile="burrow.out"

[logging]
filename="logs/burrow.log"
level="info"
maxsize=100
maxbackups=30
maxage=10
use-localtime=false
use-compression=true

[zookeeper]
servers=[ "zookeeper:2181" ]
timeout=6
root-path="/burrow"

[client-profile.kafka10]
kafka-version="0.10.1.0"
client-id="burrow-client"

[client-profile.zk-kafka10]
kafka-version="0.10.1.0"
client-id="burrow-client"

[cluster.kafka-dev]
class-name="kafka"
servers=[ "kafka:9092" ]
client-profile="kafka10"
topic-refresh=60
offset-refresh=30

[consumer.kafka-dev]
class-name="kafka"
cluster="kafka-dev"
client-profile="kafka10"
servers=[ "kafka:9092" ]
group-blacklist="^(console-consumer-|python-kafka-consumer-).*$"
group-whitelist=""

[consumer.kafka-dev_zk]
class-name="kafka_zk"
cluster="kafka-dev"
client-profile="zk-kafka10"
servers=[ "zookeeper:2181" ]
zookeeper-timeout=30
group-blacklist="^(console-consumer-|python-kafka-consumer-).*$"
group-whitelist=""

[httpserver.default]
address=":8005"

[storage.default]
class-name="inmemory"
workers=20
intervals=15
expire-group=604800
min-distance=1
```
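Given the fix reported at the top of the thread, one thing to try: the client profiles here pin kafka-version="0.10.1.0", and if the dev brokers actually run a newer Kafka, Burrow may fail to read committed offsets. A sketch of the change, assuming the brokers are on 2.2.0 (check your actual broker version first; the profile names are kept unchanged so the rest of the config stays valid):

```toml
[client-profile.kafka10]
kafka-version="2.2.0"   # assumption: set this to your real broker version
client-id="burrow-client"

[client-profile.zk-kafka10]
kafka-version="2.2.0"   # assumption: set this to your real broker version
client-id="burrow-client"
```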
@toddpalino Can you help me get this working? I have to go to production soon.