redpanda-data / kminion

KMinion is a feature-rich Prometheus exporter for Apache Kafka written in Go. It is lightweight and highly configurable so that it will meet your requirements.
MIT License

kminion fails to export metrics when a broker is stopping, starting or restarting #253

Open dattagiri opened 6 months ago

dattagiri commented 6 months ago

We have observed that kminion fails to report metrics while a Kafka broker is starting, stopping, or restarting. Once the Kafka process has finished stopping or starting, kminion reports the metrics again as expected.

weeco commented 6 months ago

Hey, are you using the endToEnd reporting? If so, I believe I'm already looking into this (have not yet found a proper solution though, here's the draft PR: https://github.com/redpanda-data/kminion/pull/252).

dattagiri commented 6 months ago

We are not using end-to-end reporting. We are just scraping the consumer lag metrics, and the behaviour is the same with both scrape modes (adminApi and offsetsTopic). Even disabling logDirs does not change the outcome :(
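
(For context: "adminApi"-style consumer-lag scraping essentially means asking the brokers for each group's committed offsets plus the partition end offsets and taking the difference. The sketch below illustrates that idea with franz-go/kadm, the client library whose logger shows up in the logs later in this thread; it is not kminion's actual code, and the broker address and group name are placeholders.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/twmb/franz-go/pkg/kadm"
	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	// Placeholder seed broker and consumer group; replace with your own.
	cl, err := kgo.NewClient(kgo.SeedBrokers("broker-a1:9092"))
	if err != nil {
		panic(err)
	}
	defer cl.Close()
	adm := kadm.NewClient(cl)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	const group = "my-consumer-group"

	// Committed offsets for the group, fetched via the Kafka admin protocol.
	committed, err := adm.FetchOffsets(ctx, group)
	if err != nil {
		panic(err)
	}

	// End offsets (high watermarks) for every topic the group has offsets for.
	topics := make([]string, 0, len(committed))
	for topic := range committed {
		topics = append(topics, topic)
	}
	ends, err := adm.ListEndOffsets(ctx, topics...)
	if err != nil {
		panic(err)
	}

	// Lag per partition = high watermark - committed offset
	// (per-partition error fields are ignored for brevity).
	for topic, partitions := range committed {
		for partition, c := range partitions {
			if end, ok := ends[topic][partition]; ok {
				fmt.Printf("%s %s[%d] lag=%d\n", group, topic, partition, end.Offset-c.At)
			}
		}
	}
}
```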

weeco commented 6 months ago

Okay I need more details then, ideally instructions to reproduce this. This should definitely not happen. Any suspicious log messages?

dattagiri commented 6 months ago

So far, we haven't seen any suspicious log messages.

This can be reproduced as follows:

  1. Keep the Kafka cluster and kminion running.
  2. Watch the consumer lag, or any other metric you want to monitor, through Prometheus/Grafana.
  3. Stop Kafka on one broker. This can result in no metrics being exposed temporarily (1 or 2 minutes, but it varies); afterwards kminion resumes exposing the metrics.
  4. After a while, start the Kafka service on that broker again. This can also result in no metrics being exposed temporarily (2 or 3 minutes, but it varies); afterwards kminion resumes exposing the metrics.

This can also be reproduced by simply restarting the Kafka process. Please note that you won't see this absence of metrics every time the Kafka process stops/starts/restarts, but it does happen most of the time. Sometimes the stop and start of the Kafka process is so quick that the gap falls between Prometheus scrapes and never shows up.
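
(As a side note on step 2: besides Prometheus/Grafana, a tiny poller like the sketch below makes the gap, and the exporter's response time, visible directly. It assumes the kminion exporter is reachable at localhost:8080; adjust the address, client timeout, and poll interval to your setup.)

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Adjust to wherever your kminion exporter listens (assumption: localhost:8080).
	const target = "http://localhost:8080/metrics"

	// A client timeout in the range of a typical Prometheus scrape timeout,
	// so slow responses show up as errors rather than slow successes.
	client := &http.Client{Timeout: 10 * time.Second}

	for {
		start := time.Now()
		resp, err := client.Get(target)
		if err != nil {
			fmt.Printf("%s scrape failed after %v: %v\n", start.Format(time.RFC3339), time.Since(start), err)
		} else {
			n, _ := io.Copy(io.Discard, resp.Body) // drain the body to measure the full response
			resp.Body.Close()
			fmt.Printf("%s status=%d bytes=%d took=%v\n", start.Format(time.RFC3339), resp.StatusCode, n, time.Since(start))
		}
		time.Sleep(15 * time.Second) // a typical scrape interval
	}
}
```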

dattagiri commented 6 months ago

A few more details regarding the kminion logs when the Kafka service on one of the brokers was stopped and started. The scrape mode is offsetsTopic with logDirs disabled.

# kminion logs while broker a1 was being stopped @15:19

{"level":"info","ts":"2024-03-12T15:19:41.787Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}
{"level":"warn","ts":"2024-03-12T15:19:43.087Z","logger":"main.kgo_client","msg":"unable to open connection to broker","addr":"broker-a1:kafka-port","broker":"1001","err":"dial tcp broker-ip:kafka-port: connect: connection refused"}
{"level":"info","ts":"2024-03-12T15:19:43.344Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[46]}"}
{"level":"warn","ts":"2024-03-12T15:19:44.020Z","logger":"main.kgo_client","msg":"unable to open connection to broker","addr":"broker-a1:kafka-port","broker":"1001","err":"dial tcp broker-ip:kafka-port: connect: connection refused"}
{"level":"info","ts":"2024-03-12T15:19:45.802Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[1 5 13 14]}"}
{"level":"info","ts":"2024-03-12T15:19:47.132Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[47 48]}"}
{"level":"warn","ts":"2024-03-12T15:19:47.741Z","logger":"main.kgo_client","msg":"unable to open connection to broker","addr":"broker-a1:kafka-port","broker":"1001","err":"dial tcp broker-ip:kafka-port: connect: connection refused"}
{"level":"info","ts":"2024-03-12T15:19:47.995Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[2 6 10 12 15 19 23 27 33 34 35 36 39 40 44]}"}

# kminion logs after broker a1 was started @15:23

{"level":"info","ts":"2024-03-12T15:25:10.675Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[0]}"}
{"level":"info","ts":"2024-03-12T15:25:10.779Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[3 8 16 20 25 29 42 43 49]}"}
{"level":"info","ts":"2024-03-12T15:25:10.826Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[4 7]}"}
{"level":"info","ts":"2024-03-12T15:25:13.746Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}
{"level":"info","ts":"2024-03-12T15:25:14.271Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}
{"level":"info","ts":"2024-03-12T15:25:17.210Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}

dattagiri commented 6 months ago

[Screenshot taken 2024-03-12 at 15:40:10: consumer lag metrics absent while the stop and start described above was in progress.]

sszp commented 6 months ago

Further debugging of this issue shows that kminion does expose the metrics every time we query it, but while brokers are being stopped it takes more time for kminion to generate the metrics, most likely because of the failing connection to the broker that is being stopped. The typical 6-7 second response time increases to 15-20 seconds, which is more than the scrape timeout on the Prometheus servers, and that is what causes the missing metrics.

The question is whether it's possible to handle the failing broker connections in a way that doesn't increase the response time.
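
(One general technique the question points at, sketched below with only the standard library: fan the per-broker requests out concurrently, bound each with its own context deadline, and serve whatever came back instead of waiting on the unreachable broker. `fetchFromBroker` is a hypothetical placeholder, not a kminion function, and this says nothing about how kminion is actually structured internally.)

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// fetchFromBroker is a hypothetical placeholder for any per-broker request
// (metadata, log dirs, offset listing, ...). It must respect ctx cancellation.
func fetchFromBroker(ctx context.Context, addr string) (string, error) {
	select {
	case <-time.After(200 * time.Millisecond): // pretend work
		return "metrics from " + addr, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

// collect queries all brokers concurrently and returns the partial results
// that arrived within perBrokerTimeout; an unreachable broker only costs
// that bound, not the whole scrape.
func collect(brokers []string, perBrokerTimeout time.Duration) []string {
	var (
		mu      sync.Mutex
		results []string
		wg      sync.WaitGroup
	)
	for _, addr := range brokers {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			ctx, cancel := context.WithTimeout(context.Background(), perBrokerTimeout)
			defer cancel()
			res, err := fetchFromBroker(ctx, addr)
			if err != nil {
				fmt.Printf("skipping %s: %v\n", addr, err) // expose partial data instead of stalling
				return
			}
			mu.Lock()
			results = append(results, res)
			mu.Unlock()
		}(addr)
	}
	wg.Wait()
	return results
}

func main() {
	brokers := []string{"broker-a1:9092", "broker-a2:9092", "broker-a3:9092"}
	fmt.Println(collect(brokers, 3*time.Second))
}
```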