Telefonica / prometheus-kafka-adapter

Use Kafka as a remote storage database for Prometheus (remote write only)
Apache License 2.0
364 stars · 135 forks

Topic is being created OK, but no messages are produced #130

Open jseparovic opened 8 months ago

jseparovic commented 8 months ago

Hi,

I can't seem to get the adapter to produce any messages to the topic.

It creates the topic OK, but the topic stays empty.

prometheus-kafka-adapter  | %5|1707561394.646|PARTCNT|rdkafka#producer-1| [thrd:ssl://kafka:9092/bootstrap]: Topic edge._e59e7552aee847f2876378d36a4678ea._44b4e6be1f78ffb8ee7a0225f2c23266.prometheus partition count changed from 1 to 0
prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:36:39Z","ip":"172.17.0.3","latency":6712271,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:36:39-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | %5|1707561399.802|PARTCNT|rdkafka#producer-1| [thrd:ssl://kafka:9092/bootstrap]: Topic edge._e59e7552aee847f2876378d36a4678ea._44b4e6be1f78ffb8ee7a0225f2c23266.prometheus partition count changed from 1 to 0
prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:36:48Z","ip":"172.17.0.3","latency":2645279,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:36:48-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:36:48Z","ip":"172.17.0.3","latency":3263900,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:36:48-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:36:48Z","ip":"172.17.0.3","latency":2962779,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:36:48-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | %5|1707561408.802|PARTCNT|rdkafka#producer-1| [thrd:ssl://kafka:9092/bootstrap]: Topic edge._e59e7552aee847f2876378d36a4678ea._44b4e6be1f78ffb8ee7a0225f2c23266.prometheus partition count changed from 1 to 0
prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:36:51Z","ip":"172.17.0.3","latency":176755,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:36:51-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:36:51Z","ip":"172.17.0.3","latency":40136,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:36:51-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:36:53Z","ip":"172.17.0.3","latency":1149837,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:36:53-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | %5|1707561413.810|PARTCNT|rdkafka#producer-1| [thrd:ssl://kafka:9092/bootstrap]: Topic edge._e59e7552aee847f2876378d36a4678ea._44b4e6be1f78ffb8ee7a0225f2c23266.prometheus partition count changed from 1 to 0

Do you think it's related to the PARTCNT messages above ("partition count changed from 1 to 0")?
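For reference, whether the topic really exists (and whether anything was ever produced to it) can be checked with the standard Kafka CLI tools. The broker address and topic name below are taken from the logs in this issue; paths and security settings are assumptions and may need adjusting for your cluster:

```shell
# Describe the topic to see its actual partition count.
# The ssl:// prefix in the logs suggests the broker requires TLS; if so,
# add --command-config pointing at a properties file with the SSL settings.
kafka-topics.sh --bootstrap-server kafka:9092 --describe \
  --topic edge._e59e7552aee847f2876378d36a4678ea._44b4e6be1f78ffb8ee7a0225f2c23266.prometheus

# Consume from the beginning to confirm whether any messages were produced.
kafka-console-consumer.sh --bootstrap-server kafka:9092 --from-beginning \
  --topic edge._e59e7552aee847f2876378d36a4678ea._44b4e6be1f78ffb8ee7a0225f2c23266.prometheus
```

In librdkafka, a partition count dropping to 0 generally means a metadata refresh reported the topic as unknown, which can happen when the topic was auto-created but the broker's metadata has not yet propagated, or when the producer lacks permission to see it.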

With debug logging enabled I can see the metrics in the logs without any problem:

prometheus-kafka-adapter  | {"fields.time":"2024-02-10T10:35:34Z","ip":"172.17.0.11","latency":7796942,"level":"info","method":"POST","msg":"","path":"/receive","status":200,"time":"2024-02-10T02:35:34-08:00","user-agent":"Prometheus/2.36.2"}
prometheus-kafka-adapter  | {"level":"debug","msg":"","time":"2024-02-10T02:35:34-08:00","var":{"timeseries":[{"labels":[{"name":"__name__","value":"container_network_transmit_packets_dropped_total"},{"name":"container_label_com_docker_compose_config_hash","value":"13be5be05e66adf75f86319d56c52008c9bed89eab3f44923d00f844a2b71604"},{"name":"container_label_com_docker_compose_container_number","value":"1"},
...

Any ideas?

Pucua commented 6 months ago

After reducing the amount of data I got this error: `Failed to obtain reader, failed to marshal fields to JSON, json: unsupported value: NaN`. I don't know how to fix it; maybe the metrics are not valid JSON values, or we need to transform part of the data first. Snipaste_2024-04-19_17-26-26