Telefonica / prometheus-kafka-adapter

Use Kafka as a remote storage database for Prometheus (remote write only)
Apache License 2.0

remote write size larger than 104857600 #106

Open jerryum opened 1 year ago

jerryum commented 1 year ago
[2023-02-08 19:56:35,993] WARN [SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from /10.138.0.12 (channelId=10.32.2.15:9092-10.138.0.12:37806-76); closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1347375956 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:105)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
        at kafka.network.Processor.poll(SocketServer.scala:1055)
        at kafka.network.Processor.run(SocketServer.scala:959)
        at java.base/java.lang.Thread.run(Thread.java:829)

This is what I received when I connected prometheus-kafka-adapter to Prometheus. I modified the max receive size of Kafka to be larger than 1347375956, but I still get the same error. Any advice is welcome!
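An aside that may help diagnose errors like this one (my reading, not from the thread): when `InvalidReceiveException` reports an absurdly large "size", it is often because the broker read the first four bytes of a non-Kafka connection as a big-endian length prefix. Decoding the reported number back into bytes shows what actually arrived on the wire:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeSize turns the bogus "size" from InvalidReceiveException back
// into the four raw bytes the broker interpreted as a length prefix.
func decodeSize(size uint32) string {
	var buf [4]byte
	binary.BigEndian.PutUint32(buf[:], size)
	return string(buf[:])
}

func main() {
	fmt.Println(decodeSize(1347375956)) // prints: POST
}
```

Here 1347375956 is 0x504F5354, the ASCII bytes "POST" — consistent with an HTTP client talking to the broker's port 9092 directly (note the `:9092` in the channelId). If that is what is happening, it may be worth double-checking that Prometheus's remote_write URL points at the adapter's HTTP endpoint rather than at the Kafka broker itself.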

palmerabollo commented 1 year ago

Hi @jerryum, that's a big message. I think it has nothing to do with prometheus-kafka-adapter, but with Kafka config itself. Could you try to increase the message.max.bytes in your Kafka brokers? Is that what you changed?
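For reference, a hedged sketch of the size-related settings that usually have to move together when raising Kafka's message limits (values below are illustrative, not recommendations; the topic name `metrics` is a placeholder). Note also that the 104857600 ceiling in the log matches the default of the broker's `socket.request.max.bytes`, a separate limit from `message.max.bytes`:

```shell
# Broker-wide message ceiling (server.properties, needs a broker restart):
#   message.max.bytes=10485760
# Socket-level receive limit that produced the 104857600 in the log:
#   socket.request.max.bytes=104857600

# Per-topic override, applied without a restart:
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name metrics \
  --add-config max.message.bytes=10485760

# Related limits so large messages can still be replicated and consumed:
#   replica.fetch.max.bytes=10485760     (broker)
#   max.partition.fetch.bytes=10485760   (consumer)
```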

jerryum commented 1 year ago

Yes, that's what I did... I couldn't find a solution, so I forked the repo and modified the adapter to produce to two different Kafka topics, split by exporter, to reduce the message size. There are too many pod metrics, so I separated them: one topic for the pod metrics and another topic for the rest.
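The split jerryum describes could be sketched roughly like this (a minimal illustration, not the actual fork; the label and topic names `kubernetes-pods`, `metrics-pods`, and `metrics-other` are hypothetical):

```go
package main

import "fmt"

// topicFor routes a metric batch to a topic based on its labels, so that
// high-volume pod metrics don't inflate a single topic's message size.
func topicFor(labels map[string]string) string {
	if labels["job"] == "kubernetes-pods" { // hypothetical exporter job name
		return "metrics-pods"
	}
	return "metrics-other"
}

func main() {
	fmt.Println(topicFor(map[string]string{"job": "kubernetes-pods"}))
	fmt.Println(topicFor(map[string]string{"job": "node-exporter"}))
}
```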

roshan989 commented 9 months ago

Hi @jerryum,

I faced a similar issue with Spark writes. I believe you may need to adjust the producer properties, specifically max.request.size. Please take a look at this resource: How to Send Large Messages in Apache Kafka.

You might need to make changes to the producer configuration in the adapter code or tweak some settings in the configuration. I'll update you once I find the necessary changes.
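One caveat to the suggestion above (my note, hedged): the property name depends on which client the producer uses. `max.request.size` is the Java client's name; if the adapter's producer is based on librdkafka (as confluent-kafka-go is), the equivalent producer property is `message.max.bytes`:

```shell
# Java Kafka producer:
#   max.request.size=10485760
# librdkafka-based producers (e.g. confluent-kafka-go):
#   message.max.bytes=10485760
# Either way, the broker/topic limits must be raised to match,
# or the broker will reject the larger requests.
```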