Open jerryum opened 1 year ago
Hi @jerryum, that's a big message. I think it has nothing to do with prometheus-kafka-adapter, but with the Kafka config itself. Could you try increasing message.max.bytes on your Kafka brokers? Is that what you changed?
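For reference, a minimal sketch of the broker-side settings involved. The property names are standard Kafka broker/topic configs; the values and the topic name "metrics" are illustrative assumptions, not a recommendation:

```properties
# server.properties (illustrative values)
message.max.bytes=10485880          # largest record batch the broker will accept
replica.fetch.max.bytes=10485880    # keep >= message.max.bytes so replicas can fetch large batches
```

The same limit can be raised per topic (note the topic-level name is max.message.bytes, reversed from the broker-level one):

```shell
# hypothetical topic name "metrics"
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name metrics \
  --add-config max.message.bytes=10485880
```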
Yes, that's what I did... I couldn't find a solution, so I forked the repo and modified the adapter to write to two different Kafka topics, split by exporter, to reduce the message size. The pod metrics are by far the most numerous, so I separated them: one topic for pod metrics and another topic for the rest of the metrics.
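A similar split can also be done without forking, by pointing Prometheus at two adapter instances and filtering with write_relabel_configs. A minimal sketch, assuming hypothetical adapter service names and a `job` label value of "kubernetes-pods" for pod metrics (both are assumptions about this setup):

```yaml
# prometheus.yml -- illustrative routing, not the poster's actual config
remote_write:
  - url: "http://prometheus-kafka-adapter-pods:8080/receive"   # instance writing to the pod-metrics topic
    write_relabel_configs:
      - source_labels: [job]
        regex: "kubernetes-pods"
        action: keep
  - url: "http://prometheus-kafka-adapter-rest:8080/receive"   # instance writing to the catch-all topic
    write_relabel_configs:
      - source_labels: [job]
        regex: "kubernetes-pods"
        action: drop
```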
Hi @jerryum,
I faced a similar issue with Spark writes. I believe you may need to adjust the producer properties, specifically max.request.size. Please take a look at this resource: How to Send Large Messages in Apache Kafka.
You might need to change the producer configuration in the adapter code or tweak some of its settings. I'll update you once I find the necessary changes.
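For what it's worth, the settings the linked article discusses are Java producer properties; the values below are illustrative, not tuned recommendations:

```properties
# Java producer side (illustrative values)
max.request.size=10485880        # client-side cap on the size of a single produce request
compression.type=snappy          # compress payloads so fewer of them hit the size limits
```

Note that prometheus-kafka-adapter is a Go program built on confluent-kafka-go/librdkafka, where (if I remember correctly) the equivalent client-side knob is message.max.bytes in the producer's ConfigMap rather than max.request.size.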
This is what I received when I connected prometheus-kafka-adapter to Prometheus. I modified the max receive size of Kafka to be larger than 1347375956 but am still getting the same error. Any advice is welcome!