Open romankor opened 6 years ago
There is no way to change the limit unfortunately. We forked snap to get around this and just hacked in a higher limit.
The proper way to fix it would be to send a PR that fixes this issue: https://github.com/intelsdi-x/snap-plugin-lib-go/issues/43
There is a PR https://github.com/intelsdi-x/snap-plugin-lib-go/pull/89
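For context, the limit in question is gRPC's default 4 MB cap on inbound messages, which large metric batches can exceed. A minimal sketch of the kind of workaround described above, using the standard grpc-go server options (the exact place to apply them inside snap-plugin-lib-go may differ; `maxMsgSize` is an arbitrary example value):

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	// gRPC rejects inbound messages over 4 MB by default; a big
	// cluster's metric payloads can blow past that.
	const maxMsgSize = 64 * 1024 * 1024 // 64 MB, example only

	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}

	// Raise both the receive and send limits on the server side.
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(maxMsgSize),
		grpc.MaxSendMsgSize(maxMsgSize),
	)

	// ... register the plugin's gRPC services here, then:
	_ = srv.Serve(lis)
}
```

Note that the client side enforces its own 4 MB receive cap, so the consumer would also need `grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize))` when dialing.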
@daniellee Can you point me to the forked repository that you hacked? Or is it private?
We have an issue where the kubestate pod gets recycled every couple of minutes, and cluster metrics are not being sent, on a cluster of roughly 30 machines and ~1000 pods.
This is what I see in the log file.
I can't figure out a way to configure the max message size. Maybe you can shed some light on that? Thanks.
We have it running on our dev/qa cluster, which is much smaller, and it works there without any problems.