Closed grzesuav closed 2 years ago
Without this feature, the latest versions of kubernetes-event-exporter are not usable for us: almost no events make it into Elasticsearch...
The original repo already has an open pull request for this feature. Maybe the owner can add it here: https://github.com/opsgenie/kubernetes-event-exporter/pull/171
It is really useful for monitoring larger clusters with a high number of events.
Hi, I can manually check out that code to merge, but pinging @lobshunt just in case. If there's no response, I can merge it later today or tomorrow, if that's OK?
Seems like I cannot ping arbitrary people from another repository. So I'm merging this and some other PRs there manually today. Stay tuned.
I'm trying to come up with sensible defaults; any opinions? The defaults in rest/config.go are as follows:
```go
// QPS indicates the maximum QPS to the master from this client.
// If it's zero, the created RESTClient will use DefaultQPS: 5
QPS float32

// Maximum burst for throttle.
// If it's zero, the created RESTClient will use DefaultBurst: 10.
Burst int
```
These are the kinds of settings you should tune for your context. For new and small clusters, the defaults (DefaultQPS: 5 and DefaultBurst: 10) are enough. It's hard to pick a sensible default here because you just don't know in which context this tool will be used. In your place, I would keep the original defaults.
Makes sense. I'll note it clearly in the README to point out that you need to set these properly for large clusters.
I wondered whether this would solve the original issue reported in https://github.com/opsgenie/kubernetes-event-exporter/issues/159
I've seen the same: after some time the exporter just stops dumping events to sinks, usually in high-traffic clusters. Restarting gets it dumping again.
Hi y'all, I applied the solution proposed on https://github.com/opsgenie/kubernetes-event-exporter/pull/171 to my fork and it fixed the problem for me. I wonder if we can apply the same here since I don't want to keep a fork for it :grimacing: I can do the PR if needed.
Sorry for the long release delay. I merged the changes and the release will be available soon!
Just found this new active fork. I am glad to see this fix finally got merged. 🎉
I was going to add this too! I'm glad to see this.
I see throttling messages; it would be good to be able to tune that setting.