Closed TakumaNakagame closed 5 years ago
@TakumaNakagame
Thanks for reporting this. It looks like a goroutine leak. I'll fix it ASAP.
@yamamoto-febc, thank you for the response. Understood! I'd appreciate the fix. Thank you, as always, for your help!
I merged #19, which includes improved goroutine handling.
The following image compares the number of goroutines before and after the merge.
After merging, the number of leaked goroutines decreased (and memory usage is probably reduced as well).
UPDATE
The latest master still has some unfixed bugs, so please wait a while until #20 is fixed.
@TakumaNakagame
We released v0.7.0. https://github.com/sacloud/sakuracloud_exporter/releases/tag/0.7.0
It includes several fixes for leaked goroutines. Please try it.
@yamamoto-febc
Thank you for the release! I'll try it right away. Very helpful!
@yamamoto-febc
I tried it, and the memory growth is now suppressed!
Thank you for the fix. This issue can be closed.
I'm going to close this. Feel free to re-open if this problem happens again.
The exporter is regularly OOMKilled. The Pod's
resources.limits.memory
is set to 100 MiB. The Pod's memory usage periodically grows up to 100 MiB, and then the Pod is killed. Checking the Pod's memory usage in Prometheus shows that it keeps increasing until the Pod is OOMKilled at around 100 MiB.
Memory Usage Graph
container_memory_usage_bytes{container="sakuracloud-exporter"}
Pod Details
kubectl get pod -o yaml sakuracloud-exporter
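For context, the memory limit described above corresponds to a Pod spec roughly like the following. This is a minimal sketch; the container name and image tag are assumptions, not taken from the actual manifest:

```yaml
# Fragment of the Pod spec (hypothetical; only the limit value is from this report)
spec:
  containers:
    - name: sakuracloud-exporter
      image: sacloud/sakuracloud_exporter:latest
      resources:
        limits:
          memory: 100Mi   # Pod is OOMKilled when usage reaches this limit
```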