IIILSW closed this issue 1 year ago
Good catch and next-level debugging!! Do you think that you can open a PR to the prometheus_metrics_core lib with your changes? Otherwise, another option would be to run the actual flushing inside of a Task. Once that Task terminates, the heap memory will be reclaimed. That way you can avoid the prometheus_metrics_core PR.
Thoughts?
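For context, the Task-based idea works because a short-lived process takes its whole heap (and its references to large binaries) with it when it exits, so nothing accumulates on the long-lived flusher's heap. A rough sketch of what that could look like, with illustrative names (`do_flush/1` and `schedule_next_flush/0` are placeholders, not prom_ex functions):

```elixir
def handle_info(:flush, state) do
  # Run the actual flush in a throwaway process; when it exits,
  # its heap and any ref-counted binary references are released.
  {:ok, _pid} =
    Task.start(fn ->
      do_flush(state.table)
    end)

  schedule_next_flush()
  {:noreply, state}
end
```

Since the flush result is not needed by the caller, `Task.start/1` (fire-and-forget) is enough here; `Task.async/1` would require an `await` and keep a reply message around.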
Can you test out this branch and see if it fixes your problem? https://github.com/akoutmos/prom_ex/pull/200
Thank you for your reply! I tested this approach yesterday and it worked fine. Before answering, I was going to set up a more production-like test environment and investigate how large binary construction in TelemetryMetricsPrometheus.Core.Exporter affects memory allocation. I'll test it and report back with the results.
Your suggestion seems reasonable, and it is at least a good first step that can be taken soon. I closed the MR in prometheus_metrics_core. Thank you again!
Sounds good. I'll close this PR for now and merge in #200 in that case. Thanks!
Change description
We have observed high memory consumption in the ETSCronFlusher process heap, along with a large number of attached ref binaries.
In the course of our experiments, abandoning the use of Exporter and putting the process into hibernation after cleanup allowed us to reduce memory consumption under load by ~70 MB.
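The hibernation step described above can be sketched as follows. Returning `:hibernate` from a GenServer callback forces a full garbage collection and shrinks the process heap to a minimum until the next message arrives, which releases both heap garbage and references to large binaries accumulated during the flush. The function names below are placeholders, not the actual prom_ex implementation:

```elixir
def handle_info(:flush, state) do
  # flush_ets_table/1 stands in for the actual ETS cleanup logic
  flush_ets_table(state.table)
  schedule_next_flush()

  # Hibernate after cleanup: triggers a full GC and compacts the heap
  {:noreply, state, :hibernate}
end
```

Hibernation trades a small CPU cost (the forced GC and heap compaction) for a smaller resident heap, which is a reasonable fit for a process that only wakes periodically to flush.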
What problem does this solve?
Issue number: #198
Example usage
Additional details and screenshots
For now, this uses telemetry_metrics_prometheus_core from my fork, because it also requires changes.
Checklist