@rebelsowl does the leak also occur on v0.8.3?
It occurred in v0.8.0. I am trying v0.8.3 now; I will let you know in a few hours.
Thank you! If you can identify the version in which the leak began that would save a lot of time.
Another useful piece of data would be to go to <host>:9182/debug/pprof/heap, save the output file, and share it here. That will give us counters on what memory is in use and where there are many allocations.
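For reference, a minimal sketch of capturing and inspecting the profile, assuming the exporter is listening on the default port 9182 and that a Go toolchain is available on the machine doing the analysis:

```powershell
# Save the heap profile to a file (adjust the host if the exporter is remote)
Invoke-WebRequest -Uri http://localhost:9182/debug/pprof/heap -OutFile heap.out

# Show the top allocation sites from the saved profile (requires Go installed)
go tool pprof -top heap.out
```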
I have been running v0.8.3 for 4 hours. It looks like there is no memory problem.
I will run this version for a while; after that I will switch to v0.9.0 and then upload the heap file. This is the heap file for v0.8.3.
Thanks @rebelsowl! The 0.8.3 dump looks fine, as you say. Nothing unexpected. Let me know when you have one for 0.9 and I can compare.
Hello @carlpett, I installed v0.9.0 again on my local computer but couldn't reproduce the leak for 4-5 days. I then started v0.9.0 on the server, and its memory usage looks fine too.
Here is heap from server: heap_prod.gz
Hi,
Same issue here. It appears that when the vmware collector is enabled, scraping always times out. I even tried a browser on localhost (http://localhost:9100/metrics) -> timed out. I have to uninstall with msiexec and then reinstall without vmware (it's an MSSQL Server VM on a VMware hypervisor with the VMware agent installed); the commands are sketched after the collector lists below. This morning wmi_exporter consumed about 650MB of memory and did not respond. I had to uninstall and reinstall without the vmware collector.
ENABLED_COLLECTORS=mssql,process,vmware,os,cpu,cs,logical_disk,net,service,system,tcp -> scraping timed out
ENABLED_COLLECTORS=mssql,process,os,cpu,cs,logical_disk,net,service,system,tcp -> no issue.
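For anyone applying the same workaround, a rough sketch of the reinstall, assuming the installer file is named wmi_exporter.msi (the MSI accepts the collector list via the ENABLED_COLLECTORS property):

```powershell
# Remove the existing installation (assumes the original MSI is still available)
msiexec /x wmi_exporter.msi /quiet

# Reinstall with the vmware collector left out of the list
msiexec /i wmi_exporter.msi ENABLED_COLLECTORS=mssql,process,os,cpu,cs,logical_disk,net,service,system,tcp /quiet
```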
@carlpett I've got the same issue; on one server wmi_exporter consumes 3GB of RAM.
Running settings:
"C:\Program Files\wmi_exporter\wmi_exporter.exe" --log.format logger:eventlog?name=wmi_exporter --collectors.enabled cpu,cs,iis,logical_disk,memory,net,os,system,tcp,vmware,textfile,netframework_clrexceptions,netframework_clrlocksandthreads,netframework_clrmemory,netframework_clrjit,service,process --telemetry.addr :9182 --collector.textfile.directory C:\custom_metrics\ --collector.process.processes-where="Name LIKE 'w3wp.exe' OR Name LIKE 'Fortis.%'" --collector.service.services-where="Name LIKE '%BackgroundJobs%'"
I also have a healthy node with the same running settings, but it exposes far more metrics. metrics_healthy.zip metrics_sick.zip
Thanks @AntonSmolkov! Could you also upload the goroutine dump? You can find it here: <host>:9182/debug/pprof/goroutine
It looks like it could be related to #446
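If the problem comes back, a small sketch of saving that dump from the command line, assuming PowerShell on the affected host (the debug=1 parameter asks pprof for a human-readable dump):

```powershell
# Fetch a plain-text goroutine dump from the exporter's pprof endpoint
Invoke-WebRequest -Uri "http://localhost:9182/debug/pprof/goroutine?debug=1" -OutFile goroutines.txt
```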
Unfortunately, I can't do that yet. This is a production server, so I had to disable the vmware collector and restart wmi_exporter yesterday. After 14 hours, memory consumption is still fine (30MB).
BTW, I think it's worth mentioning that this server has been under high CPU load for a while. Maybe that somehow caused timeouts while gathering data from WMI.
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.
I am using v0.9.0 as a service, running with the default configuration. On a remote server it was using 3GB of RAM. Now I have tried that version on my local computer; it has been running for 2 hours and is using 61MB of RAM. Do we have a solution for this?