Closed: ATLJLawrie closed this issue 6 years ago.
Did you check GLOBAL__SCRAPE_INTERVAL from http://monitor.dockerflow.com/config/? I think that might be what you're looking for.
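Something like this is how it can look in a stack file. This is only a minimal sketch: the image tag, service and network names, and the interval values are illustrative, and it assumes the `GLOBAL__*` environment variables are mapped into the `global` section of the generated Prometheus configuration.

```yaml
# Minimal sketch of a monitor service definition (Docker stack file).
# Assumes GLOBAL__* environment variables end up in the `global` section
# of the generated Prometheus config.
version: "3.3"

services:
  monitor:
    image: dockerflow/docker-flow-monitor
    environment:
      # Intended result in prometheus.yml:
      #   global:
      #     scrape_interval: 30s
      #     scrape_timeout: 10s
      - GLOBAL__SCRAPE_INTERVAL=30s
      - GLOBAL__SCRAPE_TIMEOUT=10s
    networks:
      - monitor
    ports:
      - 9090:9090

networks:
  monitor:
    external: true
```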
Sorry, I should have been clearer. That is what I have done for now to resolve the issue. I opened this more as a possible enhancement request: it would be useful if each target's scrape interval and timeout could be set with labels, in the same manner that the port and endpoint are.
It's done and available in release 18.02.10-50. Please try it out and let me know if it works as you expected.
Finally got a chance to test this on 18.02.21-52 and it's working great.
Can anyone test if this feature is working properly?
I wasn't sure whether this needed to go here or with the swarm-listener.
With the size of our cluster, using a global config of 30-second scrapes with 10-second timeouts, cAdvisor can't seem to keep up with Prometheus. Even if we turn off all the metrics that can be disabled via '-disable_metrics=tcp,udp,disk,network', the /metrics endpoint is about 2 MB. If I change the global settings to a 60-second interval with a 30-second timeout, things seem OK, but it would be nice to control this more granularly (see the sketch below).
What is your opinion on supporting something like com.df.scrapeInterval and com.df.scrapeTimeout?
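For illustration, this is roughly what I have in mind for our cAdvisor service. It is only a sketch: the `com.df.scrapeInterval` and `com.df.scrapeTimeout` labels are the proposed enhancement (their values and format are examples), while the other `com.df.*` labels mirror the existing scrape labels, and the rest of the cAdvisor definition is trimmed for brevity.

```yaml
# Sketch of the proposed per-target labels on a cAdvisor service.
# com.df.notify and com.df.scrapePort already exist; com.df.scrapeInterval
# and com.df.scrapeTimeout are the labels proposed in this issue.
# (cAdvisor volumes and other options omitted for brevity.)
version: "3.3"

services:
  cadvisor:
    image: google/cadvisor
    command: -disable_metrics=tcp,udp,disk,network
    networks:
      - monitor
    deploy:
      mode: global
      labels:
        - com.df.notify=true
        - com.df.scrapePort=8080
        # Proposed: override the global interval/timeout for this target only
        - com.df.scrapeInterval=60s
        - com.df.scrapeTimeout=30s

networks:
  monitor:
    external: true
```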