anthonyra opened this issue 7 months ago
Maybe it is too late for you, but...
The only thing you have to do is specify the host:port of the exporter you want to scrape. Relabeling is only required if you have "pollers" (like blackbox_exporter) that act as a central point for scraping.
```yaml
scrape_configs:
  - job_name: node
    proxy_url: http://proxy:8080/
    static_configs:
      - targets: ['client:9100']     # presuming the FQDN of the client is "client" and the exporter listens on port 9100
      - targets: ['win_client:9182']
  - job_name: job2
    proxy_url: http://proxy:8080/
    static_configs:
      - targets: ['client:port']     # presuming the FQDN of the client is "client"
```
This is equivalent to running the following from a shell:

```sh
curl -v --proxy http://proxy:8080 \
  -H "X-Prometheus-Scrape-Timeout-Seconds: 5" \
  http://client:9100/metrics

curl -v --proxy http://proxy:8080 \
  -H "X-Prometheus-Scrape-Timeout-Seconds: 5" \
  http://win_client:9182/metrics
```
I've looked around to see if there is documentation on this, but couldn't find anything concrete with regard to PushProx. One of the suggested approaches would be to use `relabel_config`, which drops everything that doesn't match `source_labels`; however, I'm not sure whether those labels are returned when using the `http_sd_configs` approach. Each metric collector will most likely need to be a separate job (to ensure that all targets have the specific metrics for collection), but would it make sense to use `relabel_config`, or maybe to implement a filter query parameter on the `http://localhost:8080/clients` endpoint?
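For illustration, here is a minimal sketch of the `relabel_config` approach, assuming the PushProx `/clients` endpoint returns each registered client as a target (in which case its FQDN lands in the `__address__` label, which is always available during relabeling). The `node.*` regex is a hypothetical placeholder for whatever naming convention your clients follow, not something PushProx provides:

```yaml
scrape_configs:
  - job_name: node
    proxy_url: http://proxy:8080/
    http_sd_configs:
      - url: http://localhost:8080/clients
    relabel_configs:
      # Keep only targets whose address matches this job's naming
      # convention; all other discovered clients are dropped.
      # "node.*" is an assumed pattern for your environment.
      - source_labels: [__address__]
        regex: 'node.*'
        action: keep
```

Since `__address__` is set for every discovered target regardless of the SD mechanism, this should work even if `/clients` returns no custom labels; a filter query parameter on the endpoint would just move the same filtering server-side.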