cecilwei opened 5 years ago
could you try with this instead: command: "-varz http://127.0.0.1:8222"
for the exporter?
Thanks for your reply. I tried it, but it didn't help. Could it be related to it running in a Docker environment?
Maybe you need to add some exposed ports to the config so that it is reachable from the other containers.
Same issue here @cecilwei, did you manage to solve this problem? Thanks.
You could try exposing the port, e.g.
expose:
- "7777"
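For context on that suggestion: `expose` only makes a port reachable from other containers on the same Compose network, while `ports` additionally publishes it on the host. A minimal sketch of the difference (service names and image tags here are illustrative, not from the reporter's actual file):

```yaml
services:
  nats:
    image: nats:latest
    expose:
      - "8222"        # monitoring port, reachable by other containers only
  nats-exporter:
    image: natsio/prometheus-nats-exporter
    ports:
      - "7777:7777"   # exporter metrics, also published on the host
```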
@ColinSullivan1 thank you for the answer. I did that, but prometheus-nats-exporter still doesn't see metrics from NATS.
Hi,
I have both NATS server 2.0.0 and exporter 0.4.0 running on the same server. However, the metrics reported by the exporter don't seem to include variables with the 'gnatsd_varz' prefix, and thus the dashboard shows nothing.
The following is my Docker Compose setup. Is there anything I missed? Any suggestions are appreciated. varz.json.txt metrics.txt
Hello, I think you should use something like this:
nats-exporter:
  image: synadia/prometheus-nats-exporter:0.6.2
  restart: unless-stopped
  command: "-connz -varz -channelz -serverz -subz http://127.0.0.1:8222"
  ports:
    - 127.0.0.1:7777:7777
Same problem here.
I faced the same problem: if NATS took a little longer to start than the exporter takes to begin polling, we never saw the NATS metrics again. Then I found this error in the logs: [ERR] Error loading metric config from response: Get "http://localhost:8222/routez": dial tcp [::1]:8222: connect: cannot assign requested address. I think the problem is in this line of code: it never retries on any error other than "connection refused": https://github.com/nats-io/prometheus-nats-exporter/blob/master/collector/collector.go#L244
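The fix being suggested can be sketched as a string-matching helper that treats both error texts as retryable. This is illustrative only: the function name and structure are not the exporter's actual code, just a sketch of the extra check described above.

```go
package main

import (
	"fmt"
	"strings"
)

// isRetryable reports whether the error text indicates the NATS
// monitoring endpoint is simply not up yet, so polling should retry.
// "cannot assign requested address" is the additional case proposed
// in this thread, alongside the existing "connection refused" check.
func isRetryable(errMsg string) bool {
	retryable := []string{
		"connection refused",
		"cannot assign requested address",
	}
	for _, s := range retryable {
		if strings.Contains(errMsg, s) {
			return true
		}
	}
	return false
}

func main() {
	// The exact error string from the logs quoted above.
	fmt.Println(isRetryable(`dial tcp [::1]:8222: connect: cannot assign requested address`))
	// An unrelated error should still fail fast rather than retry.
	fmt.Println(isRetryable(`unexpected EOF`))
}
```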
A workaround of delaying the NATS exporter pod until NATS is reachable fixed it for me. I think adding another check for the "cannot assign requested address" error should fix it.
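That startup-ordering workaround can also be expressed declaratively in Compose. A sketch, assuming service names of my own choosing and the nats:alpine variant (the scratch-based nats image has no wget for the health check); NATS exposes a /healthz endpoint on the monitoring port:

```yaml
services:
  nats:
    image: nats:alpine
    command: -m 8222
    healthcheck:
      test: ["CMD", "wget", "-q", "-O-", "http://localhost:8222/healthz"]
      interval: 5s
      timeout: 3s
      retries: 5
  nats-exporter:
    image: natsio/prometheus-nats-exporter
    command: "-varz http://nats:8222"
    depends_on:
      nats:
        condition: service_healthy   # wait until the monitoring port answers
```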
Same problem here
nats-streaming:
  image: nats-streaming:latest
  container_name: nats-streaming
  hostname: nats
  restart: unless-stopped
  ports:
Getting the same metrics out as the OP
I know this post is from a long time ago, but maybe I'll help if someone is still looking for an answer.
services:
  n1.example.net:
    container_name: n1
    image: nats:latest
    entrypoint: /nats-server
    command: --name N1 --js --debug --trace --sd /data -p 4222 -m 8222
    networks:
      - test_network
    ports:
      - 4222:4222
      - 6222:6222
      - 8222:8222
    volumes:
      - ./jetstream-cluster/n1:/data
  prometheus-nats-exporter:
    image: synadia/prometheus-nats-exporter
    hostname: prometheus-nats-exporter
    command: "-connz -varz -channelz -serverz -subz http://host.docker.internal:8222"
    ports:
      - "7777:7777"
    networks:
      - test_network
This compose config fixes the problem of missing metrics. The problem is that inside the Docker environment the exporter doesn't have access to the host machine's exposed ports. You need to use either http://host.docker.internal or an address with the container name, e.g. http://n1:8222 in my example.
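One more piece for anyone wiring this up end to end: once the exporter is reachable, Prometheus still needs to scrape it on port 7777. A minimal scrape config sketch, where the job name is an assumption and the target uses the exporter's service name from the compose file above:

```yaml
scrape_configs:
  - job_name: "nats"
    static_configs:
      - targets: ["prometheus-nats-exporter:7777"]   # exporter's HTTP endpoint
```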
@nozbieg thanks for the update. By the way, we moved away from synadia/prometheus-nats-exporter some time ago, so new images can be found under the natsio/prometheus-nats-exporter org: https://hub.docker.com/layers/natsio/prometheus-nats-exporter/0.12.0/images/sha256-83e157c6f2b2c8c29abb4171d6b99bb9b2a733fc158afffbb388e671de95da5c?context=explore
Yeah, I'm working with it right now and changed it too just moments ago. That synadia image had some problems with -jsz.