ofesseler / gluster_exporter

Gluster Exporter for Prometheus
Apache License 2.0

Incorrect volume name or error "no Volumes were given" #11

Closed yongzhang closed 7 years ago

yongzhang commented 7 years ago

Hi,

Can anyone explain why all of my volume names are reported as "devops-registry" in the metrics?

gluster_exporter version: v0.2.6
GlusterFS server version: 3.10.0

# HELP gluster_node_size_free_bytes Free bytes reported for each node on each instance. Labels are to distinguish origins
# TYPE gluster_node_size_free_bytes counter
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.0427400192e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898015232e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.04273997824e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898019328e+10
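
The symptom can be checked mechanically: every scraped line carries the same volume label regardless of the brick path. A small stdlib-only Go snippet (not part of the exporter, just a sketch) that extracts the volume label from each metric line, using two sample lines copied from the output above:

```go
package main

import (
	"fmt"
	"regexp"
)

// extractVolumes pulls the volume="..." label out of each
// Prometheus exposition-format line, skipping lines without one.
func extractVolumes(lines []string) []string {
	re := regexp.MustCompile(`volume="([^"]+)"`)
	var out []string
	for _, l := range lines {
		if m := re.FindStringSubmatch(l); m != nil {
			out = append(out, m[1])
		}
	}
	return out
}

func main() {
	lines := []string{
		`gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09`,
		`gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898015232e+10`,
	}
	// prints: [devops-registry devops-registry]
	fmt.Println(extractVolumes(lines))
}
```

Distinct brick paths, one volume name: the label is wrong for every brick except the devops-registry one itself.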

Here's my `gluster volume info` output:

Volume Name: devops-influxdb
Type: Replicate
Volume ID: 2803fc56-cdc6-469e-a57e-7982fc20023c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-influxdb/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-influxdb/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Volume Name: devops-prometheus
Type: Replicate
Volume ID: 89c44318-e975-408d-9a6c-d15e44fddd0d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-prometheus/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-prometheus/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Volume Name: devops-registry
Type: Replicate
Volume ID: 2bb07777-248d-46aa-863a-dad64a5207d0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-registry/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-registry/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Error logs from syslog:

Mar 27 15:25:03 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:03+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:08 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:08+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:32 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:32+08:00" level=warning msg="no Volumes were given." source="main.go:286"
ofesseler commented 7 years ago

@hiscal2015 thanks for reporting this error; it seems that gluster_node_size_free_bytes and gluster_node_size_total_bytes are affected.

The warning message is more or less a reminder that you're implicitly querying all volumes.
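
The reported symptom, every brick tagged with a single volume's name, is consistent with a labeling loop that reuses one name across all bricks instead of each brick's own volume. A minimal Go sketch of that buggy-vs-fixed pattern; the data shapes and function are hypothetical illustrations, not the exporter's actual code:

```go
package main

import "fmt"

// Hypothetical data shapes for illustration only.
type brick struct{ host, path string }
type volume struct {
	name   string
	bricks []brick
}

// labelBricks maps each brick path to a volume label.
// With buggy=true it mimics the reported symptom: one name
// (here, the last volume seen) is reused for every brick.
func labelBricks(vols []volume, buggy bool) map[string]string {
	labels := make(map[string]string)
	var last string
	for _, v := range vols {
		last = v.name
	}
	for _, v := range vols {
		name := v.name // fixed: each brick gets its own volume's name
		if buggy {
			name = last // buggy: every brick gets the same name
		}
		for _, b := range v.bricks {
			labels[b.path] = name
		}
	}
	return labels
}

func main() {
	vols := []volume{
		{name: "devops-influxdb", bricks: []brick{{"10.10.0.100", "/glusterfsvolumes/devops/devops-influxdb/brick"}}},
		{name: "devops-registry", bricks: []brick{{"10.10.0.100", "/glusterfsvolumes/devops/devops-registry/brick"}}},
	}
	fmt.Println("buggy:", labelBricks(vols, true))
	fmt.Println("fixed:", labelBricks(vols, false))
}
```

In the buggy variant every path maps to "devops-registry", matching the scrape output in the report; the fix is simply to take the label from the volume currently being iterated.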

yongzhang commented 7 years ago

@ofesseler Looking forward to v0.2.7; this is a wonderful exporter!

yongzhang commented 7 years ago

@ofesseler Thanks for fixing this. Can you upload the latest release to the "release" tab? It seems I'm having some issues building it... Thanks.

ofesseler commented 7 years ago

I made a new release.