Diamond is a Python daemon that collects system metrics and publishes them to Graphite (and other backends). It can collect CPU, memory, network, I/O, load, and disk metrics. Additionally, it provides an API for implementing custom collectors that gather metrics from almost any source.
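For context, a custom collector is a small class built on that API. The sketch below is illustrative only: the collector name and metric are made up, but the Collector base class, the collect() hook, and publish() are the documented extension points.

    import diamond.collector

    class ExampleCollector(diamond.collector.Collector):
        """Minimal custom collector: publishes a single constant metric."""

        def collect(self):
            # publish(metric_name, value) hands the sample to Diamond's
            # configured handlers (e.g. the Graphite handler).
            self.publish('example.answer', 42)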
I just fought with DiskSpaceCollector because it didn't report the disk usage for a fuse.glusterfs mounted directory.
The first problem (trivial) was that the default config lists "gluster" instead of "fuse.glusterfs". No problem, corrected.
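For illustration, the corrected entry would look something like this in the collector's config (the filesystems option is what DiskSpaceCollector uses to filter mount types; the file path and the rest of the list here are assumptions):

    # collectors/DiskSpaceCollector.conf
    enabled = True
    filesystems = ext3, ext4, xfs, fuse.glusterfs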
But even after that correction, the metrics were silently dropped.
I found that the if statement at diskspace.py:153 assumes the "device" starts with a '/'. Unfortunately, the device for a GlusterFS mount usually has the format
srv1[,srv2]:volume_name
It's possible to use
/srv1[,srv2]:volume_name
(with a leading '/'), but I think that's quite uncommon.
As a workaround I made that if a no-op (adding "(1==1) or" as the first condition), but it would be better to handle the GlusterFS case explicitly and/or log a meaningful message stating why a mountpoint is being discarded.
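To make the suggestion concrete, here is a minimal sketch of the kind of device filter described above and how it could be relaxed. The function and variable names are assumptions, not the actual diskspace.py code.

    def should_collect(device, fs_type, allowed_filesystems):
        """Decide whether a /proc/mounts entry should be reported."""
        if fs_type not in allowed_filesystems:
            return False
        if device.startswith('/'):
            # Original behaviour: local block devices such as /dev/sda1.
            return True
        if ':' in device:
            # Relaxation: network-style devices such as srv1,srv2:volume_name
            # (GlusterFS, NFS) that do not start with '/'.
            return True
        return False

    # Example: a GlusterFS mount now passes the filter.
    print(should_collect('srv1,srv2:volume_name', 'fuse.glusterfs',
                         {'ext4', 'xfs', 'fuse.glusterfs'}))  # True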
Is this still occurring with master? From reading the collector code, it looks like it should work. If it isn't, can you give sanitized output of /proc/mounts?
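For reference, a fuse.glusterfs entry in /proc/mounts generally looks like the line below (server, volume, and mount point are placeholders, and the mount options will vary):

    srv1:volume_name /mnt/volume_name fuse.glusterfs rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072 0 0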