prometheus / node_exporter

Exporter for machine metrics
https://prometheus.io/
Apache License 2.0

Feature adjustment --rootfs.path #1889

Open KlavsKlavsen opened 3 years ago

KlavsKlavsen commented 3 years ago

--rootfs.path ONLY filters filesystem metrics IF the given path is itself a mount, which is rather limiting. For example, I use node_exporter as a sidecar to monitor disk usage of PVs in containers: I mount the PVs at /mnt/$pv-name and would like to filter metrics to only those mounted under /mnt. That works if /mnt itself is a mount, but otherwise all filesystem metrics are simply dropped.
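A minimal sketch of the invocation described above (note: in current node_exporter releases the flag is spelled `--path.rootfs`, and the filesystem and diskstats collectors are enabled by default; the /mnt layout is the example from this comment):

```shell
# Sketch: each PV is mounted at /mnt/<pv-name> and node_exporter is
# pointed at /mnt. Flag is spelled --path.rootfs in current releases.
node_exporter \
  --path.rootfs=/mnt \
  --collector.filesystem \
  --collector.diskstats
```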

SuperQ commented 3 years ago

This seems a bit out of scope for the node_exporter. For monitoring Kubernetes Persistent Volumes, it's recommended to use kube-state-metrics.

EDIT: In addition, there are kubelet metrics for PVs.

KlavsKlavsen commented 3 years ago

kube-state-metrics does not monitor filesystem usage, so seeing whether a PV is close to full is not something kube-state-metrics delivers, and not something the Kubernetes API can deliver either. cAdvisor has this info (it gets it from the kubelet), but accessing it requires cluster-admin rights. So if you manage your own pods and aren't a cluster admin, you can't see this any other way (short of installing metricbeat and having it ship stats to e.g. Kafka).

SuperQ commented 3 years ago

Sorry, this is out of scope for the node_exporter. This is not a supported use case.

KlavsKlavsen commented 3 years ago

That's a shame. It works fantastically, and it is the only way to do this with a Prometheus setup when you're not a cluster admin. The only change needed to "make it prettier" is to fix rootfs.path so it no longer requires the given path to be a mount itself; I don't see why that should be a requirement. Just report filesystem metrics for everything below the given rootfs.path (including rootfs.path itself, IF that is a mount), instead of delivering no filesystem metrics at all when the given path is not a mount.

discordianfish commented 3 years ago

Not sure I understand. There should be one node-exporter per node, which gives you metrics for all volumes, including PVs. As a user (non-cluster-admin) you can get this data from Prometheus, or from the node-exporter directly if you run your own Prometheus server.

KlavsKlavsen commented 3 years ago

@discordianfish I am running on a cluster where I am NOT the cluster admin, so I cannot access the PersistentVolume filesystem metrics that the kubelet exposes (a pretty common situation in larger corporations, especially those running OpenShift).

To GET filesystem metrics for the PersistentVolumes in all my pods, without having to alter all the Docker images used there, I simply added a node_exporter sidecar with the system and diskstats collectors enabled (giving me I/O and filesystem metrics). It works beautifully and lets me collect metrics from my PVs' filesystems (via the pods that use them), which are then scraped by my team's Prometheus instance.
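A hedged sketch of what such a sidecar could look like (image tags, volume names, and mount paths here are illustrative, not from this thread):

```yaml
# Illustrative pod fragment: a node_exporter sidecar that sees the same
# PV as the application container, mounted under /mnt.
containers:
  - name: app
    image: my-app:latest            # hypothetical application image
    volumeMounts:
      - name: data
        mountPath: /data
  - name: node-exporter
    image: quay.io/prometheus/node-exporter:latest
    args:
      - --collector.filesystem
      - --collector.diskstats
    ports:
      - containerPort: 9100
        name: metrics
    volumeMounts:
      - name: data
        mountPath: /mnt/data        # PV visible to the exporter under /mnt
        readOnly: true
```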

roidelapluie commented 3 years ago

We do have --collector.filesystem.ignored-mount-points at your disposal; you could use an inverted pattern such as (?!/mnt).*
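One caveat with the inverted pattern: node_exporter compiles these flags with Go's RE2 regexp engine, which does not support lookaheads such as `(?!/mnt)`, so the exclusion may need to be written as an explicit alternation of the mount points to hide. A sketch (the exact mount-point list is illustrative):

```shell
# Exclude the usual host mount points explicitly, leaving /mnt/* visible.
# (RE2, used by node_exporter, has no negative lookahead.)
node_exporter \
  --collector.filesystem.ignored-mount-points='^/(dev|proc|sys|run|boot|var|etc)($|/)'
```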

KlavsKlavsen commented 3 years ago

@roidelapluie That looks interesting; I'll see if I can make that work. I just liked --rootfs.path because it also rewrote the path in the metric from /mnt/$pvname to just /$pvname (with --rootfs.path=/mnt), and I don't see why node_exporter should care whether the given rootfs IS a mount, ignoring the option when it isn't :(

roidelapluie commented 3 years ago

You can rename the path with metric relabel configs in Prometheus, if that is the concern.
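For example, a metric_relabel_configs rule along these lines could strip the /mnt/ prefix from the mountpoint label (a sketch; the job name is hypothetical):

```yaml
# Illustrative scrape-config fragment: rewrite mountpoint="/mnt/<pv>" to "/<pv>".
scrape_configs:
  - job_name: pv-sidecars          # hypothetical job name
    metric_relabel_configs:
      - source_labels: [mountpoint]
        regex: /mnt/(.*)
        target_label: mountpoint
        replacement: /$1
```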

KlavsKlavsen commented 3 years ago

@roidelapluie "reenable" ? you mean I can rewrite devicename in prometheus relabel - if I write rules to do it ? instead of simply making rootfs.path work - even IF the given path isn't a mount ?

roidelapluie commented 3 years ago

Yes. It seems that for 99.9% of users, enabling that use case would be error-prone.