prometheus / node_exporter

Exporter for machine metrics
https://prometheus.io/
Apache License 2.0

Metric was collected before with the same name and label values #2805

Open gnanasalten opened 1 year ago

gnanasalten commented 1 year ago

Host operating system: output of uname -a

Linux dc2cpoenrvmd534 3.10.0-1160.66.1.el7.x86_64 #1 SMP Wed May 18 16:02:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

node_exporter version: output of node_exporter --version

node_exporter, version 1.5.0 (branch: HEAD, revision: 1b48970ffcf5630534fb00bb0687d73c66d1c959) build user: root@6e7732a7b81b build date: 20221129-18:59:09 go version: go1.19.3 platform: linux/amd64

node_exporter command line flags

/usr/local/bin/node_exporter --collector.systemd --collector.sockstat --collector.filefd --collector.textfile.directory=/var/lib/node_exporter/
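
Context for readers: --collector.textfile.directory makes node_exporter expose every *.prom file under /var/lib/node_exporter/; if two files (or two lines in one file) emit the same metric name with identical label values, the registry rejects the whole gather with exactly the error below. A minimal sketch of the usual write pattern for such files (path and metric are illustrative, not from this report), writing to a temp file and renaming so a half-written file is never scraped:

# Hypothetical textfile update, showing the atomic write-then-rename
# pattern; the metric and path are examples only.
TEXTFILE_DIR=/var/lib/node_exporter
echo 'node_fstab_mount_status{filesystem="/boot"} 1' > "$TEXTFILE_DIR/fstab.prom.$$"
mv "$TEXTFILE_DIR/fstab.prom.$$" "$TEXTFILE_DIR/fstab.prom"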

node_exporter log output

Sep 15 02:57:37 xxxxxxxxx node_exporter: ts=2023-09-15T02:57:37.684Z caller=stdlib.go:105 level=error msg="error gathering metrics: 17 error(s) occurred:
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/boot\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/boot\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var/log\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var/log/audit\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/boot\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/home\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/opt\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/tmp\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var/tmp\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/dev/shm\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/home\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var/log\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var/tmp\" > untyped: } was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_fstab_mount_status\" { label:<name:\"filesystem\" value:\"/var/log/audit\" > untyped: } was collected before with the same name and label values"

Are you running node_exporter in Docker?

No

What did you do that produced an error?

Scraped from Prometheus.

What did you expect to see?

No error

What did you see instead?

The error shown in the log output above.

discordianfish commented 1 year ago

Can you provide your /etc/fstab and /proc/mounts?

gnanasalten commented 1 year ago

> Can you provide your /etc/fstab and /proc/mounts?

/etc/fstab

LABEL=img-rootfs / ext4 rw,relatime 0 1
LABEL=img-boot /boot ext4 rw,relatime 0 1
LABEL=fs_var /var ext4 rw,relatime 0 2
LABEL=fs_var_tmp /var/tmp ext4 rw,nosuid,nodev,noexec,relatime 0 2
LABEL=fs_var_log /var/log ext4 rw,relatime 0 3
LABEL=var_log_aud /var/log/audit ext4 rw,relatime 0 4
LABEL=fs_home /home ext4 rw,nodev,relatime 0 2
LABEL=fs_opt /opt ext4 rw,nodev,relatime 0 2
LABEL=fs_tmp /tmp ext4 rw,nodev,nosuid,noexec,relatime 0 2
tmpfs /dev/shm tmpfs nodev,nosuid,noexec 0 0

/proc/mounts

sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=3976556k,nr_inodes=994139,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,seclabel,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,seclabel,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,seclabel,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,seclabel,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,seclabel,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,seclabel,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,seclabel,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,blkio 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/ubuntu_vg-lv_root / ext4 rw,seclabel,relatime,data=ordered 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12112 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0
mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0
/dev/mapper/ubuntu_vg-lv_home /home ext4 rw,seclabel,nodev,relatime,data=ordered 0 0
/dev/vda1 /boot ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_opt /opt ext4 rw,seclabel,nodev,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var /var ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var_tmp /var/tmp ext4 rw,seclabel,nosuid,nodev,noexec,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var_log /var/log ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var_log_audit /var/log/audit ext4 rw,seclabel,relatime,data=ordered 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
/dev/mapper/ubuntu_vg-lv_tmp /tmp ext4 rw,seclabel,nodev,relatime,data=ordered 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=800892k,mode=700,uid=1000,gid=1000 0 0
gnanasalten commented 1 year ago

@discordianfish can you please help with this?

dongjiang1989 commented 11 months ago

@gnanasalten Can you provide your textfile in /var/lib/node_exporter/?

Maybe like: https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/fstab-check.sh
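
For reference, a rough sketch of what an fstab-check style textfile script can look like (this is not the upstream script verbatim): it emits one sample per unique /etc/fstab mountpoint, deduplicated with sort -u so node_exporter never sees the same label set twice.

#!/bin/sh
# Rough sketch, not the upstream fstab-check.sh verbatim: emit one
# node_fstab_mount_status sample per unique fstab mountpoint.
awk '$1 !~ /^#/ && NF >= 2 { print $2 }' /etc/fstab | sort -u |
while read -r mp; do
  if mountpoint -q "$mp"; then status=1; else status=0; fi
  echo "node_fstab_mount_status{mountpoint=\"$mp\"} $status"
done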

gnanasalten commented 11 months ago

node_fstab_mount_status{filesystem="/"} 1
node_fstab_mount_status{filesystem="/boot/efi"} 1
node_fstab_mount_status{filesystem="/home"} 1
node_fstab_mount_status{filesystem="/opt"} 1
node_fstab_mount_status{filesystem="/tmp"} 1
node_fstab_mount_status{filesystem="/var"} 1
node_fstab_mount_status{filesystem="/var/log"} 1
node_fstab_mount_status{filesystem="/var/log/audit"} 1
node_fstab_mount_status{filesystem="/var/tmp"} 1
node_fstab_mount_status{filesystem="/dev/shm"} 1
node_syslog_err_count 0
node_syslog_bad_block_count 0
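
The file above contains no duplicate series on its own, so the duplication has to come from somewhere else, e.g. a second *.prom file in the same directory carrying the same series. One quick, hedged check (directory path as in the command line above):

# Print any series (metric name + labels, value stripped) that
# appears more than once across all textfile-collector files:
grep -h '^node_' /var/lib/node_exporter/*.prom | sed 's/ [^ ]*$//' | sort | uniq -d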

dongjiang1989 commented 11 months ago

A different version of the fstab collection script may be in use.

@gnanasalten Can you provide your textfile script?

If you use the fstab-check.sh script, the mountpoint appears in a mountpoint label:

node_fstab_mount_status{mountpoint="xxx"} 1

SuperSandro2000 commented 5 months ago

node_exporter shouldn't fail that loudly when two mounts share the same path. That is a totally valid thing to do on Linux.
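
Duplicate mounts are easy to confirm from the mount table; any path printed by the check below is mounted more than once:

# List mountpoints that occur more than once in the mount table;
# overmounts and repeated bind mounts are legal on Linux.
awk '{ print $2 }' /proc/mounts | sort | uniq -d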

discordianfish commented 5 months ago

@SuperSandro2000 It should not, but I don't know if that is what is going on here.

jerviscui commented 5 months ago

A similar problem happened to me. I am using WSL2 with systemd enabled.

node_exporter starts and issues the following error:

ts=2024-04-19T05:09:48.730Z caller=stdlib.go:105 level=error msg="error gathering metrics: 6 error(s) occurred:
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:1}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:1}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:1}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:0}} was collected before with the same name and label values"

jerviscui commented 5 months ago

And no such directories exist on my system:

/run/desktop/...
/parent-distro/...
/mnt/host/...
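
Even though the paths are not resolvable from the shell, they evidently appear in the mount table node_exporter parses; a quick check (the exact mount-table file can differ per namespace and exporter version):

# The offending entries should still be visible in the mount table
# even if the directories cannot be opened from this namespace:
grep docker-desktop-bind-mounts /proc/mounts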

discordianfish commented 4 months ago

Hrm, label:{name:\"device\" value:\"none\"} looks suspicious. Is there anything else in the log that would point to an issue retrieving the device? @SuperQ any ideas?

cmg1986 commented 1 month ago

I am also facing the same issue, running node_exporter as a Docker container using this command:

docker run -d --net="host" --pid="host" -v "/:/host:ro,rslave" -v "/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro" quay.io/prometheus/node-exporter:latest --path.rootfs=/host --collector.systemd --collector.tcpstat --collector.meminfo_numa

Error:

ts=2024-08-07T17:29:22.159Z caller=stdlib.go:105 level=error msg="error gathering metrics: 7 error(s) occurred:
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_size_bytes\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:9.68421376e+08}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_free_bytes\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:9.68421376e+08}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_avail_bytes\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:9.68421376e+08}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_files\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:236431}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_files_free\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:236430}} was collected before with the same name and label values"
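
All seven duplicates are the same tmpfs series for /tmp, which suggests /tmp is mounted more than once in the host mount namespace the container sees via --path.rootfs=/host. A quick check from the host:

# Show every mount-table entry for /tmp; more than one line here
# would explain the duplicated tmpfs series:
awk '$2 == "/tmp"' /proc/mounts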

gnanasalten commented 1 month ago

This could be because of a time difference.

cmg1986 commented 1 month ago

@gnanasalten The time is the same on both the host machine and the Docker container. Or are you pointing to some other time difference?