shpokas opened 1 month ago
This patch suits my needs:
--- mdadm.orig 2024-05-31 08:52:52.153777132 +0300
+++ mdadm 2024-05-31 11:25:44.772243743 +0300
@@ -40,6 +40,10 @@
[[ "${mdadmArray}" =~ '/dev/md'[[:digit:]]+'p' ]] && continue
mdadmName="$(basename "$(realpath "${mdadmArray}")")"
+
+ # Ignore inactive arrays
+ [[ $(grep "^${mdadmName}" /proc/mdstat) =~ 'inactive' ]] && continue
+
mdadmSysDev="/sys/block/${mdadmName}"
degraded=$(maybe_get "${mdadmSysDev}/md/degraded")
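For anyone following along, the added line greps /proc/mdstat for the array's entry and skips the rest of the loop iteration when that entry contains "inactive". A minimal sketch of the same check in isolation, with a hypothetical array name, in case it helps with testing outside the script:

# Hypothetical array name; in the script it comes from basename/realpath of the array device.
mdadmName="md127"
# Skip arrays whose /proc/mdstat entry reports them as inactive.
if [[ $(grep "^${mdadmName}" /proc/mdstat) =~ 'inactive' ]]; then
    echo "skipping inactive array ${mdadmName}"
fi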
@shpokas Thanks. Tested that and it does not appear to break anything. What does /proc/mdstat look like there?
There are likely some other things that need to be cleaned up as well.
Derp! Sorry, missed that. You did. Thanks!
@shpokas
Currently pondering the best way to handle inactive arrays.
ls /sys/block/md0/slaves/
ls -l /sys/block/md0/
cat /sys/block/md0/md/level
cat /sys/block/md0/md/raid_disks
Sorry, those commands should be for md127.
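In case it saves a round trip, here are the same requests bundled for md127 (just a convenience wrapper around the commands above, nothing new assumed about the script):

ls /sys/block/md127/slaves/
ls -l /sys/block/md127/
for attr in level raid_disks; do
    echo "== md/${attr} =="
    cat "/sys/block/md127/md/${attr}"
done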
Currently thinking of just adding a counter for inactive arrays.
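A rough sketch of what that might look like, reusing the check from the patch above (variable names here are illustrative, not the script's actual ones):

inactiveCount=0
# Inside the per-array loop: count inactive arrays instead of only skipping them.
if [[ $(grep "^${mdadmName}" /proc/mdstat) =~ 'inactive' ]]; then
    inactiveCount=$((inactiveCount + 1))
    continue
fi
# After the loop, the total could be reported alongside the other metrics.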
Oops, looks like I opened another issue in the wrong place.
Now creating another one here; please feel free to close the duplicate.
The problem
I am trying to set up mdadm application monitoring. Running /etc/snmp/mdadm fails with: /etc/snmp/mdadm: line 53: (2 - ): syntax error: operand expected (error token is ")")
A debug run of the same script is attached: mdadm-debug-run.log
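For context on the error itself: this is the message bash prints when one operand of an arithmetic expansion is empty, which would happen if line 53 subtracts a value that was read back as an empty string (the variable names below are hypothetical, not the script's):

total=2
missing=""                          # e.g. a sysfs read that returned nothing
echo "$(( ($total - $missing) ))"   # -> (2 - ): syntax error: operand expected (error token is ")")

Defaulting empty reads to 0, e.g. ${missing:-0}, would sidestep the syntax error, though skipping or counting inactive arrays as discussed earlier in the thread is probably the cleaner fix.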
The monitored host has four disks in two software RAID arrays: two 512 GB SATA SSDs configured as RAID1 by Intel Matrix Storage RAID in EFI, and two 1 TB NVMe disks configured as RAID1 by the operating system.
The mdadm configuration, lshw output, and lsblk output are included below. Thanks for looking into this.
cat /etc/mdadm.conf
cat /proc/mdstat
mdadm --examine /dev/md/imsm
mdadm --examine /dev/md/0
mdadm --examine /dev/md/Volume0
lshw -class disk -class storage
lsblk