[Open] chanlists opened this issue 7 years ago
I think the problem comes from "Free" disks. From the sources:
```
# assume critical if Usage is not one of:
# - existing Array name
# - HotSpare
# - Rebuild
```
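In other words, any disk whose Usage column is neither a known array name, nor HotSpare, nor Rebuild gets flagged. Roughly, the logic is as follows (a simplified sketch, not the plugin's exact code; the variable names and sample values here are illustrative):

```perl
use strict;
use warnings;

# Simplified sketch of the usage classification quoted above; the
# variable names and values are illustrative, not the plugin's code.
my %arrays = ('Raid Set # 000' => 1);   # raid sets seen in "rsf info"
my $usage  = 'Free';                    # Usage column from "disk info"
my $status = 'OK';

if (exists $arrays{$usage}) {
    # disk is a member of a known raid set: OK
} elsif ($usage =~ /HotSpare/) {
    # hot spare: OK
} elsif ($usage =~ /Rebuild/) {
    # disk is rebuilding into an array: OK
} else {
    # anything else, including "Free", is assumed critical
    $status = 'CRITICAL';
}
print "$usage => $status\n";   # prints "Free => CRITICAL"
```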
Indeed, I get a different output from "cli64 rsf info":
```
[root@host ~]# cli64 rsf info
 #  Name             Disks Total     Free   State
===============================================================================
 1  Raid Set # 000       8 4800.0GB  0.0GB  Normal
===============================================================================
GuiErrMsg<0x00>: Success.
```
The MinDiskCap column doesn't exist in this output, so I need to patch check_raid.pl:
```
*** check_raid.pl.ori	2017-05-26 02:02:36.000000000 +0200
--- check_raid.pl	2017-05-29 14:16:27.797695357 +0200
*************** $fatpacked{"App/Monitoring/Plugin/CheckR
*** 1374,1380 ****
  		\s+\d+		# Disks
  		\s+\S+		# TotalCap
  		\s+\S+		# FreeCap
- 		\s+\S+		# MinDiskCap/DiskChannels
  		\s+(\S+)\s*	# State
  	$}x);
--- 1374,1379 ----
```
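Rather than deleting that line, another option might be to make the column optional, so one pattern handles firmwares both with and without the MinDiskCap/DiskChannels field. An untested sketch of just the tail of the pattern (the sample lines, including the 1200.0GB value, are made up for illustration):

```perl
use strict;
use warnings;

# Untested sketch: make the MinDiskCap/DiskChannels column optional so
# the same pattern matches both output formats. This is only the tail
# of the plugin's full "rsf info" pattern.
my $tail_re = qr{
    \s+\d+          # Disks
    \s+\S+          # TotalCap
    \s+\S+          # FreeCap
    (?:\s+\S+)?     # MinDiskCap/DiskChannels (absent on some firmwares)
    \s+(\S+)\s*     # State
$}x;

# Illustrative sample lines, without and with the extra column.
for my $line (
    ' 1 Raid Set # 000        8 4800.0GB    0.0GB Normal',
    ' 1 Raid Set # 000        8 4800.0GB    0.0GB 1200.0GB Normal',
) {
    my ($state) = $line =~ $tail_re;
    print "state: $state\n";    # prints "state: Normal" for both
}
```

The greedy optional group first tries to consume the extra column and backs off when it is not there, so State is captured either way.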
Ah, I see. Yes, I have checked that if I convert the free drives to hot spares, it does not complain. On the other hand, shouldn't "Free" be an acceptable state for a drive to be in? If I patch the plugin like this, it won't complain about free drives anymore:
```
  } elsif ($usage =~ /HotSpare/) {
  	# hotspare is OK
  	push(@{$drivestatus{$array_name}}, $id);
+ } elsif ($usage =~ /Free/) {
+ 	# Free is OK
+ 	push(@{$drivestatus{$array_name}}, $id);
  } elsif ($usage =~ /Pass Through/) {
  	# Pass Through is OK
  	push(@{$drivestatus{$array_name}}, $id);
```
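If free disks are deemed healthy upstream, the three OK branches could also be folded into a single alternation. A sketch only, keeping the plugin's unanchored matching style (the variables mirror the surrounding code; the values are just for demonstration):

```perl
use strict;
use warnings;

# Sketch only: the three healthy usages folded into one branch.
# %drivestatus, $array_name, $usage and $id mirror the plugin's
# surrounding code; the values here are just for demonstration.
my %drivestatus;
my $array_name = 'Raid Set # 000';
my ($id, $usage) = (5, 'Free');

if ($usage =~ /HotSpare|Free|Pass Through/) {
    # hot spare, free and pass-through disks are all healthy
    push(@{$drivestatus{$array_name}}, $id);
}
print "OK drives: @{$drivestatus{$array_name}}\n";   # prints "OK drives: 5"
```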
Would this be acceptable?

Cheers,
Christian
Hi,

it seems that on my system the plugin produces a false CRITICAL with an Areca controller:
Output of `check_raid -d`:

Output of each command from `check_raid -d`:

```
/usr/local/sbin/areca-cli rsf info
/usr/local/sbin/areca-cli disk info
```

Additional environment details:

Using `master`. Thanks for looking into this,

Christian