in-famous-raccoon / proxmox-snmp

SNMP Scripts to monitor Proxmox with PRTG
GNU General Public License v3.0

add ceph support #2

Closed JBlond closed 2 years ago

JBlond commented 2 years ago

It would be great to have ceph support

root@pve-01:~# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
 0    hdd  6.00380   1.00000  6.0 TiB  930 GiB  370 GiB   50 MiB  2.3 GiB  5.1 TiB  15.13  1.00  127      up
 1    hdd  6.00380   1.00000  6.0 TiB  920 GiB  360 GiB   47 MiB  2.2 GiB  5.1 TiB  14.97  0.98  122      up
 2    hdd  6.00380   1.00000  6.0 TiB  974 GiB  414 GiB   53 MiB  2.4 GiB  5.1 TiB  15.84  1.04  141      up
 3    hdd  6.00380   1.00000  6.0 TiB  908 GiB  348 GiB   73 MiB  2.4 GiB  5.1 TiB  14.77  0.97  120      up
 4    hdd  6.00380   1.00000  6.0 TiB  953 GiB  393 GiB   55 MiB  2.5 GiB  5.1 TiB  15.49  1.02  135      up
 5    hdd  6.00380   1.00000  6.0 TiB  943 GiB  383 GiB   53 MiB  2.3 GiB  5.1 TiB  15.34  1.01  131      up
 6    hdd  6.00380   1.00000  6.0 TiB  957 GiB  397 GiB   52 MiB  2.5 GiB  5.1 TiB  15.57  1.02  136      up
 7    hdd  6.00380   1.00000  6.0 TiB  891 GiB  331 GiB   46 MiB  2.0 GiB  5.1 TiB  14.49  0.95  113      up
 8    hdd  6.00380   1.00000  6.0 TiB  903 GiB  343 GiB   72 MiB  2.0 GiB  5.1 TiB  14.68  0.97  119      up
 9    hdd  6.00380   1.00000  6.0 TiB  944 GiB  384 GiB   51 MiB  2.5 GiB  5.1 TiB  15.35  1.01  131      up
10    hdd  6.00380   1.00000  6.0 TiB  961 GiB  401 GiB   54 MiB  2.4 GiB  5.1 TiB  15.63  1.03  137      up
11    hdd  6.00380   1.00000  6.0 TiB  880 GiB  320 GiB   45 MiB  2.0 GiB  5.1 TiB  14.32  0.94  109      up
12    hdd  6.00380   1.00000  6.0 TiB  985 GiB  425 GiB   59 MiB  2.5 GiB  5.0 TiB  16.01  1.05  145      up
13    hdd  6.00380   1.00000  6.0 TiB  925 GiB  365 GiB   51 MiB  2.2 GiB  5.1 TiB  15.05  0.99  126      up
14    hdd  6.00380   1.00000  6.0 TiB  923 GiB  363 GiB   46 MiB  2.1 GiB  5.1 TiB  15.02  0.99  124      up
15    hdd  6.00380   1.00000  6.0 TiB  955 GiB  395 GiB   53 MiB  2.3 GiB  5.1 TiB  15.53  1.02  134      up
16    hdd  6.00380   1.00000  6.0 TiB  897 GiB  337 GiB   47 MiB  2.1 GiB  5.1 TiB  14.59  0.96  116      up
17    hdd  6.00380   1.00000  6.0 TiB  936 GiB  376 GiB   48 MiB  2.3 GiB  5.1 TiB  15.22  1.00  128      up
18    hdd  6.00380   1.00000  6.0 TiB  981 GiB  421 GiB   53 MiB  2.4 GiB  5.0 TiB  15.96  1.05  143      up
19    hdd  6.00380   1.00000  6.0 TiB  901 GiB  341 GiB   47 MiB  2.1 GiB  5.1 TiB  14.65  0.96  117      up
20    hdd  6.00380   1.00000  6.0 TiB  939 GiB  379 GiB   73 MiB  2.3 GiB  5.1 TiB  15.27  1.00  131      up
21    hdd  6.00380   1.00000  6.0 TiB  939 GiB  379 GiB   53 MiB  2.3 GiB  5.1 TiB  15.27  1.01  130      up
22    hdd  6.00380   1.00000  6.0 TiB  928 GiB  368 GiB   49 MiB  2.3 GiB  5.1 TiB  15.09  0.99  126      up
23    hdd  6.00380   1.00000  6.0 TiB  954 GiB  394 GiB   52 MiB  2.3 GiB  5.1 TiB  15.51  1.02  134      up
                       TOTAL  144 TiB   22 TiB  8.8 TiB  1.3 GiB   55 GiB  122 TiB  15.20
MIN/MAX VAR: 0.94/1.05  STDDEV: 0.45
root@pve-01:~#
in-famous-raccoon commented 2 years ago

Sorry, I don't use ceph either, but you could try: ceph osd df | grep "TOTAL" | awk '{print $14}' This should output the %USE value from the TOTAL row.

JBlond commented 2 years ago

Yep, that works:

root@pve-01:~# ceph osd df | grep "TOTAL" | awk '{print $14}'
15.62
root@pve-01:~#
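As a sanity check without a live cluster, the same extraction can be replayed against the TOTAL row captured above (a sketch; the sample values are copied from this thread):

```shell
#!/bin/sh
# TOTAL row copied from the `ceph osd df` output above.
sample='                       TOTAL  144 TiB   22 TiB  8.8 TiB  1.3 GiB   55 GiB  122 TiB  15.20'

# awk splits on runs of whitespace, so field 14 of the TOTAL row
# is the cluster-wide %USE value.
pct=$(printf '%s\n' "$sample" | grep "TOTAL" | awk '{print $14}')
echo "$pct"   # prints 15.20
```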
JBlond commented 2 years ago

What may also be worth parsing is

root@pve-01:~# ceph pg stat
1025 pgs: 1025 active+clean; 3.1 TiB data, 23 TiB used, 121 TiB / 144 TiB avail; 462 KiB/s rd, 9.5 MiB/s wr, 588 op/s
root@pve-01:~#
ceph pg stat | awk '{print $8}'
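In that `ceph pg stat` line, $8 is only the numeric part of "23 TiB used"; the unit in $9 is dropped. A slightly more defensive variant prints both, so the number isn't misread if the cluster crosses a unit boundary (a sketch replaying the sample line above, not part of the repo's scripts):

```shell
#!/bin/sh
# Sample line copied from the `ceph pg stat` output above.
sample='1025 pgs: 1025 active+clean; 3.1 TiB data, 23 TiB used, 121 TiB / 144 TiB avail; 462 KiB/s rd, 9.5 MiB/s wr, 588 op/s'

# $8 is the "used" amount and $9 its unit (TiB here); printing both
# keeps the value unambiguous.
printf '%s\n' "$sample" | awk '{print $8, $9}'   # prints: 23 TiB
```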
in-famous-raccoon commented 2 years ago

Thanks for testing, I've added the script.
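For completeness, one-liners like these are typically exposed to PRTG through net-snmp's `extend` mechanism. A hedged sketch: the script path and extend name below are made-up placeholders, not necessarily what this repo ships.

```shell
#!/bin/sh
# Hypothetical wrapper, e.g. /usr/local/bin/ceph-usage.sh (name is made up).
# Prints the cluster-wide %USE so snmpd can expose it; requires the ceph CLI.
ceph osd df | grep "TOTAL" | awk '{print $14}'
```

Registered in /etc/snmp/snmpd.conf with a line such as `extend ceph-usage /usr/local/bin/ceph-usage.sh`, after which the value is readable via the NET-SNMP-EXTEND-MIB output objects.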