scross01 / prometheus-klipper-exporter

Prometheus Exporter for Klipper
MIT License

Index out of range #35

Closed: morkster closed this issue 1 week ago

morkster commented 3 months ago

When I run either the executable or the Docker container, I get the error messages below on my node:

INFO[0002] Collecting process_stats for 192.168.7.42:7125
panic: runtime error: index out of range [-1]

goroutine 22 [running]:
github.com/scross01/prometheus-klipper-exporter/collector.Collector.Collect({{0x91c0a0, 0xc0000dea00}, {0xc0000ce1f8, 0x11}, {0xc000108f00, 0x7, 0x8}, {0x0, 0x0}}, 0xc0000bc7e0)
        /root/klipper-exporter/prometheus-klipper-exporter/collector/collector.go:61 +0x612e
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
        /root/go/pkg/mod/github.com/prometheus/client_golang@v1.19.1/prometheus/registry.go:455 +0x105
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather in goroutine 9
        /root/go/pkg/mod/github.com/prometheus/client_golang@v1.19.1/prometheus/registry.go:466 +0x568
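
For context, the [-1] in a Go "index out of range" panic is what stats[len(stats)-1] produces when the slice is empty, which suggests the collector received a proc_stats payload with no moonraker_stats samples. A minimal sketch of the failing pattern (generic Go, not the exporter's actual code at collector.go:61):

```go
package main

import "fmt"

func main() {
	// An empty slice, standing in for a proc_stats response that carried
	// no moonraker_stats samples (hypothetical; not the exporter's code).
	var stats []float64
	last := stats[len(stats)-1] // len(stats)-1 == -1: panics with "index out of range [-1]"
	fmt.Println(last)
}
```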

Any idea what I should do to resolve this?

scross01 commented 3 months ago

Similar to the other issue, can you run the following to get the raw API response:

curl http://klipperHost/machine/proc_stats

From the error it seems like it's getting an empty result.
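
If an empty payload is the cause, a length check before taking the last sample would turn the crash into a reportable error. A minimal sketch with hypothetical names (MoonrakerStat and latestStat are illustrative, not the exporter's identifiers):

```go
package main

import (
	"errors"
	"fmt"
)

// MoonrakerStat is a hypothetical stand-in for one proc_stats sample.
type MoonrakerStat struct {
	Time     float64 `json:"time"`
	CPUUsage float64 `json:"cpu_usage"`
}

// latestStat guards the last-element access instead of panicking.
func latestStat(stats []MoonrakerStat) (MoonrakerStat, error) {
	if len(stats) == 0 {
		return MoonrakerStat{}, errors.New("no moonraker_stats samples in response")
	}
	return stats[len(stats)-1], nil // safe: length checked above
}

func main() {
	if _, err := latestStat(nil); err != nil {
		fmt.Println("skipping process_stats:", err)
	}
}
```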

morkster commented 3 months ago

$ curl http://192.168.7.42/machine/proc_stats
{"result": {"moonraker_stats": [
  {"time": 1723882308.401406, "cpu_usage": 25.42, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882309.4041357, "cpu_usage": 2.08, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882310.412219, "cpu_usage": 3.67, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882311.4068599, "cpu_usage": 8.95, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882312.4073603, "cpu_usage": 2.17, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882313.40858, "cpu_usage": 21.52, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882314.4101276, "cpu_usage": 2.79, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882315.4188983, "cpu_usage": 3.39, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882316.4208665, "cpu_usage": 5.41, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882317.4218333, "cpu_usage": 4.73, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882318.4161983, "cpu_usage": 27.23, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882319.4172838, "cpu_usage": 2.36, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882320.4269888, "cpu_usage": 3.81, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882321.4221637, "cpu_usage": 6.31, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882322.4230063, "cpu_usage": 2.18, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882323.6087532, "cpu_usage": 27.13, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882324.5933626, "cpu_usage": 4.72, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882325.5945354, "cpu_usage": 2.12, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882326.6028917, "cpu_usage": 3.82, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882327.6439235, "cpu_usage": 22.51, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882328.6426048, "cpu_usage": 5.13, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882329.6506248, "cpu_usage": 8.29, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882330.6502173, "cpu_usage": 4.67, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882331.651526, "cpu_usage": 6.27, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882332.661491, "cpu_usage": 22.19, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882333.659351, "cpu_usage": 5.33, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882334.669937, "cpu_usage": 8.95, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882335.6696138, "cpu_usage": 4.91, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882336.6722841, "cpu_usage": 4.67, "memory": 54172, "mem_units": "kB"},
  {"time": 1723882337.6650178, "cpu_usage": 25.92, "memory": 54172, "mem_units": "kB"}],
 "throttled_state": null,
 "cpu_temp": 51.875,
 "network": {
  "wlan0": {"rx_bytes": 371098311, "tx_bytes": 2153727460, "rx_packets": 2953777, "tx_packets": 2515460, "rx_errs": 0, "tx_errs": 0, "rx_drop": 21337, "tx_drop": 0, "bandwidth": 12520.93},
  "lo": {"rx_bytes": 2275986218, "tx_bytes": 2275986218, "rx_packets": 3466679, "tx_packets": 3466679, "rx_errs": 0, "tx_errs": 0, "rx_drop": 0, "tx_drop": 0, "bandwidth": 12845.77},
  "eth0": {"rx_bytes": 0, "tx_bytes": 0, "rx_packets": 0, "tx_packets": 0, "rx_errs": 0, "tx_errs": 0, "rx_drop": 0, "tx_drop": 0, "bandwidth": 0.0},
  "can0": {"rx_bytes": 9482663, "tx_bytes": 488418, "rx_packets": 1260802, "tx_packets": 81428, "rx_errs": 0, "tx_errs": 0, "rx_drop": 318, "tx_drop": 0, "bandwidth": 111.95}},
 "system_cpu_usage": {"cpu": 10.16, "cpu0": 6.52, "cpu1": 3.09, "cpu2": 7.14, "cpu3": 25.51},
 "system_uptime": 79982.709825442,
 "system_memory": {"total": 2029724, "available": 1661724, "used": 368000},
 "websocket_connections":


scross01 commented 3 months ago

Thanks, this is odd: the direct API call returns the expected result, but when you run the collector it appears to get an empty result. I'll need to add some more checks in the code to catch and report the error in more detail.
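
Such checks could look like the sketch below: validate the HTTP status, the JSON decode, and the sample count before indexing, and return a detailed error instead of panicking. This assumes logrus for logging (the INFO[0002] prefix in the panic output suggests it) and uses illustrative names throughout:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"

	log "github.com/sirupsen/logrus"
)

// procStatsResult is an illustrative subset of the proc_stats payload.
type procStatsResult struct {
	Result struct {
		MoonrakerStats []struct {
			CPUUsage float64 `json:"cpu_usage"`
		} `json:"moonraker_stats"`
	} `json:"result"`
}

// fetchProcStats reports HTTP, decode, and empty-payload errors in detail
// instead of letting an index panic surface later in the collector.
func fetchProcStats(host string) (*procStatsResult, error) {
	resp, err := http.Get("http://" + host + "/machine/proc_stats")
	if err != nil {
		return nil, fmt.Errorf("proc_stats request to %s failed: %w", host, err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
		return nil, fmt.Errorf("proc_stats returned %s: %s", resp.Status, body)
	}

	var ps procStatsResult
	if err := json.NewDecoder(resp.Body).Decode(&ps); err != nil {
		return nil, fmt.Errorf("proc_stats decode failed: %w", err)
	}
	if len(ps.Result.MoonrakerStats) == 0 {
		return nil, fmt.Errorf("proc_stats contained no moonraker_stats samples")
	}
	return &ps, nil
}

func main() {
	if _, err := fetchProcStats("192.168.7.42:7125"); err != nil {
		log.WithError(err).Error("collecting process_stats failed")
	}
}
```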

scross01 commented 1 week ago

Added improved logging in release v0.12.0.