Closed: golden-oldie closed this issue 4 years ago.
That is interesting. Do I see correctly that there is only one Proxmox node (not a cluster)?
I've been meaning to test how the script behaves against a single node instead of a cluster, but haven't gotten around to it yet. I'll aim to do so in the next couple of days.
What versions of Zabbix and Proxmox are you running?
It's a single node that I'm starting out with. I'm using Proxmox 6.2.6 and the Zabbix 5.0.0 appliance. Thanks for looking into this!
The error message is misleading, and probably inconsistent between sending discovery data and monitoring data. However, I have tested with Python 3.6.8, Zabbix 5.0.1, and a single Proxmox 6.1-7 node, and the script should work for you. The only exception is that the cluster won't report quorate, as there is no cluster.
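For context, here is a minimal sketch of how a quorate flag can be read from the Proxmox API, and why a standalone node never reports one. The use of the proxmoxer library is an assumption (whether the script itself uses it is not stated here), and the connection details are the ones from the session output in this thread:

```python
# Minimal sketch, not necessarily the script's actual code. Assumes the
# proxmoxer library; connection details are from the session output above.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI('192.168.0.173', user='zabbix@pve',
                     password='XXX', verify_ssl=False)

# GET /cluster/status returns one entry per node plus, on a real cluster,
# an entry of type "cluster" carrying the quorate flag. A standalone node
# has no such entry, so the value stays at its default of 0.
quorate = 0
for entry in proxmox.cluster.status.get():
    if entry.get('type') == 'cluster':
        quorate = entry.get('quorate', 0)
print(quorate)
```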
The error you are seeing when sending discovery data occurs because zabbix_sender exits with a non-zero return value. That happens when the target (proxmox.local in your case) does not exist as a monitored host in Zabbix.
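For illustration, here is a minimal sketch of the kind of subprocess wrapper that produces this message; the function name and error handling are assumptions, not necessarily the script's actual code:

```python
# Illustrative sketch only; the function name and error handling are
# assumptions about how the script wraps zabbix_sender.
import subprocess

def send_to_zabbix(target, key, value):
    cmd = ['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf',
           '-s' + target, '-k' + key, '-o' + value]
    try:
        # check=True raises CalledProcessError on any non-zero exit code,
        # e.g. when the -s target is not a monitored host in Zabbix.
        subprocess.run(cmd, check=True, stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
    except (OSError, subprocess.CalledProcessError) as err:
        # Both "binary not found" and "binary ran but failed" end up here,
        # which is why the message is misleading.
        print('Unable to open zabbix_sender:', err)
```

Note also that arguments such as -sproxmox.local are passed to the binary verbatim; zabbix_sender accepts option values attached directly to the flag, so no spaces are actually being stripped from the command, even though the debug output makes it look that way.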
The first thing to test is whether zabbix_sender is working correctly, by running the following on the host where the script is going to run. It should return processed: 1 and failed: 0:
```
[root@zabbix ~]# /usr/bin/zabbix_sender -v -c /etc/zabbix/zabbix_agentd.conf -s proxmox.tokyo.prod -k promox.cluster.quorate -o 1
Response from "127.0.0.1:10051": "processed: 1; failed: 0; total: 1; seconds spent: 0.000036"
sent: 1; skipped: 0; total: 1
```
The value of the -s parameter is the host you configured in Zabbix to receive the data and attached the template to; it is the same value you would use for the -t parameter of the script. (Please note that the key passed to -k really is promox.cluster.quorate, an unfortunate typo, but that is a different story.)
Problem solved! It turns out I had to use a capital letter in the Proxmox host name, as that's what I had configured on the server. Doh! I've now got all the stats from Proxmox coming into the system, no problem. I just have one question, and I can't seem to find an easy way to do this: how can I get the memory / CPU / disk usage stats for all the VMs / containers into Zabbix without installing an agent on every device? Just the stats that show up in the Zabbix web console would be perfect.
Great, that is good to hear.
I'm not sure what is currently out there that will report guest statistics to Zabbix; I'd guess it would work fairly similarly to what this script does. However, you'd only get a limited picture from the hypervisor's point of view. Stabbing in the dark, I would say that installing the Zabbix agent on the guests is the most common scenario.
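For the hypervisor-side view, here is a rough sketch of pulling per-guest CPU, memory and disk figures from the Proxmox API. It assumes the proxmoxer library (whether the script itself uses it is not stated here) and reuses the connection details from the session output in this thread:

```python
# Rough sketch, assuming the proxmoxer library; connection details are the
# ones from this thread. Collects the per-guest figures the hypervisor
# itself exposes; no agent inside the guests is required.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI('192.168.0.173', user='zabbix@pve',
                     password='XXX', verify_ssl=False)

for node in proxmox.nodes.get():
    name = node['node']
    # QEMU VMs: 'cpu' is a 0..1 fraction of the allocated cores,
    # 'mem'/'maxmem' and 'disk'/'maxdisk' are in bytes.
    for vm in proxmox.nodes(name).qemu.get():
        print(name, vm['vmid'], vm['name'], vm['status'],
              vm.get('cpu', 0), vm.get('mem', 0), vm.get('maxmem', 0),
              vm.get('disk', 0), vm.get('maxdisk', 0))
    # LXC containers expose the same fields.
    for ct in proxmox.nodes(name).lxc.get():
        print(name, ct['vmid'], ct['name'], ct['status'],
              ct.get('cpu', 0), ct.get('mem', 0), ct.get('maxmem', 0),
              ct.get('disk', 0), ct.get('maxdisk', 0))
```

Each value could then be pushed with zabbix_sender behind a low-level discovery rule, much like this script already does for its node-level items; the trade-off, as noted above, is the limited hypervisor-side picture.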
Anyway, I'll close this issue as solved for now.
Hi takala-jp,
Good work on the code so far!
I seem to be having a very strange issue with the script: it appears to strip spaces and insert other formatting characters into the command it's trying to run. I've started looking at the Python code, but I'm no expert.
Is this something specific to my versions? I'm on Python 3.6.8. Output is below:
```
[root@appliance ~]# /usr/bin/python3 /usr/lib/zabbix/bin/proxmox_cluster.py -a 192.168.0.173 -u zabbix@pve -p XXX -t proxmox.local -d -v
{"data": [{"{#NODE}": "proxmox"}]}
Unable to open zabbix_sender: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-sproxmox.local', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "proxmox"}]}']' returned non-zero exit status 2.
[root@appliance ~]#
```
```
[root@appliance ~]# /usr/bin/python3 /usr/lib/zabbix/bin/proxmox_cluster.py -a 192.168.0.173 -u zabbix@pve -p XXX -t proxmox.local -e -v
{
  "status": {
    "quorate": 0,
    "cpu_total": 6,
    "ram_total": 16807809024,
    "ram_used": 10454061056,
    "ram_free": 6353747968,
    "ram_usage": 62.19764301862644,
    "ksm_sharing": 2002599936,
    "vcpu_allocated": 8,
    "vram_allocated": 24574427136,
    "vhdd_allocated": 34363932672,
    "vram_used": 8687667747,
    "vram_usage": 35.35247311736154,
    "vms_running": 5,
    "vms_stopped": 0,
    "vms_total": 5,
    "lxc_running": 0,
    "lxc_stopped": 0,
    "lxc_total": 0,
    "vm_templates": 0,
    "nodes_total": 0,
    "nodes_online": 1,
    "cpu_usage": 2.5157753984468902
  },
  "nodes": {
    "proxmox": {
      "online": 1,
      "vms_total": 5,
      "vms_running": 5,
      "lxc_total": 0,
      "lxc_running": 0,
      "vcpu_allocated": 8,
      "vram_allocated": 24574427136,
      "vhdd_allocated": 34363932672,
      "vram_used": 8687667747,
      "ksm_sharing": 2002599936,
      "cpu_total": 6,
      "cpu_usage": 2.5157753984468902,
      "ram_total": 16807809024,
      "ram_used": 10454061056,
      "ram_free": 6353747968,
      "ram_usage": 62.19764301862644
    }
  }
}
proxmox.local promox.cluster.quorate 1593449419 0
proxmox.local promox.cluster.cpu_total 1593449419 6
proxmox.local promox.cluster.ram_total 1593449419 16807809024
proxmox.local promox.cluster.ram_used 1593449419 10454061056
proxmox.local promox.cluster.ram_free 1593449419 6353747968
proxmox.local promox.cluster.ram_usage 1593449419 62.19764301862644
proxmox.local promox.cluster.ksm_sharing 1593449419 2002599936
proxmox.local promox.cluster.vcpu_allocated 1593449419 8
proxmox.local promox.cluster.vram_allocated 1593449419 24574427136
proxmox.local promox.cluster.vhdd_allocated 1593449419 34363932672
proxmox.local promox.cluster.vram_used 1593449419 8687667747
proxmox.local promox.cluster.vram_usage 1593449419 35.35247311736154
proxmox.local promox.cluster.vms_running 1593449419 5
proxmox.local promox.cluster.vms_stopped 1593449419 0
proxmox.local promox.cluster.vms_total 1593449419 5
proxmox.local promox.cluster.lxc_running 1593449419 0
proxmox.local promox.cluster.lxc_stopped 1593449419 0
proxmox.local promox.cluster.lxc_total 1593449419 0
proxmox.local promox.cluster.vm_templates 1593449419 0
proxmox.local promox.cluster.nodes_total 1593449419 0
proxmox.local promox.cluster.nodes_online 1593449419 1
proxmox.local promox.cluster.cpu_usage 1593449419 2.5157753984468902
proxmox.local proxmox.node.online.[proxmox] 1593449419 1
proxmox.local proxmox.node.vms_total.[proxmox] 1593449419 5
proxmox.local proxmox.node.vms_running.[proxmox] 1593449419 5
proxmox.local proxmox.node.lxc_total.[proxmox] 1593449419 0
proxmox.local proxmox.node.lxc_running.[proxmox] 1593449419 0
proxmox.local proxmox.node.vcpu_allocated.[proxmox] 1593449419 8
proxmox.local proxmox.node.vram_allocated.[proxmox] 1593449419 24574427136
proxmox.local proxmox.node.vhdd_allocated.[proxmox] 1593449419 34363932672
proxmox.local proxmox.node.vram_used.[proxmox] 1593449419 8687667747
proxmox.local proxmox.node.ksm_sharing.[proxmox] 1593449419 2002599936
proxmox.local proxmox.node.cpu_total.[proxmox] 1593449419 6
proxmox.local proxmox.node.cpu_usage.[proxmox] 1593449419 2.5157753984468902
proxmox.local proxmox.node.ram_total.[proxmox] 1593449419 16807809024
proxmox.local proxmox.node.ram_used.[proxmox] 1593449419 10454061056
proxmox.local proxmox.node.ram_free.[proxmox] 1593449419 6353747968
proxmox.local proxmox.node.ram_usage.[proxmox] 1593449419 62.19764301862644
(b'Response from "127.0.0.1:10051": "processed: 0; failed: 38; total: 38; seconds spent: 0.000165"\nsent: 38; skipped: 0; total: 38\n', None)
[root@appliance ~]#
```