Closed: maxilee closed this issue 1 year ago
Hi, can you specify your Proxmox version, or post the output of the command pveversion -v?
Best regards
Hi, the latest one:
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-4 (running version: 7.4-4/4a8501a8)
pve-kernel-5.15: 7.4-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-6
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-3
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
openvswitch-switch: 2.15.0+ds1-2+deb11u4
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-4
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
Hi,
can you execute this command and attach the output?
pvesh get /nodes/<node name>/disks/zfs --output-format json-pretty
Best regards
Remember, the user needs the 'read all' permission.
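For reference, that pvesh path maps straight onto the HTTP API endpoint GET /api2/json/nodes/<node>/disks/zfs, which is the same data the application reads through the configured user. A minimal Python sketch for checking that a read-only user or API token can actually see the pools (hostname, token id, and secret below are hypothetical placeholders, not values from this issue):

import requests

PVE_HOST = "https://pve-1.example.local:8006"  # hypothetical host, adjust to your cluster
TOKEN_ID = "monitor@pve!readonly"              # hypothetical API token with read access
TOKEN_SECRET = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder secret
NODE = "pve-1"

# Same data as `pvesh get /nodes/<node>/disks/zfs`, fetched over the REST API.
resp = requests.get(
    f"{PVE_HOST}/api2/json/nodes/{NODE}/disks/zfs",
    headers={"Authorization": f"PVEAPIToken={TOKEN_ID}={TOKEN_SECRET}"},
    verify=False,  # only acceptable for self-signed lab certificates
)
resp.raise_for_status()
for pool in resp.json()["data"]:
    print(pool["name"], pool["health"], pool["dedup"])

If the user or token lacks read permission, the call fails with HTTP 403 instead of returning the pool list.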
I've got three nodes in the cluster:
root@pve-1:~# pvesh get /nodes/pve-1/disks/zfs --output-format json-pretty
[
{
"alloc" : 17659121664,
"dedup" : 1.04,
"frag" : 47,
"free" : 50523484160,
"health" : "ONLINE",
"name" : "rpool",
"size" : 68182605824
},
{
"alloc" : 3358733287424,
"dedup" : 1.31,
"frag" : 7,
"free" : 8426656972800,
"health" : "ONLINE",
"name" : "rtank",
"size" : 11785390260224
}
]
root@pve-1:~# pvesh get /nodes/pve-2/disks/zfs --output-format json-pretty
[
{
"alloc" : 23886680064,
"dedup" : 1.02,
"frag" : 59,
"free" : 44295925760,
"health" : "ONLINE",
"name" : "rpool",
"size" : 68182605824
},
{
"alloc" : 3355560869888,
"dedup" : 1.64,
"frag" : 8,
"free" : 8429829390336,
"health" : "ONLINE",
"name" : "rtank",
"size" : 11785390260224
}
]
root@pve-1:~# pvesh get /nodes/pve-3/disks/zfs --output-format json-pretty
[
{
"alloc" : 14952067072,
"dedup" : 1.04,
"frag" : 48,
"free" : 53230538752,
"health" : "ONLINE",
"name" : "rpool",
"size" : 68182605824
},
{
"alloc" : 5375985582080,
"dedup" : 1.35,
"frag" : 9,
"free" : 6409404678144,
"health" : "ONLINE",
"name" : "rtank",
"size" : 11785390260224
}
]
Hi, I found the problem: dedup is a double, not an int. It will be fixed in the next release.
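To illustrate the type mismatch, here is a small Python sketch (the application itself is .NET, so this is only an analogy of the failing deserialization, not the actual code):

import json

# Excerpt of the pvesh output above: dedup is a ratio such as 1.04, not a whole number.
sample = json.loads('{"name": "rpool", "health": "ONLINE", "frag": 47, "dedup": 1.04}')

def read_dedup_as_int(value):
    # What a model with an integer-typed dedup field effectively requires.
    if not isinstance(value, int):
        raise TypeError(f"dedup: expected int, got {type(value).__name__} ({value!r})")
    return value

def read_dedup_as_double(value):
    # The fix: treat dedup as a double/float.
    return float(value)

try:
    read_dedup_as_int(sample["dedup"])
except TypeError as exc:
    print("int-typed field fails:", exc)  # analogous to what made the recurring job fail

print("double-typed field works:", read_dedup_as_double(sample["dedup"]))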
Best regards
OK, glad to hear that. I'll wait for the next release...
Bug type
Component
Component name
Recurring Jobs
What happened?
Hi. I've installed the latest version in Docker and configured access to my PVE cluster. I added a few Recurring Jobs, but something fails; the log from the recurring job is attached below.
RecurringJobId | "Corsinvest.ProxmoxVE.Admin.Diagnostic.Job-pve-cluster"
How can I diagnose this?
Expected behavior
The recurring job works as it should.
Relevant log output
Proxmox VE Version
7.4
Version (bug)
v1.0.0-rc.1
Version (working)
No response
What browsers are you seeing the problem on?
Firefox
On what operating system are you experiencing the issue?
Linux