Closed by baby-gnu 1 year ago.
Same issue on 6.6
OK, checked some virsh internals. Suspending the VM to S3 or S4 states depends on the default settings of the following options (OpenNebula doesn't set them):

```xml
<pm>
  <suspend-to-disk enabled='yes'/>
  <suspend-to-mem enabled='yes'/>
</pm>
```
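As a sketch of what enabling those options programmatically could look like (a hypothetical helper, not part of OpenNebula's driver), this snippet injects the `<pm>` element into a domain XML with Python's `xml.etree.ElementTree`:

```python
import xml.etree.ElementTree as ET

# Minimal domain XML for illustration only.
DOMAIN_XML = """<domain type='kvm'>
  <name>one-57</name>
</domain>"""

def enable_pm(domain_xml):
    """Return domain XML with suspend-to-mem/suspend-to-disk enabled.

    Sketch only: OpenNebula does not emit a <pm> element, so libvirt's
    defaults apply unless something like this is done.
    """
    root = ET.fromstring(domain_xml)
    pm = root.find("pm")
    if pm is None:
        pm = ET.SubElement(root, "pm")
    for tag in ("suspend-to-mem", "suspend-to-disk"):
        el = pm.find(tag)
        if el is None:
            el = ET.SubElement(pm, tag)
        el.set("enabled", "yes")
    return ET.tostring(root, encoding="unicode")

print(enable_pm(DOMAIN_XML))
```

The result would still have to be fed back to libvirt (e.g. via `virsh define`) to take effect.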
If they are enabled, a `systemctl suspend` or a `systemctl hibernate` inside the guest, or a `virsh dompmsuspend one-$VMID mem` or a `virsh dompmsuspend one-$VMID disk` on the host, can be executed. To wake up these VMs you need to execute `virsh dompmwakeup one-$VMID`.
In this case (power management), we get:

```
$ virsh list
 Id   Name     State
----------------------------
 8    one-57   pmsuspended
```
OpenNebula normally suspends the VMs with a `virsh suspend one-$VMID` (state `paused`) and resumes them with `virsh resume one-$VMID`. That doesn't rely on the power management of the underlying domain (it could work with other hypervisors). In this case we have:

```
$ virsh list
 Id   Name     State
-----------------------
 8    one-57   paused
```
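The two suspend paths leave the domain in different libvirt states, so a different command is needed to bring it back. A minimal sketch of that mapping (the numeric values match libvirt's `virDomainState` enum; the helper name is our own):

```python
# Values from libvirt's virDomainState enum (libvirt.h).
VIR_DOMAIN_PAUSED = 3        # virsh suspend      -> "paused"
VIR_DOMAIN_PMSUSPENDED = 7   # virsh dompmsuspend -> "pmsuspended"

def resume_command(state, domain):
    """Return the virsh argv that brings `domain` back from `state`."""
    if state == VIR_DOMAIN_PMSUSPENDED:
        return ["virsh", "dompmwakeup", domain]
    if state == VIR_DOMAIN_PAUSED:
        return ["virsh", "resume", domain]
    raise ValueError(f"no resume action for state {state}")

print(resume_command(VIR_DOMAIN_PMSUSPENDED, "one-57"))
```

The point of the issue is that OpenNebula's restore path only covers the `paused` branch.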
Suspending the VM on the host:

```
[root@alma8-kvm-qcow2-6-7-05dhw-1 qemu]# virsh dompmsuspend --domain one-6 --target mem
Domain 'one-6' successfully suspended

[root@alma8-kvm-qcow2-6-7-05dhw-1 qemu]# virsh list
 Id   Name    State
---------------------------
 10   one-6   pmsuspended
```
Checking status on the frontend and resuming, with state updates:

```
[oneadmin@alma8-kvm-qcow2-6-7-05dhw-0 root]$ onevm list --no-expand
  ID USER     GROUP    NAME STAT CPU  MEM HOST       TIME
   6 oneadmin oneadmin test susp   1 768M alma8-kvm- 0d 00h11

[oneadmin@alma8-kvm-qcow2-6-7-05dhw-0 root]$ onevm show 6 | grep STATE
STATE     : SUSPENDED
LCM_STATE : LCM_INIT

[oneadmin@alma8-kvm-qcow2-6-7-05dhw-0 root]$ onevm resume 6
[oneadmin@alma8-kvm-qcow2-6-7-05dhw-0 root]$ onevm show 6 | grep STATE
STATE     : ACTIVE
LCM_STATE : RUNNING
```
Description
When a VM has the power management tools and asks to suspend, it appears as `pmsuspended` in the `virsh list` output. Unfortunately, it can't be resumed by OpenNebula:
```
Mon Apr 4 10:52:23 2022 [Z0][LCM][I]: Restoring VM
Mon Apr 4 10:52:23 2022 [Z0][VM][I]: New state is ACTIVE
Mon Apr 4 10:52:23 2022 [Z0][VM][I]: New LCM state is BOOT_SUSPENDED
Mon Apr 4 10:52:23 2022 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]: ExitCode: 0
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]: Command execution fail (exit code: 1): cat << EOT | /var/tmp/one/vmm/kvm/restore '/var/lib/one//datastores/0/802559/checkpoint' 'nebula80' '262a55de-3e36-4507-b928-7aeb7d2611c3' 802559 nebula80
Mon Apr 4 10:52:24 2022 [Z0][VMM][E]: restore: Command "set -e -o pipefail
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]:
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]: # extract the xml from the checkpoint
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]:
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]: virsh --connect qemu+tls://localhost/system save-image-dumpxml /var/lib/one//datastores/0/802559/checkpoint > /var/lib/one//datastores/0/802559/checkpoint.xml
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]:
Mon Apr 4 10:52:24 2022 [Z0][VMM][I]: # Replace all occurrences of the DS_LOCATION/
```

To Reproduce
```shell
virsh dompmsuspend --target mem one-XXXX
onevm resume XXXX
```
Expected behavior

The virtual machine should be resumed with `virsh dompmwakeup one-XXXX`.
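One way the restore path could handle both cases is to check `virsh domstate` first and pick the matching command. A hedged sketch, not the actual OpenNebula driver code (assumes `virsh` is on `PATH`; the injectable `run` parameter is our own device so the decision logic can be exercised without libvirt):

```python
import subprocess

def resume_vm(domain, run=subprocess.run):
    """Resume `domain` with the command matching its current libvirt state.

    `run` defaults to subprocess.run but is injectable for testing.
    Returns the argv that was executed.
    """
    state = run(["virsh", "domstate", domain],
                capture_output=True, text=True).stdout.strip()
    if state == "pmsuspended":
        cmd = ["virsh", "dompmwakeup", domain]  # guest-initiated S3/S4
    elif state == "paused":
        cmd = ["virsh", "resume", domain]       # host-side virsh suspend
    else:
        raise RuntimeError(f"no resume action for {domain} in state {state!r}")
    run(cmd, check=True)
    return cmd
```

With this kind of guard, `onevm resume` would pick `dompmwakeup` for a `pmsuspended` domain instead of failing in the restore script.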