Closed — justinc1 closed this issue 1 year ago
The problematic part of the code is in `Disk.needs_reboot`: the condition `action == "delete" and self.type == "ide_cdrom"` (https://github.com/ScaleComputing/HyperCoreAnsibleCollection/blob/main/plugins/module_utils/disk.py#L202) shuts the VM down before disk removal only when the disk type is `ide_cdrom`.
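As I read it, the check behaves roughly like this (a minimal standalone sketch; the real `Disk.needs_reboot` is a method on the `Disk` class and takes more context):

```python
# Sketch of the condition from disk.py#L202, as a hypothetical standalone
# function (the real code is a method, with disk_type as self.type).
def needs_reboot(action: str, disk_type: str) -> bool:
    # Shutdown is required before removal ONLY for IDE CD-ROM disks;
    # deletes of VIRTIO (and other) disks skip the shutdown, and the
    # live delete then fails on HyperCore 9.2.17.
    return action == "delete" and disk_type == "ide_cdrom"

print(needs_reboot("delete", "ide_cdrom"))    # True - VM is shut down first
print(needs_reboot("delete", "virtio_disk"))  # False - delete attempted live
```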
I tested via the management UI (HyperCore v9.2.17, on a NUC). The VM was booted, but no OS was running (empty disks).
That timing difference is suspicious. Maybe HyperCore would actually remove (VIRTIO) disks from a running VM if some condition were fulfilled?
In both cases a warning pops up:
Confirm Delete Block Device
Are you sure you want to delete the block device
0f513732 (101 GB)?
If this drive is in use by the guest OS it may not be removed until the next VM reboot.
If I force a shutdown and start the VM back up after the delete failed, the disks are not removed. I'm confused about when that "delayed delete" would happen.
Update: did the same test on 9.1.14 VSNS (https://10.5.11.200/):
So a VIRTIO disk can be removed from a running VM on HyperCore 9.1.14, but not on 9.2.17. I wonder if an IDE disk can be removed from a running VM on some other version? And how is HyperCore supposed to behave if we want to attach a new disk to a running VM? That should be possible without a reboot, right?
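Given that 9.2.17 refuses live VIRTIO removal, one possible fix (a sketch of the idea, not the actual patch) would be to require a shutdown for every disk delete, regardless of disk type, while still allowing attach without a reboot:

```python
# Hypothetical broadened condition: any disk delete forces a shutdown
# first, since HyperCore 9.2.17 rejects live removal of VIRTIO disks too.
# Attaching a new disk ("create") should still work on a running VM.
def needs_reboot(action: str, disk_type: str) -> bool:
    return action == "delete"

print(needs_reboot("delete", "virtio_disk"))  # True - shut down before delete
print(needs_reboot("create", "virtio_disk"))  # False - attach stays live
```

The downside is an unnecessary reboot on HyperCore versions (like 9.1.14) that do allow live VIRTIO removal; a version-aware check would avoid that, at the cost of more complexity.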
Describe the bug
The vm module failed to remove a disk from an existing VM: https://github.com/ScaleComputing/HyperCoreAnsibleCollection/actions/runs/5383096735/jobs/9769417342#step:9:49
VM demo-vm was running, and had one extra disk:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The vm module should shut down demo-vm and remove the extra disk.
Also, the error message should include details: taskTag, formattedDescription, formattedMessage, etc.
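For the error details, something along these lines could surface the task information (the field names are taken from the text above; the exact shape of the HyperCore task payload is an assumption):

```python
def format_task_error(task: dict) -> str:
    # Build a human-readable error from a (hypothetical) HyperCore task
    # payload, surfacing taskTag, formattedDescription and formattedMessage
    # instead of a bare failure.
    parts = []
    for key in ("taskTag", "formattedDescription", "formattedMessage"):
        if task.get(key):
            parts.append(f"{key}={task[key]}")
    return "Disk delete failed: " + ", ".join(parts)

msg = format_task_error({
    "taskTag": "1234",
    "formattedDescription": "Delete block device",
    "formattedMessage": "Device in use by guest OS",
})
print(msg)
```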