justinc1 opened this issue 1 year ago
Further testing has shown that the tiering priority on the SECOND disk does in fact require two passes to actually change (confirmed by looking at the UI value on the second disk after waiting several minutes).
@ddemlow @justinc1 I have done extensive testing on this issue today. The problem appears to come from the API backend: when a disk is created, its tiering priority is automatically set to 4 (8 on the backend; see the mapping sketch after the playbook), and the tiering priority sent in the create request is ignored. This happens during disk creation only, which is why the module is not idempotent. From what I have been able to test, we are sending the correct values to the API. This is the playbook I used to test:
```yaml
- name: Create test VM and change tiering priority.
  hosts: localhost
  tasks:
    - name: Create XLAB-test-tiering-prio-VM-UI.
      scale_computing.hypercore.vm:
        cluster_instance:
          host: ***********
          username: ***********
          password: ***********
        state: present
        tags:
          - Xlab
        memory: "{{ '2048 MB' | human_to_bytes }}"
        vcpu: 2
        power_state: stop
        vm_name: XLAB-test-tiering-prio-VM-UI
        disks: []
        nics: []
      register: testout

    - name: Change tiering prio on XLAB-test-tiering-prio-VM-UI.
      scale_computing.hypercore.vm_disk:
        cluster_instance:
          host: ***********
          username: ***********
          password: ***********
        state: set
        vm_name: XLAB-test-tiering-prio-VM-UI
        items:
          - disk_slot: 0
            tiering_priority_factor: 1
            type: virtio_disk
            size: "{{ '100 GB' | human_to_bytes }}"
          - disk_slot: 0
            type: ide_cdrom
            iso_name: TinyCore-vm.iso
          - disk_slot: 1
            tiering_priority_factor: 1
            type: ide_disk
            size: "{{ '10.1 GB' | human_to_bytes }}"
      register: testout

    - name: Show output
      debug:
        var: testout

    - name: Wait N sec - tieringPriorityFactor should change
      ansible.builtin.pause:
        seconds: 30

    - name: Change tiering prio on XLAB-test-tiering-prio-VM-UI. (SECOND TIME)
      scale_computing.hypercore.vm_disk:
        cluster_instance:
          host: ***********
          username: ***********
          password: ***********
        state: set
        vm_name: XLAB-test-tiering-prio-VM-UI
        items:
          - disk_slot: 0
            tiering_priority_factor: 1
            type: virtio_disk
            size: "{{ '100 GB' | human_to_bytes }}"
          - disk_slot: 0
            type: ide_cdrom
            iso_name: TinyCore-vm.iso
          - disk_slot: 1
            tiering_priority_factor: 1
            type: ide_disk
            size: "{{ '10.1 GB' | human_to_bytes }}"
      register: testout

    - name: Show output
      debug:
        var: testout

    - name: Wait N sec - tieringPriorityFactor should change
      ansible.builtin.pause:
        seconds: 30

    - name: Change tiering prio on XLAB-test-tiering-prio-VM-UI. (THIRD TIME) - Should be idempotent by now?
      scale_computing.hypercore.vm_disk:
        cluster_instance:
          host: ***********
          username: ***********
          password: ***********
        state: set
        vm_name: XLAB-test-tiering-prio-VM-UI
        items:
          - disk_slot: 0
            tiering_priority_factor: 1
            type: virtio_disk
            size: "{{ '100 GB' | human_to_bytes }}"
          - disk_slot: 0
            type: ide_cdrom
            iso_name: TinyCore-vm.iso
          - disk_slot: 1
            tiering_priority_factor: 1
            type: ide_disk
            size: "{{ '10.1 GB' | human_to_bytes }}"
      register: testout

    - name: Show output
      debug:
        var: testout
```
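Side note on the "4 (8 backend)" wording above: the module's `tiering_priority_factor` is translated into a backend `tieringPriorityFactor` before being sent to HyperCore. Below is a minimal sketch of that translation as I recall it from the collection source; only the 4 -> 8 pair is confirmed by this thread, so treat the rest of the table as an assumption.

```python
# Assumed translation between the Ansible-level tiering_priority_factor
# (0-4, as exposed by scale_computing.hypercore.vm_disk) and the backend
# tieringPriorityFactor. Only 4 -> 8 is confirmed in the comment above;
# the other pairs are assumptions.
ANSIBLE_TO_HYPERCORE_TIERING = {0: 0, 1: 1, 2: 2, 3: 4, 4: 8}

def to_backend(tiering_priority_factor: int) -> int:
    return ANSIBLE_TO_HYPERCORE_TIERING[tiering_priority_factor]
```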
I will log a ticket to confirm / address this on the HyperCore REST API, as that appears to be a bug. Would it still be possible / make sense to have the module be aware of this API behavior so that it is idempotent? I.e., create the disk, wait for that API task to complete, and then set the tiering priority (similar to how other multi-step operations are handled under the hood by an Ansible module, e.g. deleting a powered-on VM: power off first, wait, then delete). (Internal reference on the REST API issue: issues/5143.)
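For illustration, the module-side flow suggested above could look roughly like this. This is a minimal sketch, not the collection's actual internals: the `rest_client` helper, the TaskTag field names (`taskTag`, `state`, `createdUUID`), and the endpoint paths are assumptions based on how the HyperCore REST API is typically used.

```python
import time

def create_disk_then_set_tiering(rest_client, vm_uuid, disk_payload, tiering_priority):
    # Step 1: create the disk. Per the testing above, the backend ignores
    # tieringPriorityFactor in the create request and defaults it (to 8).
    task = rest_client.post(
        "/rest/v1/VirDomainBlockDevice",
        payload=dict(disk_payload, virDomainUUID=vm_uuid),
    )

    # Step 2: poll the returned task until the create actually finishes,
    # analogous to powering off and waiting before deleting a running VM.
    while True:
        status = rest_client.get(f"/rest/v1/TaskTag/{task['taskTag']}")
        if status["state"] in ("COMPLETE", "ERROR"):
            break
        time.sleep(1)
    if status["state"] == "ERROR":
        raise RuntimeError("Disk create task failed")

    # Step 3: patch the freshly created disk with the requested tiering
    # priority, so the very first module run already converges.
    rest_client.patch(
        f"/rest/v1/VirDomainBlockDevice/{status['createdUUID']}",
        payload={"tieringPriorityFactor": tiering_priority},
    )
```

With something like this in place, the pause-and-retry passes in the test playbook above should no longer be needed, and the second run would report changed=false.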
Waiting for the Scale REST API fix.
My console output:

The playbook: examples/dd_a.yml

On the 3rd run, the task "Security Vm disk desired configuration" does report changed=false, as expected.
And the original comment from Dave: