ansible-collections / community.vmware

Ansible Collection for VMware

Disk type is always "Lazy zeroed thick disks" and it doesn't change #278

Open · Stogrammus opened this issue 4 years ago

Stogrammus commented 4 years ago
SUMMARY

When I create a virtual machine from a VMware template and change the disk size, the disk type always ends up as "Lazy zeroed thick", even though the playbook specifies "Eager zeroed thick". I tried the option convert: eagerzeroedthick, but the disk type remained "Lazy zeroed thick" after the virtual machine was created. In the playbook I tried both type: eagerzeroedthick and type: "Eager zeroed thick disks", but the disk is still lazy zeroed. The type option under disk has no effect.

ISSUE TYPE

Bug Report

COMPONENT NAME

vmware_guest

ANSIBLE VERSION
ansible 2.9.9
  config file = /home/ansible/ansible.cfg
  configured module search path = [u'/home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug  7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
OS / ENVIRONMENT

Hypervisor: VMware ESXi 6.5.0, build 14990892
pyvmomi 7.0
pysphere 0.1.7

STEPS TO REPRODUCE
- name: Create a VM from a template
  hosts: localhost
  gather_facts: no
  become: true
  become_user: root
  vars_files:
    - /home/user/ansible/vars/vmware_vars2.yml
  vars_prompt:
    - name: vcenter_pass
      prompt: Enter the password for "{{ vcenter_user }}"
  tasks:
  - name: Clone the template
    vmware_guest:
      password: "{{ vcenter_pass }}"
      validate_certs: False
      name: "{{ guest_name }}"
      hostname: "{{ vcenter_server }}"
      username: "{{ vcenter_user }}"
      template: "{{ template }}"
      esxi_hostname: "{{ esxi_hostname }}"
      datacenter: Pravda
      folder: Ansible_test
      datastore: "{{ datastore }}"
      state: present
      disk:
        - size_gb: "{{ disk_size }}"
          type: eagerzeroedthick
          datastore: "{{ datastore }}"
          state: present
          scsi_controller: 0
          unit_number: 0
      #convert: eagerzeroedthick
      networks:
      - name: "{{ network_name }}"
        ip: "{{ ip_guest }}"
        netmask: "{{ mask_guest }}"
        gateway: "{{ gateway_guest }}"
        type: static
        start_connected: True
      customization:
        hostname: "{{ guest_name }}"
      wait_for_customization: "true"
      wait_for_ip_address: "true"
EXPECTED RESULTS

The cloned VM's disk is provisioned as "Eager zeroed thick", matching type: eagerzeroedthick in the playbook.

ACTUAL RESULTS

The disk is provisioned as "Lazy zeroed thick". Verbose output of the run:
ansible-playbook 2.9.9
  config file = /home/user/ansible/ansible.cfg
  configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Aug  7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /home/user/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/user/ansible/hosts as it did not pass its verify_file() method
script declined parsing /home/user/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /home/user/ansible/hosts as it did not pass its verify_file() method
Parsed /home/user/ansible/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.pyc

PLAYBOOK: create_vm_from_template_and_customizate-test.yml *******************************************************************************************************************************************************************************
Positional arguments: playbooks/create_vm_from_template_and_customizate-test.yml
remote_user: user
become_method: sudo
inventory: (u'/home/user/ansible/hosts',)
forks: 5
tags: (u'all',)
verbosity: 4
connection: smart
timeout: 10
1 plays in playbooks/create_vm_from_template_and_customizate-test.yml
Read vars_file '/home/user/ansible/vars/vmware_vars2.yml'
Enter the password for "user@domain": 
Read vars_file '/home/user/ansible/vars/vmware_vars2.yml'
Read vars_file '/home/user/ansible/vars/vmware_vars2.yml'

PLAY [Create a VM from a template] *******************************************************************************************************************************************************************************************************
META: ran handlers
Read vars_file '/home/user/ansible/vars/vmware_vars2.yml'

TASK [Clone the template] ****************************************************************************************************************************************************************************************************************
task path: /home/user/ansible/playbooks/create_vm_from_template_and_customizate-test.yml:12
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir /home/user/.ansible/tmp/ansible-tmp-1594108835.02-18720-127296298221494 && echo ansible-tmp-1594108835.02-18720-127296298221494="` echo /home/user/.ansible/tmp/ansible-tmp-1594108835.02-18720-127296298221494 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_guest.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-186648VFjaV/tmpkqjRxu TO /home/user/.ansible/tmp/ansible-tmp-1594108835.02-18720-127296298221494/AnsiballZ_vmware_guest.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1594108835.02-18720-127296298221494/ /home/user/.ansible/tmp/ansible-tmp-1594108835.02-18720-127296298221494/AnsiballZ_vmware_guest.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n  -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-xmbyhodoofzogmafytfmsiywzcvijnwz ; /usr/bin/python2 /home/user/.ansible/tmp/ansible-tmp-1594108835.02-18720-127296298221494/AnsiballZ_vmware_guest.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1594108835.02-18720-127296298221494/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true, 
    "instance": {
        "annotation": "", 
        "current_snapshot": null, 
        "customvalues": {}, 
        "guest_consolidation_needed": false, 
        "guest_question": null, 
        "guest_tools_status": "guestToolsRunning", 
        "guest_tools_version": "10346", 
        "hw_cluster": null, 
        "hw_cores_per_socket": 1, 
        "hw_datastores": [
            "vmware.data"
        ], 
        "hw_esxi_host": "vmware.local", 
        "hw_eth0": {
            "addresstype": "assigned", 
            "ipaddresses": [
                "10.10.10.148", 
                "fe80::250:56ff:fe98:7076"
            ], 
            "label": "Network adapter 1", 
            "macaddress": "00:50:56:98:70:76", 
            "macaddress_dash": "00-50-56-98-70-76", 
            "portgroup_key": null, 
            "portgroup_portkey": null, 
            "summary": "VM Network"
        }, 
        "hw_files": [
            "[vmware.data] ********/********.vmx", 
            "[vmware.data] ********/********.nvram", 
            "[vmware.data] ********/********.vmsd", 
            "[vmware.data] ********/********.vmdk"
        ], 
        "hw_folder": "/vmware/vm/Ansible_test", 
        "hw_guest_full_name": "Red Hat Enterprise Linux 7 (64-bit)", 
        "hw_guest_ha_state": null, 
        "hw_guest_id": "rhel7_64Guest", 
        "hw_interfaces": [
            "eth0"
        ], 
        "hw_is_template": false, 
        "hw_memtotal_mb": 8192, 
        "hw_name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
        "hw_power_status": "poweredOn", 
        "hw_processor_count": 2, 
        "hw_product_uuid": "42188387-17ab-6878-294d-acdfd8aaa27a", 
        "hw_version": "vmx-13", 
        "instance_uuid": "50181092-606d-914f-4f2e-2fcb6ef75fc6", 
        "ipv4": "10.10.10.148", 
        "ipv6": null, 
        "module_hw": true, 
        "moid": "vm-2130", 
        "snapshots": [], 
        "vimref": "vim.VirtualMachine:vm-2130", 
        "vnc": {}
    }, 
    "invocation": {
        "module_args": {
            "annotation": null, 
            "cdrom": [], 
            "cluster": null, 
            "convert": null, 
            "customization": {
                "hostname": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
            }, 
            "customization_spec": null, 
            "customvalues": [], 
            "datacenter": "vmware", 
            "datastore": "vmware.data", 
            "disk": [
                {
                    "datastore": "vmware.data", 
                    "scsi_controller": 0, 
                    "size_gb": 40, 
                    "state": "present", 
                    "type": "eagerzeroedthick", 
                    "unit_number": 0
                }
            ], 
            "esxi_hostname": "vmware.local", 
            "folder": "Ansible_test", 
            "force": false, 
            "guest_id": null, 
            "hardware": {}, 
            "hostname": "vmware", 
            "is_template": false, 
            "linked_clone": false, 
            "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
            "name_match": "first", 
            "networks": [
                {
                    "gateway": "10.10.10.1", 
                    "ip": "10.10.10.148", 
                    "name": "VM Network", 
                    "netmask": "255.255.255.0", 
                    "start_connected": true, 
                    "type": "static"
                }
            ], 
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
            "port": 443, 
            "proxy_host": null, 
            "proxy_port": null, 
            "resource_pool": null, 
            "snapshot_src": null, 
            "state": "present", 
            "state_change_timeout": 0, 
            "template": "rhel-7-ansible-template1", 
            "use_instance_uuid": false, 
            "username": "user@domain", 
            "uuid": null, 
            "validate_certs": false, 
            "vapp_properties": [], 
            "wait_for_customization": true, 
            "wait_for_ip_address": true
        }
    }
}
META: ran handlers
META: ran handlers

PLAY RECAP *******************************************************************************************************************************************************************************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ansibullbot commented 4 years ago

Files identified in the description: None

If these files are inaccurate, please update the component name section of the description or use the !component bot command.


goneri commented 3 years ago

Hi @Stogrammus,

Can you please check if you face the same problem with the vmware_guest_disk module? https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_disk_module.html#parameter-disk/type
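
For reference, a minimal task using that module might look like the sketch below (untested; it reuses the placeholder variables from the playbook above):

- name: Resize the disk with vmware_guest_disk
  community.vmware.vmware_guest_disk:
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: false
    datacenter: Pravda
    name: "{{ guest_name }}"
    disk:
      - size_gb: "{{ disk_size }}"
        type: eagerzeroedthick
        datastore: "{{ datastore }}"
        state: present
        scsi_controller: 0
        unit_number: 0
  delegate_to: localhost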

aspeer06 commented 3 years ago

I can confirm that thick provisioning works with the vmware_guest_disk module, but it does not when cloning from a template.

lebonez commented 3 years ago

I have figured out why this happens. Throughout the community VMware modules, disk specs set the eagerlyScrub flag, but on an existing disk changing that flag alone does nothing. When creating a new disk this is fine, because the spec is respected. Unfortunately, when editing a cloned or pre-existing disk, just setting disk_spec.device.backing.eagerlyScrub = True does not take effect. What is actually needed is to call eager scrub on the back end, something like self.content.virtualDiskManager.EagerZeroVirtualDisk_Task(disk_device.backing.fileName, datacenter); once the disk has actually been fully eager zeroed, reconfiguring the VM will flag it as eager zeroed thick.
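
For anyone who wants to experiment with that call, here is a minimal pyVmomi sketch (assuming an already-connected ServiceInstance si, a powered-off vim.VirtualMachine vm, and its vim.Datacenter dc; all three names are placeholders):

from pyVim.task import WaitForTask
from pyVmomi import vim

def eager_zero_first_disk(si, vm, dc):
    # Locate the first virtual disk device attached to the VM.
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    # EagerZeroVirtualDisk_Task takes the backing file path, e.g.
    # "[datastore1] myvm/myvm.vmdk". The VM must be powered off, and
    # zeroing a large disk can take a long time.
    content = si.RetrieveContent()
    task = content.virtualDiskManager.EagerZeroVirtualDisk_Task(
        name=disk.backing.fileName, datacenter=dc)
    WaitForTask(task, si=si)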

The worst part about the eager zero task is that the VM must be powered off to run it, so in my opinion there is currently no reliable way to deploy a template as eager zeroed thick while also changing the disk size (not including vMotion). If you do a direct clone where the disk size doesn't change, VMware will respect the eagerlyScrub flag, because the eager scrub task above isn't required when the template itself is already eagerly zeroed. If it isn't eagerly zeroed, we are back at square one, because the clone will be lazy zeroed to the same "zeroed" state as the template.

Hope my assumption is correct and this makes sense.
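
A possible workaround consistent with this analysis (untested sketch): clone the template at its original size with the VM left powered off, then grow the disk with a vmware_guest_disk task like the one shown above, since that module reportedly provisions thick disks correctly.

- name: Clone the template at its original size, powered off
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: false
    datacenter: Pravda
    folder: Ansible_test
    template: "{{ template }}"
    name: "{{ guest_name }}"
    datastore: "{{ datastore }}"
    state: poweredoff
  delegate_to: localhost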