ansible-collections / community.vmware

Ansible Collection for VMware
GNU General Public License v3.0

vmware_guest - network not attached upon vm poweron - 7.0U2d #1109

Closed ifelsefi closed 1 year ago

ifelsefi commented 2 years ago
SUMMARY

Hi, my code has not changed for several months.

Unfortunately, VMs no longer have their networks attached upon power-on.

As you can see, my Ansible task sets:

           "networks": [
                {
                    "device_type": "vmxnet3",
                    "dvswitch_name": "dswitch",
                    "name": "network",
                    "start_connected": true,
                    "state": "new",
                    "type": "dhcp",
                    "vlan": 1024
                }

Yet the network does not attach after power-on unless I connect it manually in vCenter.

ISSUE TYPE
COMPONENT NAME

vmware_guest

ANSIBLE VERSION
  config file = /nfshome/me/repos/ansible/ansible.cfg
  configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/me/.local/lib/python3.8/site-packages/ansible
  ansible collection location = /home/me/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/me/.local/bin/ansible-playbook
  python version = 3.8.11 (default, Sep  1 2021, 12:33:46) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)]
  jinja version = 3.0.3
  libyaml = True
COLLECTION VERSION
Collection        Version
----------------- -------
ansible.windows   1.7.3
community.general 3.7.0
community.vmware  1.16.0
community.windows 1.7.0
CONFIGURATION
DEFAULT_BECOME_METHOD(/nfshome/me/repos/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/nfshome/me/repos/ansible/ansible.cfg) = root
DEFAULT_FORKS(/nfshome/me/repos/ansible/ansible.cfg) = 500
DEFAULT_GATHERING(/nfshome/me/repos/ansible/ansible.cfg) = smart
DEFAULT_GATHER_SUBSET(/nfshome/me/repos/ansible/ansible.cfg) = ['all']
DEFAULT_GATHER_TIMEOUT(/nfshome/me/repos/ansible/ansible.cfg) = 5
DEFAULT_HOST_LIST(/nfshome/me/repos/ansible/ansible.cfg) = ['/home/me/repos/ansible/hosts']
DEFAULT_LOG_PATH(/nfshome/me/repos/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_NO_TARGET_SYSLOG(/nfshome/me/repos/ansible/ansible.cfg) = True
DEFAULT_POLL_INTERVAL(/nfshome/me/repos/ansible/ansible.cfg) = 2
DEFAULT_REMOTE_PORT(/nfshome/me/repos/ansible/ansible.cfg) = 22
DEFAULT_TIMEOUT(/nfshome/me/repos/ansible/ansible.cfg) = 5
DEFAULT_TRANSPORT(/nfshome/me/repos/ansible/ansible.cfg) = smart
HOST_KEY_CHECKING(/nfshome/me/repos/ansible/ansible.cfg) = False
INJECT_FACTS_AS_VARS(/nfshome/me/repos/ansible/ansible.cfg) = True
INVENTORY_ENABLED(/nfshome/me/repos/ansible/ansible.cfg) = ['ini', 'aws_ec2']
PERSISTENT_COMMAND_TIMEOUT(/nfshome/me/repos/ansible/ansible.cfg) = 30
PERSISTENT_CONNECT_TIMEOUT(/nfshome/me/repos/ansible/ansible.cfg) = 3
OS / ENVIRONMENT

CentOS 7.9, vCenter 7.0U2d, Python 3.8

STEPS TO REPRODUCE

ansible-playbook new_vm.yml -e hosts=k8s_ansible_test -e create=[] --ask-vault-pass

create.yaml, run from the new_vm role:

- name: create the vms
  tags: deploy
  vmware_guest:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    datacenter: "{{ datacenter }}"
    validate_certs: false
    cluster: "{{ vmware_cluster }}"
    folder: "{{ folder }}"
    annotation: "Created: {{ vm_date.stdout }}\nContact: {{ vm_contact }}\nDepartment: {{ vm_department }}\nDescription: {{ vm_purpose }}\nDeployed By: {{ vm_contact }}"
    name: "{{ inventory_hostname }}"
    template: vm-template
    guest_id: centos7_64Guest
    hardware:
      num_cpus: "{{ cores }}"
      memory_mb: "{{ memory }}"
    disk:
     - size_gb: "{{ disk }}"
       datastore: "{{ datastore }}"
       type: thin
    networks:
     - name: network
       start_connected: yes
       type: dhcp
       vlan: 1024
       device_type: vmxnet3
       state: new
       dvswitch_name: dswitch
    state: poweredon
    wait_for_ip_address: yes
    wait_for_ip_address_timeout: 30
  register: new_vm
EXPECTED RESULTS

VM powers on with network attached.

ACTUAL RESULTS

I run my code, which powers on the VM, but the network remains disconnected until I manually connect it in the vCenter web portal.

<tansibletest9> EXEC /bin/sh -c 'echo ~me && sleep 0'
<vm-ansibletest> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/me/.ansible/tmp `"&& mkdir "` echo /home/me/.ansible/tmp/ansible-tmp-1636579272.811254-3474-48091896620470 `" && echo ansible-tmp-1636579272.811254-3474-48091896620470="` echo /home/me/.ansible/tmp/ansible-tmp-1636579272.811254-3474-48091896620470 `" ) && sleep 0'
redirecting (type: modules) ansible.builtin.vmware_guest to community.vmware.vmware_guest
Using module file /home/me/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_guest.py
<vm-ansibletest> PUT /nfshome/me/.ansible/tmp/ansible-local-3420i_enz5go/tmpeyj6niy7 TO /nfshome/me/.ansible/tmp/ansible-tmp-1636579272.811254-3474-48091896620470/AnsiballZ_vmware_guest.py
<vm-ansibletest> EXEC /bin/sh -c 'chmod u+x /home/me/.ansible/tmp/ansible-tmp-1636579272.811254-3474-48091896620470/ /home/me/.ansible/tmp/ansible-tmp-1636579272.811254-3474-48091896620470/AnsiballZ_vmware_guest.py && sleep 0'
<vm-ansibletest> EXEC /bin/sh -c '/opt/rh/rh-python38/root/usr/bin/python3 /home/me/.ansible/tmp/ansible-tmp-1636579272.811254-3474-48091896620470/AnsiballZ_vmware_guest.py && sleep 0'
<vm-ansibletest> EXEC /bin/sh -c 'rm -f -r /home/me/.ansible/tmp/ansible-tmp-1636579272.811254-3474-48091896620470/ > /dev/null 2>&1 && sleep 0'

            "migrate.hostLogState": "none",
            "migrate.migrationId": "9210802168331749337",
            "monitor.phys_bits_used": "43",
            "numa.autosize.cookie": "20001",
            "numa.autosize.vcpu.maxPerVirtualNode": "2",
            "nvram": "vm-ansibletest.nvram",
            "pciBridge0.pciSlotNumber": "17",
            "pciBridge0.present": "TRUE",
            "pciBridge4.functions": "8",
            "pciBridge4.pciSlotNumber": "21",
            "pciBridge4.present": "TRUE",
            "pciBridge4.virtualDev": "pcieRootPort",
            "pciBridge5.functions": "8",
            "pciBridge5.pciSlotNumber": "22",
            "pciBridge5.present": "TRUE",
            "pciBridge5.virtualDev": "pcieRootPort",
            "pciBridge6.functions": "8",
            "pciBridge6.pciSlotNumber": "23",
            "pciBridge6.present": "TRUE",
            "pciBridge6.virtualDev": "pcieRootPort",
            "pciBridge7.functions": "8",
            "pciBridge7.pciSlotNumber": "24",
            "pciBridge7.present": "TRUE",
            "pciBridge7.virtualDev": "pcieRootPort",
            "sata0.pciSlotNumber": "33",
            "sched.cpu.latencySensitivity": "normal",
            "sched.swap.derivedName": "/vmfs/volumes/vvol:8619de72d2ab39a4-bf19fdfebbfb3aaf/rfc4122.f4623aba-a06e-4438-865d-aed5b5c7db93/vm-ansibletest-f19c98ef.vswp",
            "scsi0.pciSlotNumber": "160",
            "scsi0.sasWWID": "50 05 05 6d 05 10 8f b0",
            "scsi0:0.redo": "",
            "softPowerOff": "FALSE",
            "svga.guestBackedPrimaryAware": "TRUE",
            "svga.present": "TRUE",
            "tools.guest.desktop.autolock": "FALSE",
            "tools.remindInstall": "FALSE",
            "viv.moid": "DB44EEBA-5306-434F-A333-974C964FCB46:vm-585428:yRZdyyfCgPj2S0qHodsTeh8A7rxbjFas9tNFyhHlFBI=",
            "vmci0.pciSlotNumber": "32",
            "vmotion.checkpointFBSize": "4194304",
            "vmotion.checkpointSVGAPrimarySize": "8388608",
            "vmware.tools.internalversion": "11269",
            "vmware.tools.requiredversion": "11333"
        },
        "annotation": "Created: 2021-11-10\nContact: doug@domain.example.com\nDepartment: HPC\nDescription: ansible debugging control pane node\nDeployed By: doug@domain.example.com",
        "current_snapshot": null,
        "customvalues": {},
        "guest_consolidation_needed": false,
        "guest_question": null,
        "guest_tools_status": "guestToolsRunning",
        "guest_tools_version": "11269",
        "hw_cluster": "vmware-cluster",
        "hw_cores_per_socket": 1,
        "hw_datastores": [

        "hw_name": "vm-ansibletest",
        "hw_power_status": "poweredOn",
        "hw_processor_count": 2,
        "hw_product_uuid": "4200391d-0510-8fbd-229c-fa9cffdb9d39",
        "hw_version": "vmx-14",
        "instance_uuid": "50009004-ed6f-cdc9-4a30-e35f758624da",
        "ipv4": null,
        "ipv6": null,
        "module_hw": true,
        "moid": "vm-585428",
        "snapshots": [],
        "tpm_info": {
            "provider_id": null,
            "tpm_present": false
        },
        "vimref": "vim.VirtualMachine:vm-585428",
        "vnc": {}
    },
    "invocation": {
        "module_args": {
            "advanced_settings": [],
            "annotation": "Created: 2021-11-10\nContact: doug@domain.example.com\nDepartment: HPC\nDescription: ansible debugging control pane node\nDeployed By: doug@domain.example.com",
            "cdrom": [],
            "cluster": "vmware-cluster",
            "convert": null,
            "customization": {
                "autologon": null,
                "autologoncount": null,
                "dns_servers": null,
                "dns_suffix": null,
                "domain": null,
                "domainadmin": null,
                "domainadminpassword": null,
                "existing_vm": null,
                "fullname": null,
                "hostname": null,
                "hwclockUTC": null,
                "joindomain": null,
                "joinworkgroup": null,
                "orgname": null,
                "password": null,
                "productid": null,
                "runonce": null,
                "timezone": null
            },
            "customization_spec": null,
            "customvalues": [],
            "datacenter": "HDC-Systems",
            "datastore": null,
            "delete_from_inventory": false,
            "disk": [
                {

                "memory_mb": 2048,
                "memory_reservation_lock": null,
                "nested_virt": null,
                "num_cpu_cores_per_socket": null,
                "num_cpus": 2,
                "scsi": null,
                "secure_boot": null,
                "version": null,
                "virt_based_security": null
            },
            "hostname": "vcenter02",
            "is_template": false,
            "linked_clone": false,
            "name": "vm-ansibletest",
            "name_match": "first",
            "networks": [
                {
                    "device_type": "vmxnet3",
                    "dvswitch_name": "dswitch",
                    "name": "network",
                    "start_connected": true,
                    "state": "new",
                    "type": "dhcp",
                    "vlan": 1024
                }
            ],
            "nvdimm": {
                "label": null,
                "size_mb": 1024,
                "state": null
            },
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "port": 443,
            "proxy_host": null,
            "proxy_port": null,
            "resource_pool": null,
            "snapshot_src": null,
            "state": "poweredon",
            "state_change_timeout": 0,
            "template": "vm-template",
            "use_instance_uuid": false,
            "username": "vcenteruser@domain.example.com",
            "uuid": null,
            "validate_certs": false,
            "vapp_properties": [],
            "wait_for_customization": false,
            "wait_for_customization_timeout": 3600,
            "wait_for_ip_address": true,
            "wait_for_ip_address_timeout": 30
        }
    }
}
ifelsefi commented 2 years ago

So I gathered network data by running vmware_guest_network with gather_network_info: true, using the same key values I set for attaching the NIC with vmware_guest:

    },
    "network_info": [
        {
            "allow_guest_ctl": true,
            "connected": false,
            "device_type": "vmxnet3",
            "label": "Network adapter 1",
            "mac_addr": "00:50:56:80:f9:43",
            "mac_address": "00:50:56:80:f9:43",
            "name": "nic-i-configured",
            "network_name": "nic-i-configured",
            "start_connected": false,
            "switch": "dswitch-i-set",
            "unit_number": 7,
            "wake_onlan": true
        }
    ]
}

Notice connected: false and start_connected: false, which is the opposite of what I set. So are those being rejected by vCenter, given that I am sending them with Ansible?
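
For reference, the gather step looks roughly like this (a minimal sketch reusing the variable names from the playbook above; the register name is just illustrative):

- name: gather NIC info for the new VM
  community.vmware.vmware_guest_network:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: false
    datacenter: "{{ datacenter }}"
    name: "{{ inventory_hostname }}"
    gather_network_info: true
  register: nic_info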

BaryaPS commented 2 years ago

Have the same issue

ifelsefi commented 2 years ago

Hi @BaryaPS, what version of vCenter are you running? We did a minor update from 7.0U2c to 7.0U2d, so I am wondering if this could be related.

BaryaPS commented 2 years ago

We are using vCenter version 7.0.2 Build 18356314

Sispheor commented 2 years ago

This is not working for me either during VM creation. I get this error:

Customization of the guest operating system is not supported due to the given reason: Tools is not installed in the GuestOS. Please install the latest version of open-vm-tools or VMware Tools to enable GuestCustomization.

If I re-run the playbook, the VM is configured correctly.

vCenter Version: 7.0.3 Build: 18778458

anjia0532 commented 2 years ago

Install CentOS and install VMware Tools in a template, e.g. centos-7-2009-template.

Create the guest from centos-7-2009-template.

It works for me.

In my case, the CentOS image is CentOS-7-x86_64-Minimal-2009.iso:

  1. it is missing /usr/bin/perl, so you need to configure/restart the network and install perl first (yum install perl gcc make kernel-headers kernel-devel -y; see the task sketch after this list),
  2. mount the VMware Tools ISO,
  3. untar VMware Tools and run the installer: cd vmware-tools-distrib && ./vmware-install.pl,
  4. convert this guest VM to a VMware template,
  5. create guests from this VMware template.
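
If the template preparation is automated with Ansible, the package installation from step 1 could be expressed roughly like this (a sketch that assumes the template VM is reachable as an Ansible host; the package list is taken from the yum command above):

- name: install perl and build dependencies for the VMware Tools installer
  become: true
  yum:
    name:
      - perl
      - gcc
      - make
      - kernel-headers
      - kernel-devel
    state: present
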
Sispheor commented 2 years ago

Hi @anjia0532. Thanks for your answer.

In my case I work with RHCOS, which is stateless. I am not sure I can add a package to it.

And something is weird anyway: this happens during the copy of the template. The OS is not started yet, so VMware Tools is not running. How can a package installed inside the template change the behavior of the vCenter instantiation?

Sispheor commented 2 years ago

And VMware Tools seems to be present in RHCOS. Once the VM is deployed, I see this message:

Version Status:Guest Managed
The VMware Tools status is unknown. A VMware Tools implementation is installed on the guest operating system, but it is not managed by VMware.
Sispheor commented 2 years ago

This error seems to appear only when using a template. When I create an empty VM, it is fine.

anjia0532 commented 2 years ago

Ubuntu 20.04/18.04 LTS server is OK (using a template); CentOS-7-2009 minimal is not (using a template), but installing vmware-tools and recreating the template fixes it. @Sispheor

My environment: vCenter 7.0.3 (18778458), ESXi 7.0 Update 3, community.vmware 1.17.0, pyvmomi 7.0.3.

ansible [core 2.12.0]
  config file = None
  python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
  jinja version = 2.10.1
  libyaml = True

ikke-t commented 2 years ago

I experience this on RHEL 8.5. I create the VM using Packer, at which stage the network works fine. Packer saves it as a template; when I create a new VM from that template, the NIC always comes up disconnected. I need to manually edit the VM and press connect. After that the network works.

I have: open-vm-tools-11.2.5-2.el8.x86_64 vSphere Client version 7.0.3.00100

I create the template using this playbook: https://github.com/RedHatNordicsSA/cool-lab/blob/packer/build-rhel-template-packer-vmware.yml

which uses this Packer Ansible template: https://github.com/ikke-t/ansible-packer/blob/master/templates/build-vmware.json.j2

And I start the vm like this from the template: https://github.com/RedHatNordicsSA/cool-lab/blob/main/ensure-vm-state.yml

Which has:

        networks:
          - name: "{{ net_name }}"
            type: dhcp
            wait_for_ip_address_timeout: 60
            connected: true
            start_connected: true
ifelsefi commented 2 years ago

@ikke-t yes, that's the same issue I have and want to avoid. I do not want to manually edit the NIC and set it to connected, since that isn't sustainable in a large environment.

start_connected: true should take care of that. It used to work.
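
For what it's worth, the networks entry in vmware_guest also accepts a connected flag alongside start_connected; a sketch of the entry with both set (values copied from the original playbook, untested against this environment):

networks:
  - name: network
    device_type: vmxnet3
    dvswitch_name: dswitch
    vlan: 1024
    type: dhcp
    start_connected: true
    connected: true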

ikke-t commented 2 years ago

I found it was due to the VM missing perl. It is missing from the package dependencies and unfortunately won't get fixed. I wish this would make it into the module's documentation; it made me waste quite a few hours.

https://bugzilla.redhat.com/show_bug.cgi?id=2035202

Akasurde commented 2 years ago

@ikke-t This is already mentioned in the vmware_guest docs - https://github.com/ansible-collections/community.vmware/blob/c9c5bae2ecb616bb3c4e83cdded5be65ff0461e8/docs/community.vmware.vmware_guest_module.rst (see customization parameter docs)

ikke-t commented 2 years ago

I wasn't doing customizations, so I didn't notice that. I was reading the network parts, connected and start_connected. That part doesn't mention it, and the lack of perl directly causes problems there.

ikke-t commented 2 years ago

I don't know if I would have noticed it, but it might be good to have it early in the doc page next to requirements: "Network initialization and customization of Linux VM will fail unless either cloud-init or perl is installed into VM template."

Sispheor commented 2 years ago

I don't really understand why something is required inside the template. When we copy the template, whether VMware attaches a network should be completely independent of the network configuration inside the guest itself, shouldn't it?

There is something weird. On my side, I have to execute the playbook twice. The first time, the VM envelope is created, but the network device fails to be attached. The second time, the device is connected, and then I can start the VM.

As the VM is not started, there is no perl, vm-tools, or anything else involved in the process of just attaching the device. Or am I missing something?

Findarato commented 2 years ago

I wanted to add that I am having the same issue when cloning Windows 2019 templates.

mgaruccio commented 2 years ago

I'm having the same issue with a Server 2019 template. start_connected isn't being honored when creating the VM, so the network adapter does not start connected. This is unrelated to the perl issues mentioned earlier in the thread, since it's happening at the VMware level, pre-power-on.

I was able to work around the issue by removing the network adapter from my template, deploying the VM in a powered-off state, and adding the NIC using the community.vmware.vmware_guest_network module.
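
Roughly, that workaround looks like the following (a sketch only; credentials, datacenter, cluster, and network names are assumed from earlier in the thread and would need adjusting):

- name: clone the template without powering on (template has no NIC)
  community.vmware.vmware_guest:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: false
    datacenter: "{{ datacenter }}"
    cluster: "{{ vmware_cluster }}"
    folder: "{{ folder }}"
    name: "{{ inventory_hostname }}"
    template: vm-template
    state: poweredoff

- name: attach the NIC while the VM is still powered off
  community.vmware.vmware_guest_network:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: false
    datacenter: "{{ datacenter }}"
    name: "{{ inventory_hostname }}"
    network_name: network
    device_type: vmxnet3
    connected: true
    start_connected: true
    state: present

- name: power on the VM
  community.vmware.vmware_guest:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: false
    datacenter: "{{ datacenter }}"
    name: "{{ inventory_hostname }}"
    state: poweredon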

cooling75 commented 2 years ago

We have the same issue, and the workaround from @mgaruccio works for us.

IncredibleRichie commented 1 year ago

Same here, installing perl on the template solved the issue for me.

ifelsefi commented 1 year ago

Adding cloud-init and removing the NIC from the template fixed the issue. Thanks everyone!