ansible-collections / community.vmware

Ansible Collection for VMware
GNU General Public License v3.0

vmware_guest: MAC address conflict while doing the deployment from a template #1391

Open Udayendu opened 2 years ago

Udayendu commented 2 years ago
SUMMARY

vmware_guest is not able to handle MAC address assignment when deploying multiple VMs from the same template. It duplicates the MAC address, and as a result the VMs do not get IP addresses because the NICs are not attached.

ISSUE TYPE
COMPONENT NAME
ANSIBLE VERSION
$ pip3 show ansible
Name: ansible
Version: 5.8.0
Summary: Radically simple IT automation
Home-page: https://ansible.com/
Author: Ansible, Inc.
Author-email: info@ansible.com
License: GPLv3+
Location: /usr/local/lib/python3.8/dist-packages
Requires: ansible-core
Required-by:
COLLECTION VERSION
$ ansible-galaxy collection list community.vmware

# /usr/local/lib/python3.8/dist-packages/ansible_collections
Collection       Version
---------------- -------
community.vmware 1.18.0
CONFIGURATION
OS / ENVIRONMENT
STEPS TO REPRODUCE
- name: Gather info from WDC host
  vmware_guest_info:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    datacenter: '{{ vsphere_datacenter }}'
    name: '{{ wdc_vm_name }}'
    validate_certs: 'no'
  delegate_to: localhost
  register: wdc_info

- name: Deploying vm from '{{ win_temp }}'
  vmware_guest:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    datacenter: '{{ vsphere_datacenter }}'
    cluster: "{{ wdc_info['instance']['hw_cluster'] }}"
    datastore: "{{ wdc_info['instance']['hw_datastores'][0] }}"
    name: '{{ inventory_hostname }}'
    template: '{{ win_temp }}'
    folder: "{{ wdc_info['instance']['hw_folder'] }}"
    validate_certs: 'no'
    networks:
    - name: '{{ Mgmt_network }}'
      ip: "{{ Mgmt_network_ipv4 }}"
      netmask: '{{ Mgmt_network_nmv4 }}'
      gateway: '{{ Mgmt_network_gwv4 }}'
      dns_servers:
        - '{{ dns_server1 }}'
        - '{{ dns_server2 }}'
    state: poweredon
    wait_for_ip_address: yes
    customization:
      hostname: "{{ vsphere_vm_hostname }}"
      dns_suffix: '{{ ad_domain }}'
      domainadmin: '{{ ad_domain_admin }}'
      domainadminpassword: '{{ ad_domain_password }}'
      joindomain: '{{ ad_domain }}'
      timezone: '{{ timezone }}'
    wait_for_customization: yes
  delegate_to: localhost
EXPECTED RESULTS
ACTUAL RESULTS
Udayendu commented 2 years ago

@goneri @Akasurde @Tomorrow9

Any thoughts on this?

xmontagut commented 1 year ago

Same problem here when using customization against a Windows template.

Very annoying; we are unable to use Ansible to deploy Windows templates at the moment. This still seems to be the same issue as https://github.com/ansible/ansible/issues/64774? I will try the suggested patch.

xmontagut commented 1 year ago

The patch from ansible issue 64774 works: the new VM now gets a different MAC address, marked as "automatic" (not as manually assigned). I still have the issue of the NIC being disconnected at startup, but for that I can use vmware_guest_network afterwards.
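Reconnecting the NIC with vmware_guest_network, as mentioned above, could look roughly like this. This is a minimal sketch, not tested against this exact setup: the variable names follow the playbook earlier in the thread, and the `Network adapter 1` label is an assumption about which adapter the clone ends up with.

```yaml
- name: Reconnect the NIC that was left disconnected after cloning (sketch)
  community.vmware.vmware_guest_network:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    validate_certs: no
    name: '{{ inventory_hostname }}'
    label: 'Network adapter 1'   # assumption: the adapter label on the cloned VM
    connected: true              # connect the NIC without replacing it
    state: present
  delegate_to: localhost
```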

andyeff commented 1 year ago

Hello, I am running into this problem today and am having trouble finding a workaround with the available modules. I am targeting vCenter / ESXi 6.7.0.

I've tried using a customisation spec configured in vCenter to assign a DHCP address and then applying a static IP config afterwards, but this has also not worked.

With two VMs being created, one VM was given a new MAC address but its NIC was left disconnected at power-on. After connecting the NIC, the IP details specified in the config had not been applied.

The second VM retained the MAC address of the template it was cloned from. Its NIC showed as online, but no customisation was performed.

Does this affect newer vCenter versions - v7 or v8? The plan is to use v8 in production, but the available test rig is only at 6.7 for now.

edit: I am testing a possible workaround of limiting the vm-builder playbook with `serial: 1` so that only a single build is attempted at a time.
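The `serial: 1` workaround is a play-level setting. A minimal sketch of what that might look like, where the `new_vms` group and `deploy_vm.yml` task file are hypothetical placeholders for the clone task shown earlier in the thread:

```yaml
- name: Build VMs one at a time to avoid the duplicate-MAC race (workaround sketch)
  hosts: new_vms            # hypothetical inventory group of VMs to create
  gather_facts: false
  serial: 1                 # process one host per batch instead of in parallel
  tasks:
    - name: Clone VM from template
      ansible.builtin.import_tasks: deploy_vm.yml   # hypothetical file containing the vmware_guest task above
```

This trades deployment speed for correctness: each clone completes before the next starts, so the template's MAC is never handed to two in-flight clones at once.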

Zawullon commented 1 year ago

Hello. This bug does not depend on the version of vCenter. The only workaround I found was to patch the module and use it instead of the original one.

andyeff commented 1 year ago

Thanks Zawullon, I will give that a try.

I've so far spun up four VMs sequentially with the `serial: 1` playbook parameter. Fortunately this works for my requirement, as I do not need to build many VMs in one go and singular builds are fine, but I will try the patched module, since a parallel deployment would be preferable if I can get it working.

andyeff commented 1 year ago

I was able to create four VMs in one batch, with NIC customisations successfully applied, after making the changes from the patch referenced in the old ansible repository.

I've created a pull request linked to this issue to see if this helps implement the fix into the community modules here.

pingtouskar commented 1 year ago

Thanks @andyeff for the patch. Will test and update you.

For Windows templates: when I build the template, I do not add any NIC. In that case there is no issue when I map multiple NICs using Ansible.
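The NIC-less-template approach above implies attaching the adapters after the clone. A hedged sketch of adding a fresh NIC with vmware_guest_network, reusing the variable names from the reproduction playbook; the `vmxnet3` device type is an assumption (typical for Windows guests with VMware Tools installed):

```yaml
- name: Attach a new NIC to a VM cloned from a NIC-less template (sketch)
  community.vmware.vmware_guest_network:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    validate_certs: no
    name: '{{ inventory_hostname }}'
    network_name: '{{ Mgmt_network }}'
    device_type: vmxnet3     # assumption: paravirtual adapter for Windows guests
    connected: true
    state: present
  delegate_to: localhost
```

Since the adapter is created fresh on each VM rather than cloned, vSphere assigns each one its own automatic MAC address, which sidesteps the duplication entirely.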

Bytesalat commented 11 months ago

vSphere: 7.x ansible: 2.13.11 community.vmware: 3.9.0

We are also affected by this issue and would love to see this pull request merged, but the automated checks seem stuck rather than failed. Is there a way to get this rolling again?

t106362512 commented 7 months ago

Also affected by this issue

Waiting for #1716 to be merged.

chschenk commented 7 months ago

Hi,

we are also affected by this issue. It would be nice if https://github.com/ansible-collections/community.vmware/pull/1716 could be merged.

MugBuffalo commented 3 months ago

Hi,

we are also affected by this issue.

It would be awesome if the fix in https://github.com/ansible-collections/community.vmware/pull/1716 could be merged.