hashicorp / terraform-provider-vsphere

Terraform Provider for VMware vSphere
https://registry.terraform.io/providers/hashicorp/vsphere/
Mozilla Public License 2.0

OVF template import fails when network is not a portgroup on standard switch #2103

Open msyretis opened 6 months ago

msyretis commented 6 months ago

Community Guidelines

Terraform

1.6.6

Terraform Provider

2.5.1 and 2.6.1

VMware vSphere

vCenter Server 8.0.2, ESXi 7.0.3

Description

For the data source "vsphere_ovf_vm_template" "nsx-ova", planning works as expected when ovf_network_map references a portgroup on a standard switch, but fails when it references a portgroup on a distributed virtual switch (DVS).

Affected Resources or Data Sources

fails:

data "vsphere_ovf_vm_template" "nsx-ova" {
  name              = "nsx-manager-ova"
  disk_provisioning = "thin"
  resource_pool_id  = data.vsphere_host.management_host.resource_pool_id
  datastore_id      = data.vsphere_datastore.datastore_template.id
  host_system_id    = data.vsphere_host.management_host.id
  remote_ovf_url    = var.ovf_path # remote_ovf_url is used when the path is an HTTP URL
  ovf_network_map = {
    "VM Network" : vsphere_distributed_port_group.gb1-sdn-management.id
  }
}

succeeds:

# data "vsphere_ovf_vm_template" "nsx-ova" {
#   name              = "nsx-manager-ova"
#   disk_provisioning = "thin"
#   resource_pool_id  = data.vsphere_host.template_host.resource_pool_id
#   datastore_id      = data.vsphere_datastore.datastore_template.id
#   host_system_id    = data.vsphere_host.template_host.id
#   remote_ovf_url    = var.ovf_path # remote is when the path is via http
#   ovf_network_map = {
#     "VM Network" : data.vsphere_network.template_network.id
#   }
# }
data "vsphere_network" "template_network" {
  name          = var.template_network
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_distributed_port_group" "gb1-sdn-management" {
  name                            = var.mgmt_network
  distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.vDS1.id
  auto_expand                     = false
  block_override_allowed          = true
  port_config_reset_at_disconnect = true
  type                            = "ephemeral"
  vlan_id                         = 2124
}
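
For reference, the datacenter and DVS lookups referenced above (not shown earlier) look roughly like this; var.datacenter and the switch name "vDS1" are placeholders for my environment:

data "vsphere_datacenter" "datacenter" {
  name = var.datacenter # placeholder variable
}

data "vsphere_distributed_virtual_switch" "vDS1" {
  name          = "vDS1" # placeholder DVS name
  datacenter_id = data.vsphere_datacenter.datacenter.id
}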

Terraform Configuration

Ultimately, after the OVA is described, three VMs are spawned:

## Deployment of VM from Remote OVF
resource "vsphere_virtual_machine" "nsxtmanager" {
  depends_on           = [vsphere_folder.nsxt_vm_folder]
  for_each             = { for name in var.nsx-managers : name.name => name }
  name                 = each.key
  datastore_id         = data.vsphere_datastore.datastores[each.value.datastore].id
  datacenter_id        = data.vsphere_datacenter.datacenter.id
  host_system_id       = data.vsphere_host.management_host.id
  resource_pool_id     = data.vsphere_host.management_host.resource_pool_id
  num_cpus             = data.vsphere_ovf_vm_template.nsx-ova.num_cpus
  num_cores_per_socket = data.vsphere_ovf_vm_template.nsx-ova.num_cores_per_socket
  memory               = data.vsphere_ovf_vm_template.nsx-ova.memory
  guest_id             = data.vsphere_ovf_vm_template.nsx-ova.guest_id
  scsi_type            = data.vsphere_ovf_vm_template.nsx-ova.scsi_type
  ept_rvi_mode         = "automatic"
  hv_mode              = "hvAuto"
  ####################################################################################
  # The NSX managers already have NTP defined.
  # If sync_time_with_host_periodically is set to false, you keep getting an
  # in-place change because the VM keeps falling back to true.
  # If it is set to true, the OVA deployment still seems to override it.
  # I ended up ignoring it in the lifecycle stanza instead.
  ####################################################################################
  sync_time_with_host_periodically = true
  folder                           = var.nsxt_vm_folder
  network_interface {
    network_id = vsphere_distributed_port_group.gb1-sdn-management.id
  }

  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0

  ovf_deploy {
    allow_unverified_ssl_cert = true
    remote_ovf_url            = var.ovf_path
    disk_provisioning         = data.vsphere_ovf_vm_template.nsx-ova.disk_provisioning
    ovf_network_map           = data.vsphere_ovf_vm_template.nsx-ova.ovf_network_map
    deployment_option         = var.deployment_option
  }

  vapp {
    properties = {
      nsx_allowSSHRootLogin  = each.value.nsx_allowSSHRootLogin
      nsx_passwd_0           = var.nsx_password
      nsx_cli_audit_passwd_0 = var.nsx_password_audit
      nsx_cli_passwd_0       = var.nsx_password
      nsx_dns1_0             = each.value.nsx_dns1_0
      nsx_domain_0           = each.value.nsx_domain_0
      nsx_gateway_0          = each.value.nsx_gateway_0
      nsx_hostname           = each.value.nsx_hostname
      nsx_ip_0               = each.value.nsx_ip_0
      nsx_isSSHEnabled       = each.value.nsx_isSSHEnabled
      nsx_netmask_0          = each.value.nsx_netmask_0
      nsx_ntp_0              = each.value.nsx_ntp_0
      nsx_role               = each.value.nsx_role
    }
  }

  lifecycle {
    # ignore_changes = all
    ignore_changes = [
      disk,
      host_system_id,
      num_cores_per_socket,
      ept_rvi_mode,
      hv_mode,
      sync_time_with_host_periodically,
      ovf_deploy,
      vapp[0].properties["nsx_role"],
      vapp[0].properties["nsx_cli_audit_passwd_0"],
      vapp[0].properties["nsx_cli_passwd_0"],
      vapp[0].properties["nsx_passwd_0"],
    ]
  }
}

The problem is that none of the hosts in the management cluster have a standard switch. I tried splitting the template and the VMs across different clusters/hosts, but that fails again.
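
As a stopgap, one option might be to create an uplink-less dummy standard switch and portgroup on the template host, purely so the network map has a standard-switch-backed network to point at. This is a minimal sketch, not a verified fix; the names vSwitchDummy and dummy-template-network are placeholders:

resource "vsphere_host_virtual_switch" "dummy" {
  name           = "vSwitchDummy" # placeholder name
  host_system_id = data.vsphere_host.management_host.id

  # no physical uplinks; this switch exists only for the OVF network map
  network_adapters = []
  active_nics      = []
  standby_nics     = []
}

resource "vsphere_host_port_group" "dummy" {
  name                = "dummy-template-network" # placeholder name
  host_system_id      = data.vsphere_host.management_host.id
  virtual_switch_name = vsphere_host_virtual_switch.dummy.name
}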

Debug Output

Link to outputs:

https://gist.github.com/msyretis/1904b12f4655f43ac65493a91cb13679

Panic Output

No response

Expected Behavior

I would love to be able to have the template on a distributed switch, or at least have Terraform understand that the OVF/OVA template can be pulled to a different host than the one where the VM will eventually be deployed.

Actual Behavior

Fails with a different error depending on the scenario; see the outputs in the gist.

Steps to Reproduce

Create a template and point the network map to a DVS-backed portgroup. Then split the template and the VM onto different hosts.

Environment Details

No response

Screenshots

No response

References

No response

github-actions[bot] commented 6 months ago

Hello, msyretis! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

msyretis commented 6 months ago

A second thing I just noticed: if you try to use different networks for the OVF template and the VM you spawn, the VM is deployed with the network the template was deployed with:

      + ovf_deploy {
          + allow_unverified_ssl_cert = true
          + deployment_option         = "large"
          + disk_provisioning         = "thin"
          + enable_hidden_properties  = false
          + ovf_network_map           = {
              + "VM Network" = "network-117418"
            }
          + remote_ovf_url            = "http://<somefqdn>/nsx-unified-appliance-4.1.2.1.0.22667794.ova"
        }
# data.vsphere_network.mgmt_network:
data "vsphere_network" "mgmt_network" {
    datacenter_id                   = "datacenter-21"
    distributed_virtual_switch_uuid = "50 09 09 87 b5 3b 3f 4a-61 fa 56 74 89 0b 1f bd"
    id                              = "dvportgroup-117407"
    name                            = "gb1-sdn-management"
    type                            = "DistributedVirtualPortgroup"
}

For the three VMs I am using a DVS portgroup:
  network_interface {
    network_id = data.vsphere_network.mgmt_network.id
  }

while for the OVF I have to use a dummy portgroup on a dummy standard vSwitch:

  ovf_network_map = {
    "VM Network" : data.vsphere_network.template_network.id
  }
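
Since the template's map carries over to the VM, a possible workaround (untested on my side, so treat it as an assumption) would be to build the ovf_network_map in the VM's ovf_deploy block directly against the DVS portgroup, instead of reusing the template's map:

  ovf_deploy {
    allow_unverified_ssl_cert = true
    remote_ovf_url            = var.ovf_path
    disk_provisioning         = data.vsphere_ovf_vm_template.nsx-ova.disk_provisioning
    deployment_option         = var.deployment_option
    # map the OVF's "VM Network" straight to the DVS portgroup the VM should
    # land on, rather than to the dummy standard-switch portgroup
    ovf_network_map = {
      "VM Network" = data.vsphere_network.mgmt_network.id
    }
  }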