hashicorp / terraform-provider-vsphere

Terraform Provider for VMware vSphere
https://registry.terraform.io/providers/hashicorp/vsphere/
Mozilla Public License 2.0

Endless loop modifying `r/vsphere_distributed_virtual_switch` when `devices` are not in numerical order #1693

Open · cmanzur opened this issue 2 years ago

cmanzur commented 2 years ago


Terraform: v1.1.7

Terraform Provider: 2.2.0

VMware vSphere: 6.7

Description

I need a distributed_virtual_switch with:

resource "vsphere_distributed_virtual_switch" "switch_vms" {
  name                             = "DSwitch-environments"
  datacenter_id                    = var.vsphere_datacenter_id
  network_resource_control_enabled = true

  uplinks         = ["uplink1", "uplink2"]
  active_uplinks  = ["uplink1", "uplink2"]
  standby_uplinks = []

  dynamic "host" {
    for_each = var.vsphere_hosts
    content {
      devices        = ["vmnic10", "vmnic1"]
      host_system_id = host.value["id"]
    }
  }
}

resource "vsphere_hosts" "esxi" {
  count    = length(var.esxi_hosts)
  hostname = var.esxi_hosts[count.index]
  username = var.esxi_user
  password = var.esxi_password
  cluster  = data.vsphere_compute_cluster.pool.id
}

variable "esxi_hosts" {
  type = list(string)
  default = [
    "10.0.0.1", "10.0.0.2"
  ]
}

It works, but every terraform apply triggers an update even though there are no real changes.

terraform apply
...
Terraform will perform the following actions:

  # module.vmware_network.vsphere_distributed_virtual_switch.switch_vms will be updated in-place
  ~ resource "vsphere_distributed_virtual_switch" "switch_vms" {
        id                                = "50 31 b3 9f dd 9f bb 78-f6 14 a5 69 1d 9c 8d 9c"
        name                              = "DSwitch-environments"
        tags                              = []
        # (43 unchanged attributes hidden)

      - host {
          - devices        = [
              - "vmnic1",
              - "vmnic10",
            ] -> null
          - host_system_id = "host-28" -> null
        }
      - host {
          - devices        = [
              - "vmnic1",
              - "vmnic10",
            ] -> null
          - host_system_id = "host-4133" -> null
        }
      + host {
          + devices        = [
              + "vmnic10",
              + "vmnic1",
            ]
          + host_system_id = "host-28"
        }
      + host {
          + devices        = [
              + "vmnic10",
              + "vmnic1",
            ]
          + host_system_id = "host-4133"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"
Plan: 0 to add, 1 to change, 0 to destroy.
module.vmware_network.vsphere_distributed_virtual_switch.switch_vms: Modifying... [id=50 31 b3 9f dd 9f bb 78-f6 14 a5 69 1d 9c 8d 9c]
module.vmware_network.vsphere_distributed_virtual_switch.switch_vms: Modifications complete after 1s [id=50 31 b3 9f dd 9f bb 78-f6 14 a5 69 1d 9c 8d 9c]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

If I run terraform apply again, I get the same output: Plan: 0 to add, 1 to change, 0 to destroy.

Affected Resources

vsphere_distributed_virtual_switch

Terraform Configuration

No response

Debug Output

No response

Panic Output

No response

Expected Behavior

I expect: "No changes. Infrastructure up to date"

Actual Behavior

I always get: "Plan: 0 to add, 1 to change, 0 to destroy."

Steps to Reproduce

No response

Environment Details

No response

Screenshots

No response

References

No response

github-actions[bot] commented 2 years ago

Hello, cmanzur! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

tenthirtyam commented 2 years ago

Hi @cmanzur,

Do you get the same results if you replace the use of count with for_each in the vsphere_host resource block?

Ryan Johnson Staff II Solutions Architect | VMware, Inc.

cmanzur commented 2 years ago

Hello @tenthirtyam. I have tried with the dynamic block and also with two host blocks, specifying each host_system_id. Same result. I'll try today with for_each and will let you know if it works. Thanks!

cmanzur commented 2 years ago

I modified it to:

resource "vsphere_hosts" "esxi" {
  for_each    = toset(var.esxi_hosts)
  hostname = each.key
  username = var.esxi_user
  password = var.esxi_password
  cluster  = data.vsphere_compute_cluster.pool.id
}

and I get the same problem. Even if I swap the vmnic names in the array, the result is the same.

tenthirtyam commented 2 years ago

@cmanzur - please provide a redacted version of your configuration and plan for reproduction.

Ryan Johnson Senior Staff Solutions Architect | Product Engineering @ VMware, Inc.

tenthirtyam commented 2 years ago

Hi @cmanzur, 👋

There hasn't been an update with a redacted version of your configuration and plan for reproduction.

In an effort to test the scenario, I created the following for reproduction:

terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "2.2.0"
    }
  }
  required_version = ">= 1.2.5"
}

variable "vsphere_server" {
  default = "m01-vc01.rainpole.io"
}

variable "vsphere_user" {
  default = "administrator@vsphere.local"
}

variable "vsphere_password" {
  default = "VMware1!"
}

variable "hosts" {
  default = [
    "n-esxi01.rainpole.io",
    "n-esxi02.rainpole.io"
  ]
}

provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "datacenter" {
  name = "dc-01"
}

data "vsphere_compute_cluster" "cluster" {
  name = "cluster-01"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_host_thumbprint" "host" {
  count    = length(var.hosts)
  address  = var.hosts[count.index]
  insecure = true
}

data "vsphere_host" "host" {
  depends_on = [
    vsphere_host.host
  ]
  count         = length(var.hosts)
  name          = var.hosts[count.index]
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_host" "host" {
  count      = length(var.hosts)
  hostname   = var.hosts[count.index]
  username   = "root"
  password   = "VMware1!"
  thumbprint = data.vsphere_host_thumbprint.host[count.index].id
  cluster    = data.vsphere_compute_cluster.cluster.id
}

resource "vsphere_distributed_virtual_switch" "switch_vms" {
  name                             = "gh1693"
  datacenter_id                    = data.vsphere_datacenter.datacenter.id
  network_resource_control_enabled = true

  uplinks         = ["uplink1", "uplink2"]
  active_uplinks  = ["uplink1", "uplink2"]
  standby_uplinks = []

  dynamic "host" {
    for_each = var.hosts
    content {
      devices        = ["vmnic2", "vmnic3"]
      host_system_id = data.vsphere_host.host[host.key].id
    }
  }
}

Running this configuration where vmnic2 == uplink1 and vmnic3 == uplink2 works well.

❯ terraform apply -auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_host_thumbprint.host[1]: Reading...
data.vsphere_host_thumbprint.host[0]: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-99086]
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_host_thumbprint.host[0]: Read complete after 0s [id=A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97]
data.vsphere_host_thumbprint.host[1]: Read complete after 0s [id=58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27]
data.vsphere_compute_cluster.cluster: Read complete after 0s [id=domain-c99091]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.vsphere_host.host[0] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "vsphere_host" "host" {
      + datacenter_id    = "datacenter-99086"
      + id               = (known after apply)
      + name             = "n-esxi01.rainpole.io"
      + resource_pool_id = (known after apply)
    }

  # data.vsphere_host.host[1] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "vsphere_host" "host" {
      + datacenter_id    = "datacenter-99086"
      + id               = (known after apply)
      + name             = "n-esxi02.rainpole.io"
      + resource_pool_id = (known after apply)
    }

  # vsphere_distributed_virtual_switch.switch_vms will be created
  + resource "vsphere_distributed_virtual_switch" "switch_vms" {
      + active_uplinks                    = [
          + "uplink1",
          + "uplink2",
        ]
      + allow_forged_transmits            = (known after apply)
      + allow_mac_changes                 = (known after apply)
      + allow_promiscuous                 = (known after apply)
      + backupnfc_maximum_mbit            = (known after apply)
      + backupnfc_reservation_mbit        = (known after apply)
      + backupnfc_share_count             = (known after apply)
      + backupnfc_share_level             = (known after apply)
      + block_all_ports                   = (known after apply)
      + check_beacon                      = (known after apply)
      + config_version                    = (known after apply)
      + datacenter_id                     = "datacenter-99086"
      + directpath_gen2_allowed           = (known after apply)
      + egress_shaping_average_bandwidth  = (known after apply)
      + egress_shaping_burst_size         = (known after apply)
      + egress_shaping_enabled            = (known after apply)
      + egress_shaping_peak_bandwidth     = (known after apply)
      + failback                          = (known after apply)
      + faulttolerance_maximum_mbit       = (known after apply)
      + faulttolerance_reservation_mbit   = (known after apply)
      + faulttolerance_share_count        = (known after apply)
      + faulttolerance_share_level        = (known after apply)
      + hbr_maximum_mbit                  = (known after apply)
      + hbr_reservation_mbit              = (known after apply)
      + hbr_share_count                   = (known after apply)
      + hbr_share_level                   = (known after apply)
      + id                                = (known after apply)
      + ignore_other_pvlan_mappings       = false
      + ingress_shaping_average_bandwidth = (known after apply)
      + ingress_shaping_burst_size        = (known after apply)
      + ingress_shaping_enabled           = (known after apply)
      + ingress_shaping_peak_bandwidth    = (known after apply)
      + iscsi_maximum_mbit                = (known after apply)
      + iscsi_reservation_mbit            = (known after apply)
      + iscsi_share_count                 = (known after apply)
      + iscsi_share_level                 = (known after apply)
      + lacp_api_version                  = (known after apply)
      + lacp_enabled                      = (known after apply)
      + lacp_mode                         = (known after apply)
      + link_discovery_operation          = "listen"
      + link_discovery_protocol           = "cdp"
      + management_maximum_mbit           = (known after apply)
      + management_reservation_mbit       = (known after apply)
      + management_share_count            = (known after apply)
      + management_share_level            = (known after apply)
      + max_mtu                           = (known after apply)
      + multicast_filtering_mode          = (known after apply)
      + name                              = "gh1693"
      + netflow_active_flow_timeout       = 60
      + netflow_enabled                   = (known after apply)
      + netflow_idle_flow_timeout         = 15
      + network_resource_control_enabled  = true
      + network_resource_control_version  = (known after apply)
      + nfs_maximum_mbit                  = (known after apply)
      + nfs_reservation_mbit              = (known after apply)
      + nfs_share_count                   = (known after apply)
      + nfs_share_level                   = (known after apply)
      + notify_switches                   = (known after apply)
      + port_private_secondary_vlan_id    = (known after apply)
      + standby_uplinks                   = []
      + teaming_policy                    = (known after apply)
      + tx_uplink                         = (known after apply)
      + uplinks                           = [
          + "uplink1",
          + "uplink2",
        ]
      + vdp_maximum_mbit                  = (known after apply)
      + vdp_reservation_mbit              = (known after apply)
      + vdp_share_count                   = (known after apply)
      + vdp_share_level                   = (known after apply)
      + version                           = (known after apply)
      + virtualmachine_maximum_mbit       = (known after apply)
      + virtualmachine_reservation_mbit   = (known after apply)
      + virtualmachine_share_count        = (known after apply)
      + virtualmachine_share_level        = (known after apply)
      + vlan_id                           = (known after apply)
      + vmotion_maximum_mbit              = (known after apply)
      + vmotion_reservation_mbit          = (known after apply)
      + vmotion_share_count               = (known after apply)
      + vmotion_share_level               = (known after apply)
      + vsan_maximum_mbit                 = (known after apply)
      + vsan_reservation_mbit             = (known after apply)
      + vsan_share_count                  = (known after apply)
      + vsan_share_level                  = (known after apply)

      + host {
          + devices        = [
              + "vmnic2",
              + "vmnic3",
            ]
          + host_system_id = (known after apply)
        }
      + host {
          + devices        = [
              + "vmnic2",
              + "vmnic3",
            ]
          + host_system_id = (known after apply)
        }

      + vlan_range {
          + max_vlan = (known after apply)
          + min_vlan = (known after apply)
        }
    }

  # vsphere_host.host[0] will be created
  + resource "vsphere_host" "host" {
      + cluster     = "domain-c99091"
      + connected   = true
      + force       = false
      + hostname    = "n-esxi01.rainpole.io"
      + id          = (known after apply)
      + lockdown    = "disabled"
      + maintenance = false
      + password    = (sensitive value)
      + thumbprint  = "A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97"
      + username    = "root"
    }

  # vsphere_host.host[1] will be created
  + resource "vsphere_host" "host" {
      + cluster     = "domain-c99091"
      + connected   = true
      + force       = false
      + hostname    = "n-esxi02.rainpole.io"
      + id          = (known after apply)
      + lockdown    = "disabled"
      + maintenance = false
      + password    = (sensitive value)
      + thumbprint  = "58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27"
      + username    = "root"
    }

Plan: 3 to add, 0 to change, 0 to destroy.
vsphere_host.host[1]: Creating...
vsphere_host.host[0]: Creating...
vsphere_host.host[0]: Still creating... [10s elapsed]
vsphere_host.host[1]: Still creating... [10s elapsed]
vsphere_host.host[0]: Creation complete after 12s [id=host-102041]
vsphere_host.host[1]: Creation complete after 13s [id=host-102040]
data.vsphere_host.host[0]: Reading...
data.vsphere_host.host[1]: Reading...
data.vsphere_host.host[1]: Read complete after 0s [id=host-102040]
data.vsphere_host.host[0]: Read complete after 0s [id=host-102041]
vsphere_distributed_virtual_switch.switch_vms: Creating...
vsphere_distributed_virtual_switch.switch_vms: Creation complete after 0s [id=50 02 19 93 66 a1 d8 77-84 07 d3 b1 33 53 56 41]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
❯ terraform apply -auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_host_thumbprint.host[0]: Reading...
data.vsphere_host_thumbprint.host[1]: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-99086]
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_host_thumbprint.host[1]: Read complete after 0s [id=58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27]
data.vsphere_host_thumbprint.host[0]: Read complete after 0s [id=A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97]
data.vsphere_compute_cluster.cluster: Read complete after 0s [id=domain-c99091]
vsphere_host.host[0]: Refreshing state... [id=host-102041]
vsphere_host.host[1]: Refreshing state... [id=host-102040]
data.vsphere_host.host[0]: Reading...
data.vsphere_host.host[1]: Reading...
data.vsphere_host.host[1]: Read complete after 0s [id=host-102040]
data.vsphere_host.host[0]: Read complete after 0s [id=host-102041]
vsphere_distributed_virtual_switch.switch_vms: Refreshing state... [id=50 02 19 93 66 a1 d8 77-84 07 d3 b1 33 53 56 41]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no
changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
❯ terraform apply -auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_host_thumbprint.host[1]: Reading...
data.vsphere_host_thumbprint.host[0]: Reading...
data.vsphere_datacenter.datacenter: Read complete after 1s [id=datacenter-99086]
data.vsphere_host_thumbprint.host[0]: Read complete after 1s [id=A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97]
data.vsphere_host_thumbprint.host[1]: Read complete after 1s [id=58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27]
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_compute_cluster.cluster: Read complete after 0s [id=domain-c99091]
vsphere_host.host[0]: Refreshing state... [id=host-102041]
vsphere_host.host[1]: Refreshing state... [id=host-102040]
data.vsphere_host.host[1]: Reading...
data.vsphere_host.host[0]: Reading...
data.vsphere_host.host[0]: Read complete after 0s [id=host-102041]
data.vsphere_host.host[1]: Read complete after 0s [id=host-102040]
vsphere_distributed_virtual_switch.switch_vms: Refreshing state... [id=50 02 19 93 66 a1 d8 77-84 07 d3 b1 33 53 56 41]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no
changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

However, if I destroy the resources and flip the device order in the configuration:

resource "vsphere_distributed_virtual_switch" "switch_vms" {
  # ... excluded for brevity ...
  dynamic "host" {
    for_each = var.hosts
    content {
      devices        = ["vmnic3", "vmnic2"]
      host_system_id = data.vsphere_host.host[host.key].id
    }
  }
  # ... excluded for brevity ...
}

Flipping the mapping from vmnic2 == uplink1 and vmnic3 == uplink2 to vmnic3 == uplink1 and vmnic2 == uplink2 reproduces the same error you saw.

❯ terraform apply -auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_host_thumbprint.host[0]: Reading...
data.vsphere_host_thumbprint.host[1]: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-99086]
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_host_thumbprint.host[0]: Read complete after 0s [id=A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97]
data.vsphere_host_thumbprint.host[1]: Read complete after 0s [id=58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27]
data.vsphere_compute_cluster.cluster: Read complete after 0s [id=domain-c99091]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.vsphere_host.host[0] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "vsphere_host" "host" {
      + datacenter_id    = "datacenter-99086"
      + id               = (known after apply)
      + name             = "n-esxi01.rainpole.io"
      + resource_pool_id = (known after apply)
    }

  # data.vsphere_host.host[1] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "vsphere_host" "host" {
      + datacenter_id    = "datacenter-99086"
      + id               = (known after apply)
      + name             = "n-esxi02.rainpole.io"
      + resource_pool_id = (known after apply)
    }

  # vsphere_distributed_virtual_switch.switch_vms will be created
  + resource "vsphere_distributed_virtual_switch" "switch_vms" {
      + active_uplinks                    = [
          + "uplink1",
          + "uplink2",
        ]
      + allow_forged_transmits            = (known after apply)
      + allow_mac_changes                 = (known after apply)
      + allow_promiscuous                 = (known after apply)
      + backupnfc_maximum_mbit            = (known after apply)
      + backupnfc_reservation_mbit        = (known after apply)
      + backupnfc_share_count             = (known after apply)
      + backupnfc_share_level             = (known after apply)
      + block_all_ports                   = (known after apply)
      + check_beacon                      = (known after apply)
      + config_version                    = (known after apply)
      + datacenter_id                     = "datacenter-99086"
      + directpath_gen2_allowed           = (known after apply)
      + egress_shaping_average_bandwidth  = (known after apply)
      + egress_shaping_burst_size         = (known after apply)
      + egress_shaping_enabled            = (known after apply)
      + egress_shaping_peak_bandwidth     = (known after apply)
      + failback                          = (known after apply)
      + faulttolerance_maximum_mbit       = (known after apply)
      + faulttolerance_reservation_mbit   = (known after apply)
      + faulttolerance_share_count        = (known after apply)
      + faulttolerance_share_level        = (known after apply)
      + hbr_maximum_mbit                  = (known after apply)
      + hbr_reservation_mbit              = (known after apply)
      + hbr_share_count                   = (known after apply)
      + hbr_share_level                   = (known after apply)
      + id                                = (known after apply)
      + ignore_other_pvlan_mappings       = false
      + ingress_shaping_average_bandwidth = (known after apply)
      + ingress_shaping_burst_size        = (known after apply)
      + ingress_shaping_enabled           = (known after apply)
      + ingress_shaping_peak_bandwidth    = (known after apply)
      + iscsi_maximum_mbit                = (known after apply)
      + iscsi_reservation_mbit            = (known after apply)
      + iscsi_share_count                 = (known after apply)
      + iscsi_share_level                 = (known after apply)
      + lacp_api_version                  = (known after apply)
      + lacp_enabled                      = (known after apply)
      + lacp_mode                         = (known after apply)
      + link_discovery_operation          = "listen"
      + link_discovery_protocol           = "cdp"
      + management_maximum_mbit           = (known after apply)
      + management_reservation_mbit       = (known after apply)
      + management_share_count            = (known after apply)
      + management_share_level            = (known after apply)
      + max_mtu                           = (known after apply)
      + multicast_filtering_mode          = (known after apply)
      + name                              = "gh1693"
      + netflow_active_flow_timeout       = 60
      + netflow_enabled                   = (known after apply)
      + netflow_idle_flow_timeout         = 15
      + network_resource_control_enabled  = true
      + network_resource_control_version  = (known after apply)
      + nfs_maximum_mbit                  = (known after apply)
      + nfs_reservation_mbit              = (known after apply)
      + nfs_share_count                   = (known after apply)
      + nfs_share_level                   = (known after apply)
      + notify_switches                   = (known after apply)
      + port_private_secondary_vlan_id    = (known after apply)
      + standby_uplinks                   = []
      + teaming_policy                    = (known after apply)
      + tx_uplink                         = (known after apply)
      + uplinks                           = [
          + "uplink1",
          + "uplink2",
        ]
      + vdp_maximum_mbit                  = (known after apply)
      + vdp_reservation_mbit              = (known after apply)
      + vdp_share_count                   = (known after apply)
      + vdp_share_level                   = (known after apply)
      + version                           = (known after apply)
      + virtualmachine_maximum_mbit       = (known after apply)
      + virtualmachine_reservation_mbit   = (known after apply)
      + virtualmachine_share_count        = (known after apply)
      + virtualmachine_share_level        = (known after apply)
      + vlan_id                           = (known after apply)
      + vmotion_maximum_mbit              = (known after apply)
      + vmotion_reservation_mbit          = (known after apply)
      + vmotion_share_count               = (known after apply)
      + vmotion_share_level               = (known after apply)
      + vsan_maximum_mbit                 = (known after apply)
      + vsan_reservation_mbit             = (known after apply)
      + vsan_share_count                  = (known after apply)
      + vsan_share_level                  = (known after apply)

      + host {
          + devices        = [
              + "vmnic3",
              + "vmnic2",
            ]
          + host_system_id = (known after apply)
        }
      + host {
          + devices        = [
              + "vmnic3",
              + "vmnic2",
            ]
          + host_system_id = (known after apply)
        }

      + vlan_range {
          + max_vlan = (known after apply)
          + min_vlan = (known after apply)
        }
    }

  # vsphere_host.host[0] will be created
  + resource "vsphere_host" "host" {
      + cluster     = "domain-c99091"
      + connected   = true
      + force       = false
      + hostname    = "n-esxi01.rainpole.io"
      + id          = (known after apply)
      + lockdown    = "disabled"
      + maintenance = false
      + password    = (sensitive value)
      + thumbprint  = "A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97"
      + username    = "root"
    }

  # vsphere_host.host[1] will be created
  + resource "vsphere_host" "host" {
      + cluster     = "domain-c99091"
      + connected   = true
      + force       = false
      + hostname    = "n-esxi02.rainpole.io"
      + id          = (known after apply)
      + lockdown    = "disabled"
      + maintenance = false
      + password    = (sensitive value)
      + thumbprint  = "58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27"
      + username    = "root"
    }

Plan: 3 to add, 0 to change, 0 to destroy.
vsphere_host.host[0]: Creating...
vsphere_host.host[1]: Creating...
vsphere_host.host[0]: Creation complete after 10s [id=host-102051]
vsphere_host.host[1]: Still creating... [10s elapsed]
vsphere_host.host[1]: Creation complete after 11s [id=host-102052]
data.vsphere_host.host[0]: Reading...
data.vsphere_host.host[1]: Reading...
data.vsphere_host.host[0]: Read complete after 0s [id=host-102051]
data.vsphere_host.host[1]: Read complete after 0s [id=host-102052]
vsphere_distributed_virtual_switch.switch_vms: Creating...
vsphere_distributed_virtual_switch.switch_vms: Creation complete after 1s [id=50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
❯ terraform apply -auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_host_thumbprint.host[1]: Reading...
data.vsphere_host_thumbprint.host[0]: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-99086]
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_host_thumbprint.host[0]: Read complete after 0s [id=A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97]
data.vsphere_host_thumbprint.host[1]: Read complete after 0s [id=58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27]
data.vsphere_compute_cluster.cluster: Read complete after 0s [id=domain-c99091]
vsphere_host.host[0]: Refreshing state... [id=host-102051]
vsphere_host.host[1]: Refreshing state... [id=host-102052]
data.vsphere_host.host[1]: Reading...
data.vsphere_host.host[0]: Reading...
data.vsphere_host.host[0]: Read complete after 0s [id=host-102051]
data.vsphere_host.host[1]: Read complete after 0s [id=host-102052]
vsphere_distributed_virtual_switch.switch_vms: Refreshing state... [id=50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_distributed_virtual_switch.switch_vms will be updated in-place
  ~ resource "vsphere_distributed_virtual_switch" "switch_vms" {
        id                                = "50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54"
        name                              = "gh1693"
        tags                              = []
        # (83 unchanged attributes hidden)

      - host {
          - devices        = [
              - "vmnic2",
              - "vmnic3",
            ] -> null
          - host_system_id = "host-102051" -> null
        }
      - host {
          - devices        = [
              - "vmnic2",
              - "vmnic3",
            ] -> null
          - host_system_id = "host-102052" -> null
        }
      + host {
          + devices        = [
              + "vmnic3",
              + "vmnic2",
            ]
          + host_system_id = "host-102051"
        }
      + host {
          + devices        = [
              + "vmnic3",
              + "vmnic2",
            ]
          + host_system_id = "host-102052"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
vsphere_distributed_virtual_switch.switch_vms: Modifying... [id=50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54]
vsphere_distributed_virtual_switch.switch_vms: Modifications complete after 0s [id=50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
❯ terraform apply -auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_host_thumbprint.host[1]: Reading...
data.vsphere_host_thumbprint.host[0]: Reading...
data.vsphere_host_thumbprint.host[0]: Read complete after 0s [id=A6:F5:F0:66:A8:5F:90:8C:B5:9F:D3:DC:82:4C:6A:21:A3:75:9F:97]
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-99086]
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_host_thumbprint.host[1]: Read complete after 0s [id=58:FD:B4:E6:06:F1:A7:F3:E2:A7:C3:C9:F0:E7:F7:07:F2:34:21:27]
data.vsphere_compute_cluster.cluster: Read complete after 0s [id=domain-c99091]
vsphere_host.host[1]: Refreshing state... [id=host-102052]
vsphere_host.host[0]: Refreshing state... [id=host-102051]
data.vsphere_host.host[1]: Reading...
data.vsphere_host.host[0]: Reading...
data.vsphere_host.host[1]: Read complete after 1s [id=host-102052]
data.vsphere_host.host[0]: Read complete after 1s [id=host-102051]
vsphere_distributed_virtual_switch.switch_vms: Refreshing state... [id=50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_distributed_virtual_switch.switch_vms will be updated in-place
  ~ resource "vsphere_distributed_virtual_switch" "switch_vms" {
        id                                = "50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54"
        name                              = "gh1693"
        tags                              = []
        # (83 unchanged attributes hidden)

      - host {
          - devices        = [
              - "vmnic2",
              - "vmnic3",
            ] -> null
          - host_system_id = "host-102051" -> null
        }
      + host {
          + devices        = [
              + "vmnic3",
              + "vmnic2",
            ]
          + host_system_id = "host-102051"
        }

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
vsphere_distributed_virtual_switch.switch_vms: Modifying... [id=50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54]
vsphere_distributed_virtual_switch.switch_vms: Modifications complete after 0s [id=50 02 16 bb d6 71 39 31-8e 07 67 1e 31 09 2b 54]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The issue appears to be related to an expected ordering of the devices list and needs further investigation.

So, in your issue, one workaround would be to assign the devices in the order that the provider reads back into state, as sketched below.
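A minimal sketch, assuming the provider stores the devices in ascending name order (which the refreshed state in this thread suggests) and reusing the variable names from your original configuration; sort() is a Terraform built-in:

resource "vsphere_distributed_virtual_switch" "switch_vms" {
  # ... excluded for brevity ...
  dynamic "host" {
    for_each = var.vsphere_hosts
    content {
      # sort() returns ["vmnic1", "vmnic10"], which matches the order the
      # provider reads back into state, so repeated plans should show no diff.
      devices        = sort(["vmnic10", "vmnic1"])
      host_system_id = host.value["id"]
    }
  }
}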

Ryan Johnson Senior Staff Solutions Architect | Product Engineering @ VMware, Inc.

tenthirtyam commented 2 years ago

It's rather likely that the issue is related to the use of TypeList vs. TypeSet for devices and uplinks; changing the type, however, could introduce a breaking change to the provider.

https://github.com/hashicorp/terraform-provider-vsphere/blob/f25c8f297f410da9d225ff87ce7eaf146afb6d11/vsphere/distributed_virtual_switch_structure.go#L79-L84

https://github.com/hashicorp/terraform-provider-vsphere/blob/f25c8f297f410da9d225ff87ce7eaf146afb6d11/vsphere/distributed_virtual_switch_structure.go#L191-L197

It's a very subtle problem and often up for debate. The main concern is that users who previously relied on the attribute being a TypeList could reference elements by numeric index:

resource "thing" "ex" {
    name = something.typelist_attr.0.name
}

Lists are indexed by a number. Sets, on the other hand, are indexed by a hash value that Terraform calculates from the data of the element in the set:

resource "thing" "ex" {
    name = something.typeset_attr.876543.name
}

Users with the original list-based config would then see an error along the lines of no item in typelist_attr with key 0. Figuring out the hash value is nontrivial (the state file saves the data as a JSON array even for TypeSet, so the hash would need to be calculated manually), leaving users with no practical way forward by hash index; the closest approximation is sketched below.
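For completeness, a hypothetical sketch using the placeholder names from above: converting the set back to a list restores positional syntax, although the resulting order is Terraform's internal set ordering rather than the order the values were originally written in, so it is only a partial substitute.

resource "thing" "ex" {
  # tolist() is a Terraform built-in; element order follows the set's
  # internal ordering, not the original list order.
  name = tolist(something.typeset_attr)[0].name
}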

Conversely, we are relatively confident users almost never reference set items individually by the hash index, so we do consider TypeSet --> TypeList a non-breaking change.

For this issue, the workaround would be to assign the devices in the order the provider reads them back from vSphere; based on the refreshed state above, that appears to be ascending by name.
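A minimal example, reusing the host ID from the original plan output:

host {
  # Listing the NICs in ascending order keeps the configuration in agreement
  # with the refreshed state shown earlier in this thread.
  devices        = ["vmnic1", "vmnic10"]
  host_system_id = "host-28"
}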

cc @appilon

Ryan Johnson Senior Staff Solutions Architect | Product Engineering @ VMware, Inc.