hashicorp / terraform-provider-vsphere

Terraform Provider for VMware vSphere
https://registry.terraform.io/providers/hashicorp/vsphere/
Mozilla Public License 2.0

Plan/Apply shows wrong disk values and creates clone with wrong disk values #2191

Open thesefer opened 2 months ago

thesefer commented 2 months ago


Terraform

Terraform v1.4.2

Terraform Provider

v2.0.2

VMware vSphere

8.0.2.00200

Description

The dynamic disk0 is wrongly created as thin, despite the configuration specifying eagerly_scrub: true, thin_provisioned: false. This causes consecutive runs to fail. The persistent / attached dynamic disk1 is shown in the plan with the wrong values (eagerly_scrub: false, thin_provisioned: true) but is actually created correctly as eagerZeroedThick.

The template was created with Packer and uses a thin disk. Creating the VM manually via "Deploy from Template -> VM Templates" and setting "Select virtual disk format" to e.g. "Thick Provision Eager Zeroed" creates the disk correctly.
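For reference, this is a minimal sketch of the static disk settings I would expect to yield a Thick Provision Eager Zeroed disk0 (the size here is just a placeholder, not my real value):

disk {
  label            = "disk0"
  size             = 40 // placeholder
  eagerly_scrub    = true
  thin_provisioned = false
}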

If the whole template is needed, please let me know.

Affected Resources or Data Sources

resource/vsphere_virtual_disk
resource/vsphere_virtual_machine

Terraform Configuration

resource "vsphere_virtual_disk" "PersistentDataDisk" {
  count                   = var.persistent_data_disk == true ? length(var.vms) : 0
  size                  = var.vms[count.index].flavor.disks[1].size
  datacenter            = var.datacenter
  vmdk_path             = "${var.vms[count.index].general.vm_name} - PersistentDisk/${var.vms[count.index].general.host_name}_persistent.vmdk"
  datastore             = var.vms[count.index].general.datastore
  type                  = var.vms[count.index].flavor.disk_type == "thin" ? "thin" : (var.vms[count.index].flavor.disk_type == "eagerZeroedThick" ? "eagerZeroedThick" : "lazy")
  create_directories    = true
}

resource "vsphere_virtual_machine" "vms" {
  depends_on       = [data.vsphere_virtual_machine.template, vsphere_virtual_disk.PersistentDataDisk]
  count              = length(var.vms)
  host_system_id   = data.vsphere_host.esxi_hosts[count.index].id
  name             = var.vms[count.index].general.vm_name
  resource_pool_id = data.vsphere_compute_cluster.compute_cluster[count.index].resource_pool_id
  datastore_id     = data.vsphere_datastore.datastores[count.index].id
  annotation         = var.vms[count.index].custom_attr.annotation
  firmware         = var.firmware
  folder           = var.vms[count.index].location.folder_name
  enable_disk_uuid = true

  custom_attributes = tomap({
    (data.vsphere_custom_attribute.admin_contact.id) = (var.vms[count.index].custom_attr.admin_contact)
    (data.vsphere_custom_attribute.owner_name.id) = (var.vms[count.index].custom_attr.owner_name)
  })

  num_cpus = var.vms[count.index].flavor.cpu
  memory   = var.vms[count.index].flavor.ram
  guest_id = var.vms[count.index].general.guest_id

  dynamic "network_interface" {
    for_each = data.vsphere_network.networks
    content {
      network_id = network_interface.value.id
    }
  }

  dynamic "disk" {
    for_each = [for i in var.vms[count.index].flavor.disks : {
      size   = i.size
      number = i.number
      attach = var.persistent_data_disk == true ? true : false
      path   = "${var.vms[count.index].general.vm_name} - PersistentDisk/${var.vms[count.index].general.host_name}_persistent.vmdk"
    }]
    content {
      label       = "disk${disk.value.number}"
      unit_number = disk.value.number
      size        = (disk.value.attach == true && disk.value.number != 1) || disk.value.attach == false ? disk.value.size : null
      // Thick Provision Lazy Zeroed  -> eagerly_scrub: false, thin_provisioned: false
      // Thick Provision Eager Zeroed -> eagerly_scrub: true,  thin_provisioned: false
      // Thin Provision               -> eagerly_scrub: false, thin_provisioned: true
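      // For the attached persistent disk (number == 1) size and the provisioning flags
      // are left null so the existing VMDK created by vsphere_virtual_disk is used as-is.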
      eagerly_scrub    = (disk.value.attach == true && disk.value.number != 1) || disk.value.attach == false ? (var.vms[count.index].flavor.disk_type == "thin" ? false : (var.vms[count.index].flavor.disk_type == "eagerZeroedThick" ? true : false)) : null
      thin_provisioned = (disk.value.attach == true && disk.value.number != 1) || disk.value.attach == false ? (var.vms[count.index].flavor.disk_type == "thin" ? true : (var.vms[count.index].flavor.disk_type == "eagerZeroedThick" ? false : false)) : null
      attach           = disk.value.attach == true && disk.value.number == 1 ? disk.value.attach : null
      path             = disk.value.attach == true && disk.value.number == 1 ? disk.value.path : null
      datastore_id     = disk.value.attach == true && disk.value.number == 1 ? data.vsphere_datastore.datastores[count.index].id : null
    }
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template[count.index].id

    customize {
      linux_options {
        host_name = var.vms[count.index].general.host_name
        domain    = var.vms[count.index].general.domain
      }

      dynamic "network_interface" {
        for_each = [for i in var.vms[count.index].network_interfaces : {
          ipv4_address = i.ipv4_address
          ipv4_netmask = i.ipv4_netmask
        }]
        content {
          ipv4_address = network_interface.value.ipv4_address
          ipv4_netmask = network_interface.value.ipv4_netmask
        }
      }

      dns_server_list = var.global_dns_server_list
      ipv4_gateway    = var.global_ipv4_gateway
    }
  }
}
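
As a side note, the nested conditionals for eagerly_scrub / thin_provisioned above could also be written with a lookup map in a locals block. This is only a sketch of the same mapping (the local name disk_flags is illustrative), not the configuration that produced the output below; unlike the inline conditionals, an unknown disk_type would raise an error instead of falling back to lazy:

locals {
  // Same mapping as the inline conditionals: flavor disk_type -> provider flags.
  disk_flags = {
    thin             = { eagerly_scrub = false, thin_provisioned = true }
    eagerZeroedThick = { eagerly_scrub = true, thin_provisioned = false }
    lazy             = { eagerly_scrub = false, thin_provisioned = false }
  }
}

// Usage inside the disk content block would then look like:
// eagerly_scrub    = local.disk_flags[var.vms[count.index].flavor.disk_type].eagerly_scrub
// thin_provisioned = local.disk_flags[var.vms[count.index].flavor.disk_type].thin_provisioned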

Debug Output

https://gist.github.com/thesefer/609fcb7334031c7d9d60699b5c1cf2c8

Panic Output

No response

Expected Behavior

disk0 correctly created as lazy, thin, or eagerZeroedThick, according to the configured disk_type

Actual Behavior

disk0 created as thin (displayed as eager in plan/apply)
disk1 created as eager (displayed as thin in plan/apply)

Steps to Reproduce

-

Environment Details

No response

Screenshots

No response

References

No response

github-actions[bot] commented 2 months ago

Hello, thesefer! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

thesefer commented 2 months ago

Found #2178, which is a duplicate of #2116, but based on their descriptions it looks like my problem extends the underlying issue.

tenthirtyam commented 2 weeks ago

Please verify with the latest release, v2.8.2, and let us know, since the reported version is rather old.

thesefer commented 1 week ago

Hi, I was a bit constrained for time last week.

I just ran the same config using v2.8.2:

thick provision: (screenshot omitted)

eagerZeroedThick provision: (screenshot omitted)

At least I can still destroy the resource without an error, but consecutive runs will fail because the state does not match the actual disks.