nutanix / terraform-provider-nutanix

Terraform Nutanix Provider
https://www.terraform.io/docs/providers/nutanix/
Mozilla Public License 2.0

Disks of virtual machine getting shuffled during plan #690

Open · fad3t opened this issue 2 months ago

fad3t commented 2 months ago

Nutanix Cluster Information

Prism Central: pc.2024.1.0.2
Cluster: 6.5.4.5 LTS

Terraform Version

Terraform v1.5.7

Affected Resource(s)

nutanix_virtual_machine

Description

We have an existing VM that was deployed with Terraform. When running a plan, Terraform reports changes to the disks: it appears to be reordering them, which shouldn't happen. Is there a clean way to manage VMs with multiple disks?

Terraform Configuration Files

This is a subset, as the nutanix_virtual_machine resource is embedded in a custom module.

Module code:

resource "nutanix_virtual_machine" "this" {
  name                 = var.name
  description          = var.description
  cluster_uuid         = data.nutanix_cluster.this.id
  num_vcpus_per_socket = var.core_vcpu
  num_sockets          = var.vcpu
  memory_size_mib      = var.memory_gb * 1024

  nic_list {
    # omitted for brevity
  }

  disk_list {
    disk_size_mib = var.disk_size_gb * 1024
    data_source_reference = {
      kind = "image"
      uuid = data.nutanix_image.this.id
    }
    device_properties {
      device_type = "DISK"
      disk_address = {
        device_index = 0
        adapter_type = "SCSI"
      }
    }
  }

  disk_list {
    data_source_reference = {}
    device_properties {
      device_type = "CDROM"
      disk_address = {
        device_index = "1"
        adapter_type = "SATA"
      }
    }
  }

  dynamic "disk_list" {
    for_each = var.additional_disk_compress_size_gb_list
    content {
      disk_size_mib = disk_list.value * 1024
      storage_config {
        storage_container_reference {
          kind = "storage_container"
          uuid = local.storage_compress_id[var.cluster_name]
        }
      }
    }
  }

  dynamic "disk_list" {
    for_each = var.additional_disk_fullperf_size_gb_list
    content {
      disk_size_mib = disk_list.value * 1024
      storage_config {
        storage_container_reference {
          kind = "storage_container"
          uuid = local.storage_fullperf_id[var.cluster_name]
        }
      }
    }
  }

  dynamic "categories" {
    for_each = local.category_all
    content {
      name  = categories.key
      value = categories.value
    }
  }

  nutanix_guest_tools = {
    state           = "ENABLED",
    iso_mount_state = "MOUNTED"
  }

  ngt_enabled_capability_list = [
    "SELF_SERVICE_RESTORE"
  ]

  lifecycle {
    create_before_destroy = true
    ignore_changes = [
      owner_reference,
      project_reference,
      guest_customization_cloud_init_user_data,
      categories,
      nutanix_guest_tools,
      disk_list.0.data_source_reference,
      ngt_enabled_capability_list
    ]
  }
}

VM instance:

module "testvm" {
  source  = "custom.registry/nutanix"
  version = "2.2.7"

  name         = "my-test-vm"
  description  = "Test VM"
  vcpu         = 10
  memory_gb    = 64
  disk_size_gb = 70

  additional_disk_fullperf_size_gb_list = [4000, 550]
  subnet_name                           = "DEV"

  category = {
    env = "DEV"
  }
}

Debug Output

  ~ resource "nutanix_virtual_machine" "this" {
        id                                               = "ba0d011a-426a-49de-b27c-0502bf301681"
        name                                             = "my-test-vm"
        # (38 unchanged attributes hidden)

      ~ disk_list {
          ~ disk_size_mib          = 0 -> 71680
            # (4 unchanged attributes hidden)

          ~ device_properties {
              ~ device_type  = "CDROM" -> "DISK"
              ~ disk_address = {
                  ~ "adapter_type" = "SATA" -> "SCSI"
                  ~ "device_index" = "1" -> "0"
                }
            }
        }
      ~ disk_list {
          ~ disk_size_mib          = 71680 -> 4096000
            # (4 unchanged attributes hidden)

          ~ storage_config {
              ~ storage_container_reference {
                    name = "SelfServiceContainer"
                  ~ uuid = "ea8f53b4-8f5b-48e4-be36-2f73cb3e80bc" -> "b4e820d5-5cd6-47f4-a95f-912627d1b01a"
                    # (1 unchanged attribute hidden)
                }
            }

            # (1 unchanged block hidden)
        }
      ~ disk_list {
          ~ disk_size_mib          = 4096000 -> 563200
            # (4 unchanged attributes hidden)

            # (2 unchanged blocks hidden)
        }
      ~ disk_list {
            # (5 unchanged attributes hidden)

          ~ device_properties {
              ~ device_type  = "DISK" -> "CDROM"
              ~ disk_address = {
                  ~ "adapter_type" = "SCSI" -> "SATA"
                  ~ "device_index" = "2" -> "1"
                }
            }

            # (1 unchanged block hidden)
        }

        # (8 unchanged blocks hidden)
    }

Expected Behavior

No change.

Actual Behavior

The plan reports changes to the disk layout: every disk's attributes appear shifted by one position, as if the disks had been reordered.

davhdavh commented 3 weeks ago

This is because the provider always implicitly adds a CD-ROM drive at index 0. It would be better if that were handled in the API call rather than in the Terraform config, so that it becomes part of the state. Or at least it would help if you could actually use ignore_changes on the disk_list...

  lifecycle {
    ignore_changes = [
      guest_customization_cloud_init_user_data,
      disk_list,
      disk_list[0],
      disk_list[1],
    ]
  }

STILL gives:

  ~ update in-place

Terraform will perform the following actions:

  # nutanix_virtual_machine.windows[2] will be updated in-place
  ~ resource "nutanix_virtual_machine" "windows" {
        id   = "blabla"
        name = "k8swindows3"
        # (35 unchanged attributes hidden)

      - disk_list {
          - data_source_reference  = {} -> null
          - disk_size_bytes        = 389120 -> null
          - disk_size_mib          = 1 -> null
          - uuid                   = "xxx" -> null
          - volume_group_reference = {} -> null

          - device_properties {
              - device_type  = "CDROM" -> null
              - disk_address = {
                  - "adapter_type" = "IDE"
                  - "device_index" = "0"
                } -> null
            }

          - storage_config {
              - storage_container_reference {
                  - kind = "storage_container" -> null
                  - name = "SelfServiceContainer" -> null
                  - uuid = "xxx" -> null
                }
            }
        }
    }
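
If the implicit CD-ROM is really the cause, one possible workaround (a sketch only, not a confirmed fix) is to declare that CD-ROM explicitly as the first disk_list block, at the IDE index 0 the plan output above reports, so the configuration order matches what the provider reads back:

# Sketch, assuming the implicit CD-ROM lands at IDE index 0 as shown
# in the plan above; the resource name and other attributes are the
# ones already used in this thread.
resource "nutanix_virtual_machine" "windows" {
  # ... name, cluster_uuid, CPU and memory attributes as before ...

  # Declare the otherwise-implicit CD-ROM as the first disk_list block
  # so its position in the config matches its position in the state.
  disk_list {
    data_source_reference = {}
    device_properties {
      device_type = "CDROM"
      disk_address = {
        device_index = 0
        adapter_type = "IDE"
      }
    }
  }

  # ... the real data disks follow, each with an explicit SCSI index ...
}

Whether this keeps the plan clean depends on how the provider orders disk_list on read; if it still diffs, the remaining lever would be ignore_changes on the whole disk_list attribute, which as shown above does not currently work either.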