hashicorp / terraform-provider-vsphere

Terraform Provider for VMware vSphere
https://registry.terraform.io/providers/hashicorp/vsphere/
Mozilla Public License 2.0

Unable to clone template with multiple SCSI controllers and disks #1476

Open sachittiwari opened 3 years ago

sachittiwari commented 3 years ago

Terraform Version

Terraform 0.13.5 and Terraform 1.0.7

vSphere Provider Version

1.12

Affected Resource(s)

vsphere_virtual_machine

Terraform Configuration Files

provider "vsphere" {
  version        = "1.12"
  user           = var.vsphere_user
  password       = var.vsphere_password
  vsphere_server = var.vsphere_server

  # If you have a self-signed cert
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = var.datacenter_name
}

data "vsphere_datastore" "datastore" {
  name          = var.datastore_name
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_compute_cluster" "compute_cluster" {
  name          = var.cluster_name
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
  name          = var.network_name
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  name          = var.template_name
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "vm" {
  name             = var.vm_name
  resource_pool_id = data.vsphere_compute_cluster.compute_cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id

  num_cpus = var.cpus
  memory   = var.mem
  guest_id = var.guest
  annotation            = var.notes
  scsi_type             = data.vsphere_virtual_machine.template.scsi_type
  scsi_controller_count = "4"

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  disk {
    label       = "disk1"
    size        = "128"
    unit_number = 15
  }

  disk {
    label       = "disk2"
    size        = "128"
    unit_number = 16
  }

  disk {
    label       = "disk3"
    size        = "128"
    unit_number = 17
  }

  disk {
    label       = "disk4"
    size        = "128"
    unit_number = 18
  }

  disk {
    label       = "disk5"
    size        = "128"
    unit_number = 30
  }

  disk {
    label       = "disk6"
    size        = "128"
    unit_number = 33
  }

  disk {
    label       = "disk7"
    size        = "128"
    unit_number = 31
  }

  disk {
    label       = "disk8"
    size        = "128"
    unit_number = 32
  }

  disk {
    label       = "disk9"
    size        = "128"
    unit_number = 45
  }

  disk {
    label       = "disk10"
    size        = "128"
    unit_number = 46
  }

  disk {
    label       = "disk11"
    size        = "128"
    unit_number = 47
  }

  disk {
    label       = "disk12"
    size        = "128"
    unit_number = 48
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      linux_options {
        host_name = var.vm_name
        domain    = var.domain_name
      }
      network_interface {
        ipv4_address = var.ipaddress
        ipv4_netmask = var.netmask
      }

      ipv4_gateway = var.gateway
    }
  }
}

Debug Output

https://gist.github.com/sachittiwari/a1142ddfd4a3d03034f9ca14f857c023

Expected Behavior

A new VM should be created from the template.

Actual Behavior

```
Error: error reconfiguring virtual machine: error processing disk changes post-clone: disk.2: cannot assign disk: unit number 1 on SCSI bus 1 is in use
│
│   with vsphere_virtual_machine.vm,
│   on main.tf line 49, in resource "vsphere_virtual_machine" "vm":
│   49: resource "vsphere_virtual_machine" "vm" {
```

Steps to Reproduce

1. Create a vSphere template with 4 SCSI controllers and 13 disks at the following bus:unit positions: 0:0, 1:0, 1:1, 1:2, 1:3, 2:0, 2:1, 2:2, 2:3, 3:0, 3:1, 3:2, 3:3.
2. Run `terraform apply`.
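For context, the provider documentation describes `unit_number` as implicitly selecting the controller: each SCSI controller exposes 15 usable slots, so values 0-14 land on the first controller, 15-29 on the second, and so on. A minimal sketch of that mapping (an illustration of the documented convention, not provider code) shows why a disk with `unit_number = 16` targets slot 1:1, the position the error above reports as already in use:

```python
# Illustrative only: the unit_number -> (bus, unit) convention described in
# the vsphere provider docs, where each SCSI controller has 15 usable slots.
DISKS_PER_CONTROLLER = 15

def bus_and_unit(unit_number: int) -> tuple[int, int]:
    """Return (scsi_bus, unit_on_bus) for a configured unit_number."""
    return (unit_number // DISKS_PER_CONTROLLER,
            unit_number % DISKS_PER_CONTROLLER)

# disk.2 in the failing config has unit_number 16, which targets SCSI
# bus 1, unit 1 -- the slot the error message says is in use.
print(bus_and_unit(16))  # prints (1, 1)
```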

Important Factoids

When I clone a VM from the vSphere GUI using the Linux customization spec, it works. Reproducing the same clone through Terraform does not. The template has 13 disks in it.

This issue looks related to disk sorting, which was reportedly fixed already, but I am still getting the same error. Please check the debug output: the sorting does not appear to be working.

References

https://github.com/hashicorp/terraform-provider-vsphere/issues/997

sachittiwari commented 3 years ago

This is a RHEL 8.2 VM that we are trying to clone.

tenthirtyam commented 2 years ago

Hi @sachittiwari, do you continue to have the same issue with the latest version of the provider - v2.0.2? You mentioned v1.12 was in use.

Ryan

dandunckelman commented 2 years ago

@tenthirtyam I'm testing on the latest version. If you specify a disk with its unit_number set to 16, 17, or 18, you get the error saying that a unit number on SCSI bus 1 is in use.

EDIT

This is the disk layout I'm trying to replicate to a new VM:

(screenshot: disk layout of the source VM template)

Tigershark2005-zz commented 2 years ago

While there is a controller_type parameter, I don't see any option to place a disk on a specific SCSI controller. I'm guessing you are creating 4 SCSI controllers but no disks are being assigned to the second one; maybe that's where the error for SCSI bus 1 comes from. I'd be curious whether, if you had 4 disks and set controller_type = "scsi" for each, they would end up at 0:0 through 0:3, or at 0:0 through 3:0 as the first and only disk on each SCSI bus. Your Steps to Reproduce say you're putting disks on those SCSI controllers, but nowhere in the code is that mapping specified.
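If the provider follows its documented `unit_number` convention (15 usable slots per controller), the configuration above does implicitly map every disk to a controller. A quick sketch (illustrative only, not provider code) of where the issue's configured `unit_number` values would land:

```python
# Illustrative only: group the unit_number values from the issue's config
# by SCSI bus, assuming 15 usable slots per controller as described in
# the provider docs (15-29 -> second controller, 30-44 -> third, etc.).
from collections import defaultdict

configured_unit_numbers = [15, 16, 17, 18, 30, 33, 31, 32, 45, 46, 47, 48]

disks_by_bus = defaultdict(list)
for unit_number in configured_unit_numbers:
    bus, unit = divmod(unit_number, 15)
    disks_by_bus[bus].append(f"{bus}:{unit}")

# Buses 1-3 each receive four disks, mirroring the template layout in
# Steps to Reproduce; bus 0 is left to the template's root disk.
for bus in sorted(disks_by_bus):
    print(f"SCSI bus {bus}: {', '.join(disks_by_bus[bus])}")
```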

dandunckelman commented 2 years ago

This is my terraform file's disk section:

Click to show the Terraform disk layout

```json
"disk": [
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.0.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_0.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.0.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.0.thin_provisioned}", "unit_number": "0" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.1.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_1.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.1.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.1.thin_provisioned}", "unit_number": "19" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.2.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_2.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.2.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.2.thin_provisioned}", "unit_number": "20" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.3.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_3.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.3.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.3.thin_provisioned}", "unit_number": "21" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.4.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_4.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.4.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.4.thin_provisioned}", "unit_number": "22" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.5.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_5.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.5.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.5.thin_provisioned}", "unit_number": "30" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.6.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_6.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.6.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.6.thin_provisioned}", "unit_number": "31" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.7.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_7.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.7.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.7.thin_provisioned}", "unit_number": "32" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.8.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_8.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.8.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.8.thin_provisioned}", "unit_number": "33" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.9.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_9.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.9.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.9.thin_provisioned}", "unit_number": "34" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.10.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_10.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.10.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.10.thin_provisioned}", "unit_number": "35" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.11.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_11.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.11.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.11.thin_provisioned}", "unit_number": "36" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.12.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_12.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.12.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.12.thin_provisioned}", "unit_number": "38" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.13.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_13.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.13.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.13.thin_provisioned}", "unit_number": "39" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.14.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_14.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.14.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.14.thin_provisioned}", "unit_number": "40" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.15.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_15.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.15.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.15.thin_provisioned}", "unit_number": "41" },
  { "eagerly_scrub": "${data.vsphere_virtual_machine.alletra_template.disks.16.eagerly_scrub}", "label": "HPES-5000026-verify_ovas-alletra_16.vmdk", "size": "${data.vsphere_virtual_machine.alletra_template.disks.16.size}", "thin_provisioned": "${data.vsphere_virtual_machine.alletra_template.disks.16.thin_provisioned}", "unit_number": "42" }
],
```

which successfully cloned and created this disk layout:

(screenshot: resulting disk layout on the cloned VM)

It's not exactly a matching layout to the VM template.

ghost commented 1 year ago

This is still a problem: even though the disks are in the correct order in the Terraform configuration, they get randomly assigned if they go to the same controller.

For example:

```
disk0 > 0:0 > Hard Disk 1
disk1 > 1:0 > Hard Disk 5 (swapped with Hard Disk 2 because they both go onto the same controller)
disk2 > 2:0 > Hard Disk 3
disk3 > 3:0 > Hard Disk 4
disk4 > 1:1 > Hard Disk 2
```