hashicorp / terraform-provider-vsphere

Terraform Provider for VMware vSphere
https://registry.terraform.io/providers/hashicorp/vsphere/
Mozilla Public License 2.0

Cloning VM with 15 disks fails - "Hard disk 15" too high (15) - maximum value is 14 with 1 SCSI controller(s) #2086

Open ravatheodor opened 10 months ago

ravatheodor commented 10 months ago


Terraform

v1.6.5

Terraform Provider

v2.6.1

VMware vSphere

v7.0.3

Description

In vSphere, SCSI unit 7 on every controller is reserved for the controller itself. A VM with more than 7 disks on a single controller therefore has disks assigned from 0:0 to 0:6 and from 0:8 to 0:15 (for SCSI controller 0), and the maximum of 15 disks per SCSI controller is still respected. The vCenter Server Appliance, for example, has 15 disks on SCSI controller 0. terraform plan fails because the provider validates against the highest disk unit number instead of counting the disks:

Error: disk.14: unit_number on disk "Hard disk 15" too high (15) - maximum value is 14 with 1 SCSI controller(s)
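To make the arithmetic concrete, here is a small standalone Go sketch (illustration only, not provider code): one controller holds 15 disks even though the highest valid unit number is 15, because unit 7 is skipped.

package main

import "fmt"

// Unit numbers on one SCSI controller run 0-15; unit 7 is reserved for the
// controller itself, so at most 15 disks fit on a single controller.
func validUnits() []int {
	var units []int
	for u := 0; u <= 15; u++ {
		if u == 7 { // reserved for the SCSI controller
			continue
		}
		units = append(units, u)
	}
	return units
}

func main() {
	units := validUnits()
	fmt.Println("disks per controller:", len(units))         // 15
	fmt.Println("highest unit number:", units[len(units)-1]) // 15, above the provider's cap of 14
}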

Affected Resources or Data Sources

resource vsphere_virtual_machine

Terraform Configuration

###
### Variables
####

variable "datacenter" {
  type        = string
  description = "Virtual datacenter name"
}

variable "vsphere_cluster" {
  type        = string
  description = "vSphere cluster name"
}

variable "vsphere_host" {
  type        = string
  description = "vSphere host name"
}

variable "datastore" {
  type        = string
  description = "datastore for the lab VMs"
}

variable "lab_id" {
  type        = string
  description = "unique ID for the current lab"
}

variable "lab_folder" {
  type        = string
  description = "folder where to build"
}

variable "isolated_portgroup" {
  type        = string
  description = "isolated portgroup"
}

variable "lin_vm_map" {
  description = "map of Linux VMs"
  type = list(object({
    template_name = string
    vm_name       = string
    hostname      = string
    num_cpus      = number
    memory        = number
    guest_id      = string
  }))
}

###
### read vSphere infra
####
data "vsphere_datacenter" "datacenter" {
  name = var.datacenter
}

data "vsphere_host" "vsphere_host" {
  name          = var.vsphere_host
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_network" "isolated_net" {
  name          = var.isolated_portgroup
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_virtual_machine" "lin_vm_template_list" {
  for_each      = { for vm in var.lin_vm_map : vm.vm_name => vm }
  name          = each.value["template_name"]
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

### clone VM
resource "vsphere_virtual_machine" "lin_vm_from_template" {
  for_each = { for vm in var.lin_vm_map : vm.vm_name => vm }

  clone {
    template_uuid = data.vsphere_virtual_machine.lin_vm_template_list[each.value.vm_name].id
  }

  name             = "${each.value["vm_name"]}-${var.lab_id}"
  folder           = data.vsphere_folder.lab_folder.path
  resource_pool_id = data.vsphere_host.vsphere_host.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id
  host_system_id   = data.vsphere_host.vsphere_host.id

  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0
  guest_id                   = each.value["guest_id"]

  num_cpus = each.value["num_cpus"]
  memory   = each.value["memory"]

  firmware = data.vsphere_virtual_machine.lin_vm_template_list[each.value.vm_name].firmware

  enable_disk_uuid = true

  network_interface {
    network_id   = data.vsphere_network.isolated_net.id
    adapter_type = data.vsphere_virtual_machine.lin_vm_template_list[each.value.vm_name].network_interface_types[0]
  }

  scsi_type = data.vsphere_virtual_machine.lin_vm_template_list[each.value.vm_name].scsi_type

  dynamic "disk" {
    for_each = data.vsphere_virtual_machine.lin_vm_template_list[each.value.vm_name].disks

    content {
      label            = disk.value.label
      unit_number      = disk.value.unit_number
      size             = disk.value.size
      eagerly_scrub    = disk.value.eagerly_scrub
      thin_provisioned = disk.value.thin_provisioned
    }
  }

  lifecycle {
    ignore_changes = all
  }
}

Debug Output

https://gist.github.com/ravatheodor/e70b7865fbb9e6b21d25c7a38a16e457#file-gistfile1-txt

Panic Output

No response

Expected Behavior

terraform plan successful

Actual Behavior

terraform plan failed

Steps to Reproduce

1. Create an empty VM with 15 disks on SCSI controller 0.
2. Create a vsphere_virtual_machine resource that clones the VM using a dynamic disk block.
3. Run terraform plan.

Environment Details

No response

Screenshots

(screenshot: scsi-0-7-not-used)

References

No response

github-actions[bot] commented 10 months ago

Hello, ravatheodor! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

tenthirtyam commented 10 months ago

Could you please use the Markdown tools to format the example for readability? Thanks!

ravatheodor commented 10 months ago

Hope it helps now. It's basically a clone from a template, but done in a convoluted way.

peterbaumert commented 7 months ago

Hi, I am having the same issue now. Is there any news regarding this?

The issue lies in this line: https://github.com/hashicorp/terraform-provider-vsphere/blob/db8347d25cb30d13700b75f36ee23c564c1ec200/vsphere/internal/virtualdevice/virtual_machine_disk_subresource.go#L1623

Since the next line simply compares the current unit number against the maximum, this line should be adjusted to use 16 - 1 instead of 15 - 1.
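Paraphrasing the check in question (a simplified Go sketch, not the exact provider source), the cap works out to 14 for a single controller, so a valid disk at unit_number 15 is rejected:

package main

import "fmt"

// checkUnitNumber mimics the shape of the validation being discussed:
// a maximum unit is derived from the controller count and the disk's
// unit_number is simply compared against it.
func checkUnitNumber(unitNumber, ctlrCount int) error {
	maxUnit := ctlrCount*15 - 1 // current behaviour: 14 with one controller
	// the suggestion above is effectively: maxUnit := ctlrCount*16 - 1
	if unitNumber > maxUnit {
		return fmt.Errorf("unit_number %d too high - maximum value is %d with %d SCSI controller(s)",
			unitNumber, maxUnit, ctlrCount)
	}
	return nil
}

func main() {
	fmt.Println(checkUnitNumber(15, 1)) // reproduces the error from the report
}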

tenthirtyam commented 3 months ago

Working on this one, it's a little more complicated than just changing maxUnit := ctlrCount*15 - 1 to maxUnit := ctlrCount*16 - 1.

I've got a local branch that has changes to account for 0:7 being reserved in the range and throwing an error if unit_number == 7.

I have the code working, but during the provision it errors on the clone - just need to work past that next.
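For anyone following along, such a validation might look roughly like the Go sketch below. This is an illustration only, assuming unit_number continues sequentially across controllers with slot 7 reserved on every bus; the actual branch referenced above may differ.

package main

import "fmt"

// validateUnitNumber allows the full 0-15 range per controller but rejects
// the reserved slot (unit 7 on each bus). Assumes unit_number keeps counting
// across additional controllers, so 7, 23, 39, ... are reserved.
func validateUnitNumber(unitNumber, ctlrCount int) error {
	maxUnit := ctlrCount*16 - 1 // 15 with one controller
	if unitNumber > maxUnit {
		return fmt.Errorf("unit_number %d too high - maximum value is %d with %d SCSI controller(s)",
			unitNumber, maxUnit, ctlrCount)
	}
	if unitNumber%16 == 7 {
		return fmt.Errorf("unit_number %d is reserved for the SCSI controller", unitNumber)
	}
	return nil
}

func main() {
	for _, u := range []int{6, 7, 15, 16} {
		fmt.Printf("unit %d: %v\n", u, validateUnitNumber(u, 1))
	}
}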