vmware / terraform-provider-vcf

Terraform Provider for VMware Cloud Foundation
https://registry.terraform.io/providers/vmware/vcf/
Mozilla Public License 2.0

Distributed virtual switch v8.0.0 is not accepted #95

Closed ZsoltFejes closed 10 months ago

ZsoltFejes commented 11 months ago

Terraform

v1.5.2

Terraform Provider

v0.6.0

VMware Cloud Foundation

v5.1.0

Description

Hi,

I am working on deploying VCF 5.1 using the vcf_instance resource and came across the following error:

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: expected dv_switch_version to be one of ["7.0.0" "7.0.2" "7.0.3"], got 8.0.0
│ 
│   with module.tm-vcf.vcf_instance.sddc,
│   on ../../../modules/vcf-mgmt/vcf_instance.tf line 3, in resource "vcf_instance" "sddc":
│    3:   dv_switch_version              = var.domain.dv_switch_version

I checked an existing deployment we did using the Excel deployment workbook, and its distributed switches are on version 8.0.0.

Can version 8.0.0 be added as an accepted input?

Affected Resources or Data Sources

vcf_instance

Terraform Configuration

resource "vcf_instance" "sddc" {
  instance_id                    = var.domain.mgmt_domain_name
  dv_switch_version              = var.domain.dv_switch_version
  skip_esx_thumbprint_validation = true
  management_pool_name           = var.domain.np01_name
  ceip_enabled                   = true
  esx_license                    = var.esx_license_key
  task_name                      = "workflowconfig/workflowspec-ems.json"
  sddc_manager {
    ip_address = var.domain.sddc_manager_ip
    hostname   = var.domain.sddc_manager_host
    root_user_credentials {
      username = "root"
      password = var.sddc_manager_password
    }
    second_user_credentials {
      username = "vcf"
      password = var.sddc_manager_password
    }
  }
  ntp_servers = [
    var.domain.dns_server1,
    var.domain.dns_server2,
  ]
  dns {
    domain                = var.domain.dns_zone_name
    name_server           = var.domain.dns_server1
    secondary_name_server = var.domain.dns_server2
  }
  network {
    subnet         = var.domain.np01_mgmt_subnet
    vlan_id        = var.domain.np01_mgmt_vlanid
    mtu            = var.domain.np01_mgmt_mtu
    network_type   = "MANAGEMENT"
    gateway        = var.domain.np01_mgmt_gateway
    port_group_key = var.domain.np01_mgmt_port_group
  }
  network {
    subnet = var.domain.np01_vsan_subnet
    include_ip_address_ranges {
      start_ip_address = var.domain.np01_vsan_pool_start
      end_ip_address   = var.domain.np01_vsan_pool_end
    }
    vlan_id        = var.domain.np01_vsan_vlanid
    mtu            = var.domain.np01_vsan_mtu
    network_type   = "VSAN"
    gateway        = var.domain.np01_vsan_gateway
    port_group_key = var.domain.np01_vsan_port_group
  }
  network {
    subnet = var.domain.np01_vmotion_subnet
    include_ip_address_ranges {
      start_ip_address = var.domain.np01_vmotion_pool_start
      end_ip_address   = var.domain.np01_vmotion_pool_end
    }
    vlan_id        = var.domain.np01_vmotion_vlanid
    mtu            = var.domain.np01_vmotion_mtu
    network_type   = "VMOTION"
    gateway        = var.domain.np01_vmotion_gateway
    port_group_key = var.domain.np01_vmotion_port_group
  }
  nsx {
    nsx_manager_size  = var.domain.nsxt_form_factor
    transport_vlan_id = var.domain.tep01_vlan_id
    nsx_manager {
      hostname = var.domain.nsxt_a_dns_name
      ip       = var.domain.nsxt_a_ip_address
    }
    nsx_manager {
      hostname = var.domain.nsxt_b_dns_name
      ip       = var.domain.nsxt_b_ip_address
    }
    nsx_manager {
      hostname = var.domain.nsxt_c_dns_name
      ip       = var.domain.nsxt_c_ip_address
    }
    root_nsx_manager_password = var.nsx_manager_admin_password
    nsx_admin_password        = var.nsx_manager_admin_password
    nsx_audit_password        = var.nsx_manager_admin_password
    overlay_transport_zone {
      zone_name    = "overlay-tz"
      network_name = var.domain.tep01_name
    }
    vip      = var.domain.nsxt_vip_address
    vip_fqdn = var.domain.nsxt_vip_fqdn
    license  = var.nsx_license_key
    ip_address_pool {
      name = var.domain.tep01_name
      subnet {
        gateway = var.domain.tep01_gateway
        cidr    = var.domain.tep01_cidr
        ip_address_pool_range {
          start = var.domain.tep01_start
          end   = var.domain.tep01_end
        }
      }
    }
  }
  vsan {
    license        = var.vsan_license_key
    datastore_name = var.domain.vsan_name
  }
  dvs {
    mtu             = var.domain.vds01_mtu
    is_used_by_nsxt = true
    dvs_name        = var.domain.vds01_name
    vmnics          = var.domain.vds01_vmnics
    networks        = var.domain.vds01_networks
  }
  dvs {
    mtu      = var.domain.vds02_mtu
    dvs_name = var.domain.vds02_name
    vmnics   = var.domain.vds02_vmnics
    networks = var.domain.vds02_networks
  }
  cluster {
    cluster_name     = var.domain.cluster_name
    cluster_evc_mode = var.domain.evc_mode
  }
  psc {
    psc_sso_domain          = "vsphere.local"
    admin_user_sso_password = var.esx_password
  }
  vcenter {
    vcenter_ip            = var.domain.vcenter_ip_address
    vcenter_hostname      = var.domain.vcenter_dns_name
    license               = var.vcenter_license_key
    root_vcenter_password = var.vcenter_root_password
    vm_size               = var.domain.vcenter_vm_size
    storage_size          = var.domain.vcenter_storage_size
  }
  host {
    credentials {
      username = "root"
      password = var.esx_password
    }
    ip_address_private {
      cidr       = var.domain.np01_mgmt_subnet
      ip_address = var.domain.hosts[0].ip
      gateway    = var.domain.np01_mgmt_gateway
    }
    hostname    = var.domain.hosts[0].fqdn
    vswitch     = "vSwitch0"
    association = var.domain.datacenter_name
  }
  host {
    credentials {
      username = "root"
      password = var.esx_password
    }
    ip_address_private {
      cidr       = var.domain.np01_mgmt_subnet
      ip_address = var.domain.hosts[1].ip
      gateway    = var.domain.np01_mgmt_gateway
    }
    hostname    = var.domain.hosts[1].fqdn
    vswitch     = "vSwitch0"
    association = var.domain.datacenter_name
  }
  host {
    credentials {
      username = "root"
      password = var.esx_password
    }
    ip_address_private {
      cidr       = var.domain.np01_mgmt_subnet
      ip_address = var.domain.hosts[2].ip
      gateway    = var.domain.np01_mgmt_gateway
    }
    hostname    = var.domain.hosts[2].fqdn
    vswitch     = "vSwitch0"
    association = var.domain.datacenter_name
  }
  host {
    credentials {
      username = "root"
      password = var.esx_password
    }
    ip_address_private {
      cidr       = var.domain.np01_mgmt_subnet
      ip_address = var.domain.hosts[3].ip
      gateway    = var.domain.np01_mgmt_gateway
    }
    hostname    = var.domain.hosts[3].fqdn
    vswitch     = "vSwitch0"
    association = var.domain.datacenter_name
  }
}

Debug Output

terraform plan -var-file secrets.tfvars
Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: expected dv_switch_version to be one of ["7.0.0" "7.0.2" "7.0.3"], got 8.0.0
│ 
│   with module.tm-vcf.vcf_instance.sddc,
│   on ../../../modules/vcf-mgmt/vcf_instance.tf line 3, in resource "vcf_instance" "sddc":
│    3:   dv_switch_version              = var.domain.dv_switch_version
│ 
╵

Panic Output

No response

Expected Behavior

Version 8.0.0 should be accepted, as it is a valid distributed switch version per the vSphere documentation: https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-networking/GUID-330A0689-574A-4589-9462-14CA03F3F2F4.html

Actual Behavior

Terraform rejects the value, saying that version 8.0.0 is not an accepted version.

Steps to Reproduce

Set dv_switch_version to "8.0.0" on the vcf_instance resource.
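
A minimal fragment that triggers the validation error looks like the following (a sketch only: the values are placeholders, and the many other required arguments of vcf_instance are omitted; the full working configuration is shown above). Only the dv_switch_version attribute matters for this error, and validation fails at plan time before any API call is made:

```hcl
resource "vcf_instance" "sddc" {
  # Placeholder value; any instance ID works for reproducing the error.
  instance_id       = "sddc-mgmt"

  # Rejected by provider v0.6.0 during `terraform plan`:
  # expected dv_switch_version to be one of ["7.0.0" "7.0.2" "7.0.3"]
  dv_switch_version = "8.0.0"

  # ... remaining required arguments omitted for brevity ...
}
```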

Environment Details

No response

Screenshots

No response

References

No response

dimitarproynov commented 11 months ago

v8.0.0 has been added as an acceptable input in the bring-up scenario. However, the provider was designed to support VCF 4.5.2, so there is no guarantee that bring-up on VCF 5.x will succeed.

tenthirtyam commented 10 months ago

Added in https://github.com/vmware/terraform-provider-vcf/pull/103.

github-actions[bot] commented 8 months ago

I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.