vmware / terraform-provider-vcd

Terraform VMware Cloud Director provider
https://www.terraform.io/docs/providers/vcd/
Mozilla Public License 2.0

Upgrade path from 3.11.0 or below to 3.12 or above not smooth due to force replacement by consolidate_disks_on_create #1328

Open jpbuecken opened 2 months ago

jpbuecken commented 2 months ago

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.

Terraform v1.9.6
on linux_amd64
+ provider registry.terraform.io/vmware/vcd v3.11.0

Affected Resource(s)

  * vcd_vm
  * vcd_vapp_vm

Terraform Configuration Files

terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.11.0"
    }
  }
  required_version = "1.9.6"
}

resource "vcd_vm" "instance" {
  name             = "upgrade-test"
  computer_name    = "upgrade-test"
  vapp_template_id = "urn:vcloud:vapptemplate:d9d38664-be73-4f4c-8da6-42ef78472fb9"
  cpus             = 2
  cpu_cores        = 1
  memory           = 2084

  network {
    type               = "org"
    name               = "xxxxxxx"
    adapter_type       = "vmxnet3"
    ip_allocation_mode = "POOL"
    is_primary         = true
  }
}

Expected Behavior

A provider update from 3.11.0 to 3.12.1 should not force replacement of vcd_vm or vcd_vapp_vm.

Actual Behavior

After updating the provider, changing a value of the vcd_vm (e.g. increasing the number of cpus) forces a replacement of the VM.

  # vcd_vm.instance must be replaced
-/+ resource "vcd_vm" "instance" {
      + consolidate_disks_on_create    = false # forces replacement

Steps to Reproduce


  1. Start with above HCL. Put the code above in a file main.tf
  2. terraform init
  3. terraform apply
  4. Bump provider version in main.tf, e.g. sed -i s/3.11.0/3.12.1/g main.tf
  5. terraform init -upgrade
  6. Check: terraform -v
Terraform v1.9.6
on linux_amd64
+ provider registry.terraform.io/vmware/vcd v3.12.1
  7. Increase cpus, e.g. sed -i s/"cpus = 2"/"cpus = 3"/g main.tf
  8. terraform apply. You will see the replacement:
    
    vcd_vm.instance: Refreshing state... [id=urn:vcloud:vm:b1762165-5d94-4b26-9536-5bb0fa7da624]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

vcd_vm.instance must be replaced

-/+ resource "vcd_vm" "instance" {

Important Factoids

VCD Version 10.6.0.1

carmine73 commented 2 months ago

Hi @jpbuecken, as a workaround you can use:

resource "vcd_vm" "instance" {
  name             = "upgrade-test"
  computer_name    = "upgrade-test"
  vapp_template_id = "urn:vcloud:vapptemplate:d9d38664-be73-4f4c-8da6-42ef78472fb9"
  cpus             = 2
  cpu_cores        = 1
  memory           = 2084

  network {
    type               = "org"
    name               = "xxxxxxx"
    adapter_type       = "vmxnet3"
    ip_allocation_mode = "POOL"
    is_primary         = true
  }

  lifecycle {
    ignore_changes = [
      # if you don't want VMs to be updated when consolidate_disks_on_create changes (parameter added in 3.12.0)
      consolidate_disks_on_create
    ]
  }
}
karakayamustafa commented 2 weeks ago

We experienced a similar issue after v3.12.0 was released in March. To avoid unwanted destroys, we limited the vcd provider version as follows:

    # old
    vcd = {
      source  = "vmware/vcd"
      version = ">= 3.9.0, < 4.0.0"
    }

    # new
    vcd = {
      source  = "vmware/vcd"
      version = ">= 3.9.0, < 3.12.0"
    }

But now our infrastructure team needs to upgrade the vCloud Director version to 10.6, and the current vcd provider version (v3.11.0) is incompatible with 10.6. So we need to revert this version limit.

However, we are hesitant to risk unwanted destroys. Besides using ignore_changes as a workaround, as suggested by @carmine73, are there other ways to prevent this? Is it expected that upgrades affect VMs previously provisioned with older versions?

You can see the output of related plan:


# module.cluster-01.module.couchbase-data.module.servers[3].vcd_vapp_vm.server must be replaced
-/+ resource "vcd_vapp_vm" "server" {
   ~ computer_name                  = "couchbase-72235-base-531-001" -> (known after apply)
   + consolidate_disks_on_create    = false # forces replacement
   + cpu_limit                      = (known after apply)
   + cpu_priority                   = (known after apply)
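As a general guardrail against surprise replacements, a plan can be checked mechanically before applying. A minimal sketch (assuming the plan was exported with `terraform plan -out=tf.plan` followed by `terraform show -json tf.plan > plan.json`; the script itself is illustrative, not part of the thread):

```python
import json
import sys

def forced_replacements(plan: dict) -> list:
    """Return addresses of resources the plan would destroy and recreate."""
    doomed = []
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        # In Terraform's JSON plan format, a replacement appears as
        # both a "delete" and a "create" action on the same resource.
        if "delete" in actions and "create" in actions:
            doomed.append(rc["address"])
    return doomed

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        plan = json.load(f)
    doomed = forced_replacements(plan)
    if doomed:
        print("Plan would replace:", ", ".join(doomed))
        sys.exit(1)
```

Wiring this into CI (exit code 1 on any replacement) turns the "big accident" scenario into a failed pipeline instead of destroyed VMs.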
jpbuecken commented 2 weeks ago

> hi @jpbuecken, as workaround you can use:
>
>     lifecycle {
>       ignore_changes = [
>         # if you don't want VMs to be updated when consolidate_disks_on_create changes (parameter added in 3.12.0)
>         consolidate_disks_on_create
>       ]
>     }

Hello, it seems I forgot to respond to the suggested workaround. Like the previous commenter, we use a module.

So this issue affects all current VMs. As you may know, modules do not have a "prevent_destroy" input yet due to Terraform limitations, so a serious accident can happen where one or more VMs get destroyed.
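For reference, this is what the resource-level guard looks like. Terraform requires lifecycle arguments to be literal values, which is exactly why a wrapping module cannot expose it as an input variable (sketch based on the resource from this thread):

```hcl
resource "vcd_vm" "instance" {
  # ... arguments as above ...

  lifecycle {
    # Must be a literal: Terraform does not allow variables or
    # expressions here, so a module cannot parameterize this flag.
    prevent_destroy = true
  }
}
```

With this set, terraform plan errors out instead of scheduling the replacement, but it has to be edited into the module source itself.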

We implemented the mentioned workaround. Since it lives in a module, it affects all current and future VMs, so we cannot simply change the value when needed. In this case that is acceptable, since the setting is irrelevant once the VM exists anyway.

But AFAIK there are methods by which a provider can handle new values silently, without such hassle. For example, the Azure provider introduced a disk_controller_type field, and it does not affect a terraform plan; no changes are shown.

It would be great if future vcd provider updates could be smooth as well.

Didainius commented 2 weeks ago

Thank you for sharing, @carmine73 and @jpbuecken. This is noted, and we will try to take it into consideration, @karakayamustafa.