Open · jpbuecken opened this issue 2 months ago
hi @jpbuecken, as a workaround you can use:

resource "vcd_vm" "instance" {
  name             = "upgrade-test"
  computer_name    = "upgrade-test"
  vapp_template_id = "urn:vcloud:vapptemplate:d9d38664-be73-4f4c-8da6-42ef78472fb9"
  cpus             = 2
  cpu_cores        = 1
  memory           = 2084

  network {
    type               = "org"
    name               = "xxxxxxx"
    adapter_type       = "vmxnet3"
    ip_allocation_mode = "POOL"
    is_primary         = true
  }

  lifecycle {
    ignore_changes = [
      # if you don't want VMs to be updated when
      # consolidate_disks_on_create changes (parameter added in 3.12.0)
      consolidate_disks_on_create,
    ]
  }
}
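After adding the lifecycle block, a plan against the existing state should come back clean. Roughly (output abbreviated):

terraform plan
...
No changes. Your infrastructure matches the configuration.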
We experienced a similar issue after v3.12.0 was released in March. To avoid unwanted destroys, we pinned the vcd provider version as follows:
# old
vcd = {
  source  = "vmware/vcd"
  version = ">= 3.9.0, < 4.0.0"
}

# new
vcd = {
  source  = "vmware/vcd"
  version = ">= 3.9.0, < 3.12.0"
}
But now our infrastructure team needs to upgrade VMware Cloud Director to 10.6, and our current provider version (v3.11.0) is incompatible with 10.6, so we need to lift this version cap. However, we are wary of triggering unwanted destroys. Besides the ignore_changes workaround suggested by @carmine73, is there any other way to prevent this? Is it expected that provider upgrades affect VMs previously provisioned with older versions?
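The only other built-in guard we found is prevent_destroy, which makes the plan fail instead of silently replacing the VM; a minimal sketch (note it also blocks intentional replacements, so it is a tripwire rather than a fix):

resource "vcd_vapp_vm" "server" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Any plan that would destroy this VM (including the forced
    # replacement after the provider upgrade) aborts with an error
    # instead of being applied.
    prevent_destroy = true
  }
}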
You can see the output of the related plan:
  # module.cluster-01.module.couchbase-data.module.servers[3].vcd_vapp_vm.server must be replaced
-/+ resource "vcd_vapp_vm" "server" {
      ~ computer_name               = "couchbase-72235-base-531-001" -> (known after apply)
      + consolidate_disks_on_create = false # forces replacement
      + cpu_limit                   = (known after apply)
      + cpu_priority                = (known after apply)
> hi @jpbuecken, as a workaround you can use: ... lifecycle { ignore_changes = [ consolidate_disks_on_create ] } ...
Hello, it seems I forgot to respond to the suggested workaround. Like the previous commenter, we use a module, so this issue affects all of our current VMs. As you may know, modules cannot expose prevent_destroy yet due to Terraform limitations (lifecycle meta-arguments do not accept variables), so a big accident can happen and one or more VMs get destroyed.

We implemented the suggested workaround. Since it lives in a module, it affects all current and future VMs, so we cannot simply change the value per VM if needed. In this case that is acceptable, since the parameter is irrelevant once the VM exists anyway.
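For illustration, this is roughly how it looks inside our module (a sketch; the file layout and variable names are made up):

# modules/vm/main.tf (hypothetical layout)
resource "vcd_vapp_vm" "server" {
  name          = var.name
  computer_name = var.name
  cpus          = var.cpus
  memory        = var.memory

  lifecycle {
    # Baked into the module, so it applies to every VM created from it,
    # present and future; callers cannot override lifecycle settings
    # from outside the module.
    ignore_changes = [consolidate_disks_on_create]
  }
}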
But as far as I know, providers can handle newly introduced attributes silently, without this hassle. For example, the azure provider introduced a disk_controller_type field, and it does not affect a terraform plan; no changes are shown for existing VMs. It would be great if future vcd provider updates were equally smooth.
Thank you for sharing, @carmine73 and @jpbuecken. This is noted and we will try to take it into consideration, @karakayamustafa.
Terraform Version
Affected Resource(s)

vcd_vm
vcd_vapp_vm
Terraform Configuration Files
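A minimal configuration that reproduces the issue (a sketch assembled from the workaround snippet above; the template URN and network name are placeholders):

terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.11.0" # switched to 3.12.1 in the steps below
    }
  }
}

resource "vcd_vm" "instance" {
  name             = "upgrade-test"
  computer_name    = "upgrade-test"
  vapp_template_id = "urn:vcloud:vapptemplate:d9d38664-be73-4f4c-8da6-42ef78472fb9"
  cpus             = 2
  cpu_cores        = 1
  memory           = 2084

  network {
    type               = "org"
    name               = "xxxxxxx"
    adapter_type       = "vmxnet3"
    ip_allocation_mode = "POOL"
    is_primary         = true
  }
}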
Expected Behavior
A provider update from 3.11.0 to 3.12.1 should not force replacement of vcd_vm or vcd_vapp_vm resources.
Actual Behavior
After updating the provider, changing any value of a vcd_vm (e.g. increasing the number of cpus) forces a replacement of the VM.
Steps to Reproduce
1. terraform init
2. terraform apply
3. sed -i s/3.11.0/3.12.1/g main.tf
4. terraform init -upgrade
5. terraform -v
6. sed -i s/"cpus = 2"/"cpus = 3"/g main.tf
7. terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # vcd_vm.instance must be replaced
-/+ resource "vcd_vm" "instance" {
Important Factoids
VCD Version: 10.6.0.1