bpg / terraform-provider-proxmox

Terraform Provider for Proxmox
https://registry.terraform.io/providers/bpg/proxmox
Mozilla Public License 2.0

Idempotence issues in "proxmox_virtual_environment_vm" after cloning from another VM #1299

Open hamannju opened 6 months ago

hamannju commented 6 months ago

Hello. I have another idempotency issue. I create virtual machines by cloning another VM so that I can preconfigure a lot of common settings. This mostly works, but if I run terraform plan after applying my configuration, it proposes to delete essentially all of the configuration that is represented as blocks. Interestingly, this does not apply to disks.

This is the resource definition:

resource "proxmox_virtual_environment_vm" "worker_node" {
  count     = var.worker_node_count
  name      = "k8s-${var.environment_name}-worker-${count.index + 1}"
  pool_id   = var.pool_name
  node_name = var.pve_node_name
  cpu {
    cores = var.vm_config["worker_node"].cores
    type  = var.vm_config["worker_node"].cpu_type
  }
  clone {
    vm_id = var.vm_config["worker_node"].template_id
  }
  memory {
    dedicated = var.vm_config["worker_node"].memory
  }
  stop_on_destroy = true
  depends_on = [proxmox_virtual_environment_pool.k8s_pool]
}

And this is the corresponding output from terraform plan:

module.k8s_dev.proxmox_virtual_environment_vm.worker_node[1] will be updated in-place
  ~ resource "proxmox_virtual_environment_vm" "worker_node" {
        id                      = "128"
      ~ ipv4_addresses          = [
          - [
              - "127.0.0.1",
            ],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [
              - "10.42.101.226",
            ],
          - [],
        ] -> (known after apply)
      ~ ipv6_addresses          = [
          - [
              - "::1",
            ],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [
              - "fe80::be24:11ff:fed2:ebc6",
            ],
          - [
              - "fe80::be24:11ff:fe9b:e1eb",
            ],
        ] -> (known after apply)
        name                    = "k8s-dev-worker-2"
      ~ network_interface_names = [
          - "lo",
          - "bond0",
          - "dummy0",
          - "teql0",
          - "tunl0",
          - "sit0",
          - "ip6tnl0",
          - "enxbc2411d2ebc6",
          - "enxbc24119be1eb",
        ] -> (known after apply)
        # (24 unchanged attributes hidden)

      - network_device {
          - bridge       = "vmbr3" -> null
          - disconnected = false -> null
          - enabled      = true -> null
          - firewall     = true -> null
          - mac_address  = "BC:24:11:D2:EB:C6" -> null
          - model        = "virtio" -> null
          - mtu          = 0 -> null
          - queues       = 0 -> null
          - rate_limit   = 0 -> null
          - vlan_id      = 0 -> null
            # (1 unchanged attribute hidden)
        }
      - network_device {
          - bridge       = "vmbr5" -> null
          - disconnected = false -> null
          - enabled      = true -> null
          - firewall     = true -> null
          - mac_address  = "BC:24:11:9B:E1:EB" -> null
          - model        = "virtio" -> null
          - mtu          = 1 -> null
          - queues       = 0 -> null
          - rate_limit   = 0 -> null
          - vlan_id      = 0 -> null
            # (1 unchanged attribute hidden)
        }

      - vga {
          - enabled = true -> null
          - memory  = 0 -> null
          - type    = "qxl" -> null
        }

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 5 to change, 0 to destroy.

I assume this falls under the umbrella of the clone-VM issues you seem to be addressing elsewhere. I just found it interesting that Terraform does not try to delete the disks while proposing to delete essentially all of the networking and graphics configuration.
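
One possible stop-gap until the provider handles cloned settings natively (a sketch only, not something confirmed in this thread, and assuming the settings inherited from the clone should simply stay as the clone created them) is to tell Terraform to ignore drift on those blocks:

resource "proxmox_virtual_environment_vm" "worker_node" {
  # ... same arguments as in the resource definition above ...

  # Assumption: the blocks inherited from the clone should be left untouched,
  # so drift on them is ignored instead of being planned for removal.
  lifecycle {
    ignore_changes = [
      network_device,
      vga,
    ]
  }
}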

bpg commented 6 months ago

Indeed, this is the same common problem with clone. The VM's resources are copied and stored in the provider state, but are missing from the plan. So, on the next apply TF tries to reconcile the plan with the state, sees that the state has lots of "extra" bits, and then tries to remove them. #1231 will solve this, but it's still quite far from completion.
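
For illustration only (not an official fix; the values below are simply taken from the plan output earlier in this issue), mirroring the cloned blocks inside the resource makes the configuration match what the clone produced, so the diff disappears:

  # Hypothetical: these blocks mirror what the clone created, per the plan diff above,
  # and would be added inside the "worker_node" resource.
  network_device {
    bridge   = "vmbr3"
    firewall = true
    model    = "virtio"
  }

  network_device {
    bridge   = "vmbr5"
    firewall = true
    model    = "virtio"
  }

  vga {
    type = "qxl"
  }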

hamannju commented 6 months ago

I contributed some coffees to keep you going.

bpg commented 6 months ago

Thanks a lot!❤️

bpg-autobot[bot] commented 1 week ago

Marking this issue as stale due to inactivity in the past 180 days. This helps us focus on the active issues. If this issue is reproducible with the latest version of the provider, please comment. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!