Closed: lenaxia closed this issue 5 months ago
Having the same issue, found this issue after some googling.
```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "2.9.11"
    }
  }
}

provider "proxmox" {
  pm_api_url  = "https://mediaserver:8006/api2/json"
  pm_user     = "terraform-prov@pve"
  pm_password = "MySuperSecretPassword123!"
}

# Media server VM
resource "proxmox_vm_qemu" "mediaserver-tf" {
  # The name of the VM
  name = "hogsmeade-tf"

  # Node to deploy the VM on
  target_node = "hogsmeade"

  # Template name to clone this VM from
  clone      = "fedora-template"
  full_clone = true

  # VM boot policy
  oncreate = true
  onboot   = true
}
```
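As a side note, the provider block above hardcodes the API password. A common alternative is a sensitive input variable; this is only a sketch, and the variable name is illustrative:

```hcl
variable "pm_password" {
  description = "Password for the Proxmox API user"
  type        = string
  sensitive   = true
}

provider "proxmox" {
  pm_api_url  = "https://mediaserver:8006/api2/json"
  pm_user     = "terraform-prov@pve"
  pm_password = var.pm_password
}
```

The value can then be supplied via the `TF_VAR_pm_password` environment variable or a `.tfvars` file kept out of version control.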
This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.
Still relevant.
I'm running into this as well. Not sure if this is specific to cloning, but I'm experiencing it when cloning VMs.
Very minimal example:
```hcl
resource "proxmox_vm_qemu" "docker1" {
  provider    = proxmox.hera
  name        = "docker1"
  target_node = "hera"
  clone       = "debian-11.7-2023-05-13-15-00-28"
  memory      = 512
  agent       = 1
  cicustom    = "user=local:snippets/user_data_docker_vm.yml"

  lifecycle {
    create_before_destroy = true
  }
}
```
And the diff (when I've changed nothing):
```
# proxmox_vm_qemu.docker1 will be updated in-place
~ resource "proxmox_vm_qemu" "docker1" {
    - agent   = 1 -> null
    - desc    = "Debian 11.7 base template. Generated at 2023-05-13T15:00:28Z" -> null
      id      = "hera/qemu/101"
      name    = "docker1"
    - qemu_os = "l26" -> null
      # (29 unchanged attributes hidden)

    - network {
        - bridge    = "vmbr0" -> null
        - firewall  = false -> null
        - link_down = false -> null
        - macaddr   = "AE:08:FC:F6:9E:52" -> null
        - model     = "virtio" -> null
        - mtu       = 0 -> null
        - queues    = 0 -> null
        - rate      = 0 -> null
        - tag       = -1 -> null
      }
  }
```
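For diffs like this, where the provider keeps wanting to null out attributes nobody changed, a `lifecycle` `ignore_changes` block is a common workaround. This is a sketch, not a fix for the underlying provider bug; the attribute names are taken from the diff above, so adjust them to whatever drifts in your own plan:

```hcl
resource "proxmox_vm_qemu" "docker1" {
  # ... existing arguments unchanged ...

  lifecycle {
    create_before_destroy = true

    # Suppress the spurious "-> null" drift reported by the provider.
    ignore_changes = [
      agent,
      desc,
      qemu_os,
      network,
    ]
  }
}
```

Note that ignoring the whole `network` block also suppresses legitimate changes to it, so this is a trade-off rather than a fix.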
I'm also experiencing the same issue.
To reproduce: full_clone a VM using the Proxmox Terraform provider with cicustom:
```hcl
resource "proxmox_vm_qemu" "cloudinit-test" {
  name        = "vm4"
  desc        = "Test machine"
  target_node = "pm6"
  full_clone  = true
  clone       = "ubuntu-jammy-template"
  onboot      = true
  agent       = 1
  pool        = "vms"
  os_type     = "cloud-init"
  cores       = 4
  sockets     = 1
  cpu         = "host"
  memory      = 2048
  scsihw      = "virtio-scsi-pci"

  disk {
    type    = "scsi"
    storage = "local-lvm"
    size    = "32G"
  }

  network {
    model  = "virtio"
    bridge = "vmbr0"
    tag    = 100
  }

  cicustom = "user=local:snippets/user_data.yml,network=local:snippets/net100.yml"
}
```
Run `terraform apply` again without changing anything and it will show the following diff:

```
# proxmox_vm_qemu.cloudinit-test-4 will be updated in-place
~ resource "proxmox_vm_qemu" "cloudinit-test" {
    - ciuser    = "ubuntu" -> null
      id        = "pm6/qemu/100"
    - ipconfig0 = "ip=dhcp" -> null
      name      = "vm4"
    - qemu_os   = "other" -> null
      # (37 unchanged attributes hidden)

      # (2 unchanged blocks hidden)
  }
```
@chrisbenincasa Adding `ignore_changes` helps in this case:

```hcl
lifecycle {
  ignore_changes = [
    ipconfig0, qemu_os, ciuser
  ]
}
```
I've completely migrated away from proxmox because this has rendered proxmox entirely unusable for me.
Hi @lenaxia
Out of curiosity, what does your setup look like now? ESXi? vCenter?
Thanks
I had this same issue. I used the plan to add a bunch of stuff to my resource. Eventually the only thing that wanted to change was `qemu_os`. Even if I set it to the value it was trying to change it to, it would still report a change, and it would fail because of permissions. Adding the lifecycle `ignore_changes` for `qemu_os` fixed that. Why is `qemu_os` always trying to change, and what permission could it need that Administrator doesn't grant?
Still relevant
Still relevant.
> Hi @lenaxia
> Out of curiosity, what does your setup look like now? ESXi? vCenter?
> Thanks
Sorry, I thought I had responded. I'm running bare metal now. I was using Proxmox for my k3s cluster and just moved it to bare metal. I now have a single Proxmox node that I run for non-essential VMs that I create manually. I no longer use Terraform for any Proxmox operations.
This issue was closed because it has been inactive for 5 days since being marked as stale.
Still relevant
I was previously on 2.7.4 and didn't see this issue, but since moving to v2.9.11 I've seen it, and going back to 2.7.4 now shows the issue too.

This is the config for one of my VMs:

Off of a fresh creation, if I just run `terraform plan`, and if I run a `terraform apply --refresh-only`, I immediately get this:

The things that jump out at me are that these properties are not set by me in my main.tf, and yet they are forcing a replacement:

I've tried setting `disk_gb` to `disk_gb = null` in main.tf, but it doesn't change anything, and trying to set it to `0` conflicts with `disk.size`, so that's not possible. And I don't think setting `mtu = 0` permanently is the right move.

I've also tried setting the `lifecycle` block to ignore these changes, and it doesn't have an effect either; it still wants to replace.

Running `terraform apply --refresh-only` multiple times will report `No changes. Your infrastructure still matches the configuration.`, but running `plan` immediately tries to replace again.

Looking for help so I can make in-place changes to my VMs. It's not reasonable for me to recreate every time.