Telmate / terraform-provider-proxmox

Terraform provider plugin for Proxmox
MIT License

Breaking BUG 2.9.11 "VM already running" #460

Open Syntax3rror404 opened 2 years ago

Syntax3rror404 commented 2 years ago

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

Enter a value: yes

module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Creating...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Creating...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Creating...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1]: Creating...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1]: Still creating... [10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1]: Still creating... [20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1]: Still creating... [30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [40s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [40s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1]: Still creating... [40s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [40s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [50s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [50s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1]: Still creating... [50s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [50s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [1m0s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [1m0s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [1m0s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1]: Still creating... [1m0s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [1m10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [1m10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [1m10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [1m20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [1m20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [1m20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [1m30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [1m30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [1m30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Provisioning with 'remote-exec'...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Host: 10.31.103.238
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Connected!
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [1m40s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [1m40s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [1m40s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Provisioning with 'remote-exec'...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Host: 10.31.103.240
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): telmate-test2
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Shutdown scheduled for Wed 2021-12-15 12:24:10 UTC, use 'shutdown -c' to cancel.
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Provisioning with 'remote-exec'...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Host: 10.31.103.241
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Provisioning with 'remote-exec'...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Host: 10.31.103.238
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connected!
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Connected!
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): telmate-test2-4
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Shutdown scheduled for Wed 2021-12-15 12:24:17 UTC, use 'shutdown -c' to cancel.
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [1m50s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [1m50s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [1m50s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Provisioning with 'remote-exec'...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Host: 10.31.103.240
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Host: 10.31.103.240
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Host: 10.31.103.240
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): telmate-test2-3
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Shutdown scheduled for Wed 2021-12-15 12:24:20 UTC, use 'shutdown -c' to cancel.
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Provisioning with 'remote-exec'...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Host: 10.31.103.241
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Host: 10.31.103.241
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Host: 10.31.103.238
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [2m0s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [2m0s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [2m0s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [2m10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [2m10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [2m10s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Host: 10.31.103.241
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Host: 10.31.103.240
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Host: 10.31.103.238
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): Connected!
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [2m20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Still creating... [2m20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [2m20s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0] (remote-exec): wait for reboot
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[0]: Creation complete after 2m21s [id=sthings-pve1/qemu/132]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Still creating... [2m30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Still creating... [2m30s elapsed]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Host: 10.31.103.241
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): Connected!
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2] (remote-exec): wait for reboot
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[2]: Creation complete after 2m34s [id=sthings-pve1/qemu/135]
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connecting to remote host via SSH...
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Host: 10.31.103.240
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): User: awx
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Password: true
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Private key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Certificate: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): SSH Agent: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Checking Host Key: false
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Target Platform: unix
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): Connected!
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3] (remote-exec): wait for reboot
module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3]: Creation complete after 2m38s [id=sthings-pve1/qemu/136]

╷
│ Warning: Argument is deprecated
│
│   with module.telmate_test2.proxmox_vm_qemu.proxmox_vm[3],
│   on .terraform/modules/telmate_test2/vm.tf line 8, in resource "proxmox_vm_qemu" "proxmox_vm":
│    8:   clone_wait = 45
│
│ do not use anymore
│
│ (and 3 more similar warnings elsewhere)
╵
╷
│ Error: VM 134 already running
│
│   with module.telmate_test2.proxmox_vm_qemu.proxmox_vm[1],
│   on .terraform/modules/telmate_test2/vm.tf line 1, in resource "proxmox_vm_qemu" "proxmox_vm":
│    1: resource "proxmox_vm_qemu" "proxmox_vm" {
│
╵

Syntax3rror404 commented 2 years ago

2.8.0 is the latest version that works properly.

MarkLFT commented 2 years ago

I receive this randomly as well. My config creates 3 Ubuntu VMs using cloud-init. The VM that reports this is different each time; I can see no pattern to it.

racciari commented 2 years ago

Same here, using Proxmox 7.2, Terraform v1.3.2, and the proxmox plugin v2.9.11.

simonoff commented 1 year ago

You need to add Sys.Audit to the user role. Also, setting agent = 1 works for me, since my image ships with the QEMU guest agent.
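
For reference, granting that privilege on the Proxmox host looks something like the sketch below; TerraformProv and terraform@pve are example names, and the privilege list is only a partial example of what a Terraform user typically needs:

pveum role add TerraformProv -privs "Sys.Audit VM.Allocate VM.Clone VM.Audit VM.Config.Disk VM.Config.Network VM.PowerMgmt Datastore.AllocateSpace Datastore.Audit"
pveum aclmod / -user terraform@pve -role TerraformProv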

Syntax3rror404 commented 1 year ago

I only have this problem when I create more than 3 machines at the same time. In my example log there are 4 machines provisioned by Terraform.

breisig commented 1 year ago

I am getting the SAME error (example: Error: VM 102 already running) when creating 5 VMs at the same time. This is happening on both Proxmox 7.2 and on the new Proxmox 7.3. This is a pretty big issue.

Terraform version:

Terraform v1.3.5
on linux_amd64
+ provider registry.terraform.io/telmate/proxmox v2.9.11

Update 1: I noticed I had iothread set to greater than 0:

 disk {
    slot = 0
    # set disk size here. leave it small for testing because expanding the disk takes time.
    size = "75G"
    type = "scsi"
    storage = "local-lvm"
    iothread = 1
  }

I was watching the Proxmox web UI console while Terraform was provisioning, and noticed a warning message saying the iothread > 0 setting would be ignored; right after that warning appeared, Terraform reported the "VM already running" error.

When I switched iothread back to 0 and ran destroy + apply:

 disk {
    slot = 0
    # set disk size here. leave it small for testing because expanding the disk takes time.
    size = "75G"
    type = "scsi"
    storage = "local-lvm"
    iothread = 0
  }

Terraform runs without any issue. Maybe the plugin doesn't expect to see and ignore a warning message when configuring or starting up the VMs?

Syntax3rror404 commented 1 year ago

This is the exact same issue I have. I don't understand why this bug has existed across so many versions; it effectively renders this entire provider unusable. If you build automation, for example with Ansible, and Terraform builds the machines but a few of them are left hanging, that's pretty bad.

Please please please fix this issue.

@mleone87 Do you have any idea what could be causing this? There are several people with the exact same problem, which makes this a pretty serious bug.

breisig commented 1 year ago

@Syntax3rror404 When running terraform destroy + apply, once cloning is done and between the plugin configuring and starting the VMs, can you check the Proxmox web UI log console to see if any warning messages show up?

simonoff commented 1 year ago

I think it's caused by the iothread setting. After I removed it, it could provision all 6 nodes at once.

breisig commented 1 year ago

I think the warning message that shows up while Terraform is configuring and starting the VM keeps the plugin from seeing the 'true' status, and it doesn't know what to do with that warning. If any warning message shows up in the Proxmox global console log during those stages, the plugin doesn't know how to handle it. [Needs some extra checks in the code]

altmeista commented 1 year ago

Are there any updates on this error message, or do you know a workaround? I'm getting this too and my GitLab pipeline is always failing because of it. But I need the VM to start automatically when it is created. iothread is set to the default (0), and if I enable agent I get another error message. Here is an extract from my Proxmox log, if this helps:

Nov 25 08:42:52 tf-proxmox pvedaemon[168475]: <root@pam!terraform> end task UPID:tf-proxmox:0008366C:01665026:638071CE:qmclone:1003:root@pam!terraform: OK
Nov 25 08:43:28 tf-proxmox pvedaemon[383213]: <root@pam!terraform> update VM 302: -agent 0 -bios ovmf -boot order=scsi0;ide2;net0;ide0;sata0 -cores 4 -cpu host -delete balloon, vcpus, vga -description srv-win2012r2, generated by terraform at 2022-11-25 07:41:38 -hotplug network,disk,usb -kvm 1 -memory 8192 -name srv-win2012r2 -net0 virtio=36:61:8D:5B:BA:E9,bridge=vmbr0 -numa 0 -onboot 0 -scsi0 local-lvm:vm-302-disk-0,size=40G,format=raw -scsihw virtio-scsi-pci -sockets 1 -tablet 1
Nov 25 08:43:28 tf-proxmox pvedaemon[383213]: <root@pam!terraform> starting task UPID:tf-proxmox:0008376B:01667089:63807220:qmconfig:302:root@pam!terraform:
Nov 25 08:43:28 tf-proxmox pvedaemon[538475]: cannot delete 'balloon' - not set in current configuration!
Nov 25 08:43:28 tf-proxmox pvedaemon[538475]: cannot delete 'vcpus' - not set in current configuration!
Nov 25 08:43:28 tf-proxmox pvedaemon[538475]: cannot delete 'vga' - not set in current configuration!
Nov 25 08:43:29 tf-proxmox pvedaemon[383213]: <root@pam!terraform> end task UPID:tf-proxmox:0008376B:01667089:63807220:qmconfig:302:root@pam!terraform: OK
Nov 25 08:43:37 tf-proxmox pvedaemon[168475]: <root@pam!terraform> starting task UPID:tf-proxmox:00083788:016673E5:63807229:qmstart:302:root@pam!terraform:
Nov 25 08:43:37 tf-proxmox pvedaemon[538504]: start VM 302: UPID:tf-proxmox:00083788:016673E5:63807229:qmstart:302:root@pam!terraform:
Nov 25 08:43:37 tf-proxmox systemd[1]: Started 302.scope.
Nov 25 08:43:37 tf-proxmox systemd-udevd[538524]: Using default interface naming scheme 'v247'.
Nov 25 08:43:37 tf-proxmox systemd-udevd[538524]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 25 08:43:38 tf-proxmox kernel: device tap302i0 entered promiscuous mode
Nov 25 08:43:38 tf-proxmox kernel: vmbr0: port 4(tap302i0) entered blocking state
Nov 25 08:43:38 tf-proxmox kernel: vmbr0: port 4(tap302i0) entered disabled state
Nov 25 08:43:38 tf-proxmox kernel: vmbr0: port 4(tap302i0) entered blocking state
Nov 25 08:43:38 tf-proxmox kernel: vmbr0: port 4(tap302i0) entered forwarding state
Nov 25 08:43:38 tf-proxmox pvedaemon[168475]: <root@pam!terraform> end task UPID:tf-proxmox:00083788:016673E5:63807229:qmstart:302:root@pam!terraform: WARNINGS: 1
Nov 25 08:43:41 tf-proxmox pvedaemon[538550]: start VM 302: UPID:tf-proxmox:000837B6:0166757B:6380722D:qmstart:302:root@pam!terraform:
Nov 25 08:43:41 tf-proxmox pvedaemon[382553]: <root@pam!terraform> starting task UPID:tf-proxmox:000837B6:0166757B:6380722D:qmstart:302:root@pam!terraform:
Nov 25 08:43:41 tf-proxmox pvedaemon[538550]: VM 302 already running
Nov 25 08:43:41 tf-proxmox pvedaemon[382553]: <root@pam!terraform> end task UPID:tf-proxmox:000837B6:0166757B:6380722D:qmstart:302:root@pam!terraform: VM 302 already running
Nov 25 08:43:43 tf-proxmox pvedaemon[383213]: <root@pam!terraform> starting task UPID:tf-proxmox:000837CA:01667648:6380722F:qmstart:302:root@pam!terraform:
Nov 25 08:43:43 tf-proxmox pvedaemon[538570]: start VM 302: UPID:tf-proxmox:000837CA:01667648:6380722F:qmstart:302:root@pam!terraform:
Nov 25 08:43:43 tf-proxmox pvedaemon[538570]: VM 302 already running
Nov 25 08:43:43 tf-proxmox pvedaemon[383213]: <root@pam!terraform> end task UPID:tf-proxmox:000837CA:01667648:6380722F:qmstart:302:root@pam!terraform: VM 302 already running

Proxmox version: 7.3-3. Provider version: 2.9.11.

Edit: I found a solution. For me it was a missing efidisk location. I use Packer to automatically create templates, which I then use to deploy VMs with Terraform. I had forgotten to set the efidisk option to define a location for the EFI disk of my machines, which is necessary for the OVMF BIOS. For whatever reason, without this efidisk, Proxmox tried to start the VM twice, which generates the error "VM already running". Hope this helps anyone.

mjbright commented 1 year ago

Thanks for this, I'm getting better results having changed back to 2.8.0

sebdanielsson commented 1 year ago

Is there a way to define an efidisk with this module?

In the CLI I usually do this:

qm set 1000 -efidisk0 local-lvm:0,format=raw,efitype=4m,pre-enrolled-keys=1

But I can't find a way to do this with this module.
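
(For later readers: the 2.9.x releases of this provider have no efidisk support, but the 3.x releases add an efidisk block. A sketch of that newer syntax, assuming an OVMF template and example storage names, per the 3.0.x docs:)

bios = "ovmf"

efidisk {
  efitype = "4m"
  storage = "local-lvm"
}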

goffinf commented 1 year ago

Proxmox: 7.3-4, Terraform core: 1.3.4, Terraform Proxmox provider: Telmate/proxmox 2.9.11

This just happened to me also (Error: VMxxx is already running) when creating multiple VMs. All VMs appeared to be created successfully; however, when running the same pipeline again, Terraform marked all the VMs as tainted and therefore wanted to replace them ... not much use from an IaC perspective.

I had iothread set to 1 for my single SCSI disk, and, when changing that back to 0 the errors disappeared (as mentioned above).

terraform.tfvars

disk_info = {
    discard = "on"
    iothread = 1
    size = "20G"
    slot = 0
    ssd = 0
    storage = "local-lvm"
    type = "scsi"
}

Changing to this did the trick:

disk_info = {
    discard = "on"
    iothread = 0
    size = "20G"
    slot = 0
    ssd = 0
    storage = "local-lvm"
    type = "scsi"
}

HtHs

Fraser.

Syntax3rror404 commented 1 year ago

@mleone87 As you can see, many of us have this exact same issue. Can you please take a look at it? A fix would be awesome.

sebdanielsson commented 1 year ago

Sorry for off-topic but could you show how do add an EFI-Disk in Terraform?

altmeista commented 1 year ago

I don't know if it is possible in Terraform, but if you use templates for cloning, I think you have to make sure that the efidisk location is already set there.

michaelfranzl commented 1 year ago

I see the same issue when creating VMs in Proxmox 7.3-4, Terraform v1.3.7, telmate/proxmox v2.9.11.

Error: VM 100 already running

Same issue when creating more than 1 VM. Setting iothread = 0 did not help. Manual cloning using Proxmox UI does not exhibit this issue.

The reason seems to be a Linux kernel panic which I can see via the Console.

After the first start of the machine, there is no more panic and the VM works as expected.

piyoki commented 1 year ago

I am experiencing the same issue ATM. Please help. cc @mptm436

cr0t commented 1 year ago

I have the same VM ... already running issue with telmate/proxmox 2.9.3, Proxmox 7.3-6

If I set iothread = 1 I get the error during terraform apply; if I remove this line from the .tf file, Terraform finishes successfully.

piyoki commented 1 year ago

Any updates?

tuxthepenguin84 commented 1 year ago

In 2.9.14, changing to iothread = 0 fixes the VM already running issue, but now it gets stuck in "Still creating..."

ke1satsu commented 1 year ago

I also get the same bug on 2.9.14 with iothread = 1; setting iothread to 0 fixes the error.

andyfore commented 1 year ago

I had this issue when using the following version combinations:

Provider: telmate/proxmox v2.9.0
Terraform: v1.4.5
Controller OS: Ubuntu 22.04.2 LTS
Proxmox OS Release: Debian GNU/Linux 11 (bullseye)
Proxmox Kernel: 5.15.104-1-pve
PVE Release: 7.4-3

After making the following change in my Terraform disk stanza I was able to get a successful completion:

Before:

iothread=1

After:

iothread=0

ninjab3s commented 1 year ago

I was also running into this issue using the OVMF BIOS. Changing to SeaBIOS made it work again.

PhilipLutley commented 1 year ago

In case it helps anyone - check your disk type. I was getting the "VM already running" error and (thanks to this thread) took a closer look at my disk settings.

For some reason I'd set mine to "virtio", but I didn't realise the template I was building from was actually type "scsi":

disk {
    type = "scsi"
    storage = "local-lvm"
    size = "30G"
  }

Syntax3rror404 commented 1 year ago

There are some disadvantages if you do not use virtio. Using only the scsi type means that you have to emulate the HDDs instead of letting them access the hardware directly like under virtio.

So this is not the right solution.

Source: https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers

fuomag9 commented 1 year ago

Still happening, I had to disable iothread as well

frozenfoxx commented 1 year ago

Still happening here, as well.

switzer60 commented 1 year ago

If you don't want to change iothreads to 0, another option might be this: scsihw = "virtio-scsi-single"
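
For context: Proxmox's warning states that iothread is only valid with a virtio disk or the virtio-scsi-single controller, so pairing the two settings lets you keep iothread = 1. A sketch combining them; the storage and size values are placeholders:

scsihw = "virtio-scsi-single"

disk {
  slot     = 0
  type     = "scsi"
  storage  = "local-lvm"
  size     = "30G"
  iothread = 1
}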

TheFrisianClause commented 1 year ago

Still an issue here as well. When iothread is set to 1 I get the 'Error: VMID is already running' message, but when iothread is set to 0 I get a success message.

rdm0991 commented 1 year ago

I too had these errors; applying iothread = 0 and changing to virtio-scsi-single resolved them. However, I have another issue to get resolved: although the QEMU agent is enabled, it gets disabled after VM start-up, and "qemu agent not running" comes up if I reboot the VMs. Any idea what changes need to be added?
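
(On the agent question: agent = 1 in the provider only sets the Proxmox option that tells PVE to talk to the guest agent; the qemu-guest-agent service must also be installed and enabled inside the cloned image itself, e.g. baked into the template. A sketch of the provider side:)

# Tells Proxmox the guest runs the QEMU guest agent. The
# qemu-guest-agent package must still be installed and enabled
# inside the guest image, or Proxmox will report it as not running.
agent = 1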

sevenrussian commented 1 year ago

oncreate=false

frozenfoxx commented 1 year ago

In my case, with 2.9.14 this was solved by setting agent = 0, even though the agent is already installed and running in the image.

luispabon commented 1 year ago

Same here; changing the VM template's disk to iothread=0 works around the problem. It's not a real solution though: without this option, I/O-heavy workloads can deadlock any VM running on the host.

dR3b commented 1 year ago

Yep, iothread=0 works for me.

github-actions[bot] commented 1 year ago

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

sebdanielsson commented 1 year ago

/keepopen

andreilogi commented 1 year ago

Add:

scsihw = "virtio-scsi-single"

as iothreads do not work with virtio-scsi-pci.

loveablecabbage commented 1 year ago

Still an issue :(

Proxmox 7.4-17, Terraform v1.6.1, telmate/proxmox v2.9.14

terraform.log

main.tf.txt variables.tf.txt

johnprakashgithub commented 1 year ago

Still seeing this issue :( :(

Proxmox 8.0.4, Terraform v1.6.2, telmate/proxmox v2.9.14

tuxthepenguin84 commented 1 year ago

For those of you with issues, I've had really good luck with v2.8.0 for a while now. https://github.com/Telmate/terraform-provider-proxmox/releases/tag/v2.8.0
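
For anyone wanting to try that, pinning the provider to v2.8.0 is a small change in the required_providers block; a minimal sketch:

terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "= 2.8.0"
    }
  }
}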

johnprakashgithub commented 1 year ago

2.8.0 gives many more issues with unsupported arguments like oncreate, timeout, etc.

github-actions[bot] commented 10 months ago

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

hestiahacker commented 10 months ago

Can someone provide a complete, but minimal terraform file that will allow me to reproduce this issue?

Also, if there's any indication as to approximately how often this happens (e.g. 1 in 3 deployments vs 1 in 20 deployments) that would help me know if I just need to try redeploying the same thing more times or if I can't reproduce it.

With that info, I can attempt to reproduce the issue and then determine if the work being done for #887 will correct this issue.

Also, I see this issue was opened several years ago and still hasn't been resolved. I know exactly how frustrating that is! I just got more heavily involved with this project in the past few weeks and will do my best to get this fixed for everyone, ideally in the next release (2.9.15).

FYSA, there is also talk about potentially forking this repo to allow more people to be able to merge in fixes. See the comments on #884 if you want more info on that effort. Hopefully that ticket will be resolved in some way that allows development to continue and we can work through the backlog of issues like this one, be it here in this repo or in a fork of it.

In my opinion, if a VM can be created manually in Proxmox with iothread=1, then this provider should be able to do so as well. While a workaround may lower the priority, it doesn't mean that the issue should not be taken up. My goal is to get this provider to a point where everyone can be running the latest release and not have to be version locked on an old release.

luispabon commented 10 months ago

@hestiahacker It happens every time

Proxmox 8.0.x; the plugin is not currently compatible with 8.1.x.

resource "proxmox_vm_qemu" "yolo" {
  target_node = var.proxmox_host

  name = "yolo"

  vmid = "1337"

  clone   = "debian-12-cloudinit"
  agent   = 1
  qemu_os = "l26"

  cores  = 2
  memory = 1024

  scsihw = "virtio-scsi-single"

  onboot   = true
  vm_state = "running"

  network {
    model    = "virtio"
    bridge   = "vmbr0"
    firewall = true
  }

  disk {
    size     = "9G"
    storage  = "local-lvm"
    type     = "scsi"
    iothread = 1
  }
}

hestiahacker commented 10 months ago

Thanks for the example. I tested that Terraform config with the branch over at #887 against Proxmox 7.4-17 and it worked fine; however, I was also not able to reproduce the issue with the latest release (v2.9.14), so I'm not really sure whether the upcoming release will fix the issue or not.

I had to comment out the vm_state = "running" line because if that's left in there, I got this error:

│ Error: Unsupported argument
│ 
│   on main.tf line 19, in resource "proxmox_vm_qemu" "yolo":
│   19:   vm_state = "running"
│ 
│ An argument named "vm_state" is not expected here.

Apart from commenting out that line, everything else is identical to your example. However, I did notice that the disk in my yolo VM was a virtio device (which is what my template VM is using) instead of a scsi device, and that I did have a SCSI device, but it was in addition to the virtio device instead of in place of it.

Do you know if I need to create a new base image which uses a scsi device to reproduce this issue? If so I can give it a try, but it'll likely be a few days before I can get to it.

hestiahacker commented 10 months ago

To be clear, the disk is only duplicated with the v2.9.14 release. With the unreleased version it does not duplicate the disk; however, it is a virtio disk, and iothread is set to 0. Anyway, let me know if I should create a new base image and try to reproduce the issue again.

luispabon commented 10 months ago

How about we wait for that release and put this on the back burner, then circle back when it's out to see if this is still an issue? Just to avoid blocking it further.

For ref this is the hardware for that VM I'm cloning:

[screenshot of the template VM's hardware configuration]

Does the new release in any way address proxmox 8.1 compat btw?

hestiahacker commented 9 months ago

My test server is running Proxmox 7.4-17, so I am not able to reproduce this issue, however I can confirm that v3.0.1-rc1 can deploy VMs with iothread explicitly set (which is the workaround for this issue). I don't have a Proxmox 8.1 server, so I'm afraid I can't test this on that version.

Aside: If someone wants to give me access to a test instance of Proxmox 8.1 for trying to reproduce this issue, I'd be happy to give it a try and report back. I just can't afford to have multiple test servers and from what I understand Proxmox can never be rolled back, so if I upgrade to 8.1, I can never go back to 7.x or 8.0 again, short of a complete reinstallation.

The new provider version requires a change to the terraform file. The punchline on the new format is that the disks section needs to look something like this:

  disks {
    virtio {
      virtio0 {
        disk {
          size            = 10
          cache           = "writeback"
          storage         = "local-zfs"
          iothread        = false
        }
      }
    }
  }

(and of course you can replace virtio with scsi if that's what your base image uses).
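
For example, the scsi variant of the same block would look something like this (a sketch under the same 3.0.1-rc1 schema; storage and size values are placeholders):

  disks {
    scsi {
      scsi0 {
        disk {
          size     = 10
          cache    = "writeback"
          storage  = "local-zfs"
          iothread = true
        }
      }
    }
  }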

Terraform files

Here are the files I was working with on my first test (before changing the definition of the disks):

provider.tf

terraform {
  required_providers {
    proxmox = {
      #source = "registry.example.com/telmate/proxmox"
      #version = ">=1.0.0"
      source = "Telmate/proxmox"
      #version = "=2.9.11"
      version = "=3.0.1-rc1"
    }
  }
  required_version = ">= 0.14"
}

main.tf

# From https://github.com/Telmate/terraform-provider-proxmox/issues/460#issuecomment-1884665830
resource "proxmox_vm_qemu" "yolo" {
  target_node = var.proxmox_host

  name = "yolo"

  vmid = "1337"

  clone   = "debian-12"
  agent   = 1
  qemu_os = "l26"

  cores  = 2
  memory = 1024

  scsihw = "virtio-scsi-single"

  onboot   = true
#  vm_state = "running"

  network {
    model    = "virtio"
    bridge   = "vmbr0"
    firewall = true
  }

  disk {
    size     = "10G"
#    storage  = "local-lvm"
    storage  = "local-zfs"
    type     = "scsi"
    iothread = 1
  }
}

And vars.tf just defines the proxmox_host variable to be the name of my proxmox node.