Open miberecz opened 7 months ago
Hi @miberecz 👋🏼
Sorry, I don't fully understand your use case. Are you trying to delete the existing VM, or applying a change that causes the VM to be re-created (hence the "destroying..." in the output)?
Is the VM running when you call apply the second time?
Is the VM cloned from a template, or a standalone one?
You don't have the agent enabled; was that on purpose, to trigger the timeout?
There are many different timeouts in the code, and it is important to understand the use case to identify which one (or which combination of them) does not work as expected. Could you please provide a bit more detail?
Yes, sorry if I wasn't clear enough. I'm updating an existing VM in this example (the CPU core count from 2 to 3, but it can be any resource value). Yes, the VM is running the second time. It's a VM cloned from a template. The agent is disabled to trigger the timeout.

The original issue was that our network sometimes has problems, and we get failed Pulumi runs because of that. It's really hard to retry if you have to wait 1800 seconds whenever there is an issue, so I lowered the timeouts when I noticed the problem. To simulate a network issue, I just disable the agent, because I noticed that the behavior is the same in both cases.
Marking this issue as stale due to inactivity in the past 180 days. This helps us focus on the active issues. If this issue is reproducible with the latest version of the provider, please comment. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!
Describe the bug
I use the provider through the Pulumi wrapper, where I noticed an issue. Eventually we tracked it down to an upstream issue: https://github.com/muhlba91/pulumi-proxmoxve/issues/266
If a timeout value is set (e.g. `timeout_shutdown_vm`), it applies on the first run, but not afterwards.

To Reproduce
Steps to reproduce the behavior:
After 180 seconds, the timeout occurs as it should:
```
proxmox_virtual_environment_vm.timeouttester9000: Destroying... [id=1301]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m50s elapsed]
╷
│ Error: error waiting for VM shutdown: error retrieving task status: received an HTTP 599 response - Reason: Too many redirections
```
Please also provide a minimal Terraform configuration that reproduces the issue.
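Since the original configuration was not posted in this thread, here is a minimal sketch of the kind of resource being described (all names, IDs, and values are hypothetical, assuming the bpg/terraform-provider-proxmox resource schema):

```hcl
resource "proxmox_virtual_environment_vm" "timeouttester9000" {
  node_name = "pve"   # hypothetical Proxmox node name
  vm_id     = 1301    # VM ID matching the log output above

  clone {
    vm_id = 1000      # hypothetical template VM ID (VM is cloned from a template)
  }

  cpu {
    cores = 2         # change to 3 on the second apply to trigger an in-place update
  }

  agent {
    enabled = false   # agent disabled on purpose, to force the shutdown wait to time out
  }

  timeout_shutdown_vm = 180  # expected to apply on every run, not just the first
}
```

The reported behavior is that the lowered `timeout_shutdown_vm` is honored on the first apply but falls back to the long default on subsequent applies.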
Expected behavior
Timeouts are consistent across runs.