Telmate / terraform-provider-proxmox

Terraform provider plugin for proxmox
MIT License

Proxmox: The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange #971

Closed: MoustafaAMahmoud closed this issue 4 months ago

MoustafaAMahmoud commented 7 months ago

Hello Everyone,

I am trying to provision a QEMU VM on Proxmox to act as an HAProxy load balancer.

I am getting the error below. I tried trimming the configuration down to isolate the cause, but the error messages are not clear.

Proxmox version is 8.1.4

Error:

│ Error: Plugin did not respond
│
│   with proxmox_vm_qemu.haproxy[1],
│   on main.tf line 3, in resource "proxmox_vm_qemu" "haproxy":
│    3: resource "proxmox_vm_qemu" "haproxy" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain
│ more details.
╵
╷
│ Error: Plugin did not respond
│
│   with proxmox_vm_qemu.haproxy[0],
│   on main.tf line 3, in resource "proxmox_vm_qemu" "haproxy":
│    3: resource "proxmox_vm_qemu" "haproxy" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain
│ more details.
Stack trace from the terraform-provider-proxmox_v2.9.11 plugin:

panic: interface conversion: interface {} is string, not float64

goroutine 55 [running]:

My versions are:

terraform {
  required_version = ">= 1.7.0"
}

providers.tf


terraform {
    required_providers {
        proxmox = {
            source  = "telmate/proxmox"
            version = "2.9.11"
        }
        pass = {
            source  = "camptocamp/pass"
            version = "2.0.0"
        }
    }
}

# Create the HAProxy load balancer VM #
resource "proxmox_vm_qemu" "haproxy" {
  count       = length(var.vm_haproxy_ips)
  name        = "${var.vm_name_prefix}-haproxy-${count.index}"
  target_node = var.proxmox_node

  clone         = var.vm_template
  agent         = 1
  tags        = var.vm_tags

  ssh_user    = local.ssh_user
  ciuser      = local.ssh_user
  cipassword  = local.ssh_password

  os_type     = "cloud-init"
  sockets     = var.vm_sockets
  cores       = var.vm_haproxy_cores
  vcpus       = var.vm_sockets * var.vm_haproxy_cores
  cpu         = "host"
  memory      = var.vm_haproxy_max_ram
  balloon     = var.vm_haproxy_min_ram
  full_clone  = var.vm_full_clone
  onboot      = true

  network {
    model  = "virtio"
    bridge = var.vm_network_bridge
  }

  disk {
    type         = var.vm_disk_type
    size         = var.vm_haproxy_size
    storage      = var.vm_storage
  }

  ipconfig0    = "ip=${var.vm_haproxy_ips[count.index]}/${var.vm_netmask},gw=${var.vm_gateway}"
  searchdomain = var.vm_searchdomain
  nameserver   = var.vm_dns

  sshkeys = var.vm_sshkeys
}

# Extra args for ansible playbooks #
locals {
  ssh_user = var.vm_ssh_user != null ? var.vm_ssh_user : data.pass_password.ci_user.password
  ssh_password = var.vm_ssh_user_password != null ? var.vm_ssh_user_password : data.pass_password.ci_pass.password

  extra_args = {
    ubuntu = "-T 300"
    debian = "-T 300"
    centos = "-T 300"
    rhel   = "-T 300"
  }
}

Plan output


 # proxmox_vm_qemu.haproxy[0] will be created
  + resource "proxmox_vm_qemu" "haproxy" {
      + additional_wait           = 0
      + agent                     = 1
      + automatic_reboot          = true
      + balloon                   = 1536
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = (known after apply)
      + cipassword                = (sensitive value)
      + ciuser                    = "moustafa"
      + clone                     = "ubuntu-cloud-22.04"
      + clone_wait                = 0
      + cores                     = 1
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.23/24,gw=192.168.0.1"
      + kvm                       = true
      + memory                    = 2048
      + name                      = "mcube-haproxy-0"
      + nameserver                = "192.168.0.15"
      + numa                      = false
      + onboot                    = true
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = (known after apply)
      + searchdomain              = "moustafa.uk"
      + sockets                   = 2
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + ssh_user                  = "moustafa"
      + sshkeys                   = "ssh-rsa some id"
      + tablet                    = true
      + tags                      = "k8s"
      + target_node               = "sun"
      + unused_disk               = (known after apply)
      + vcpus                     = 2
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup             = 0
          + cache              = "none"
          + file               = (known after apply)
          + format             = (known after apply)
          + iops               = 0
          + iops_max           = 0
          + iops_max_length    = 0
          + iops_rd            = 0
          + iops_rd_max        = 0
          + iops_rd_max_length = 0
          + iops_wr            = 0
          + iops_wr_max        = 0
          + iops_wr_max_length = 0
          + iothread           = 0
          + mbps               = 0
          + mbps_rd            = 0
          + mbps_rd_max        = 0
          + mbps_wr            = 0
          + mbps_wr_max        = 0
          + media              = (known after apply)
          + replicate          = 0
          + size               = "32G"
          + slot               = (known after apply)
          + ssd                = 0
          + storage            = "local-lvm"
          + storage_type       = (known after apply)
          + type               = "scsi"
          + volume             = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }
    }

  # proxmox_vm_qemu.haproxy[1] will be created
  + resource "proxmox_vm_qemu" "haproxy" {
      + additional_wait           = 0
      + agent                     = 1
      + automatic_reboot          = true
      + balloon                   = 1536
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = (known after apply)
      + cipassword                = (sensitive value)
      + ciuser                    = "moustafa"
      + clone                     = "ubuntu-cloud-22.04"
      + clone_wait                = 0
      + cores                     = 1
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.24/24,gw=192.168.0.1"
      + kvm                       = true
      + memory                    = 2048
      + name                      = "mcube-haproxy-1"
      + nameserver                = "192.168.0.15"
      + numa                      = false
      + onboot                    = true
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = (known after apply)
      + searchdomain              = "moustafa.uk"
      + sockets                   = 2
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + ssh_user                  = "moustafa"
      + sshkeys                   = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBwHO4D9NdmnRYVq84HDdjZCS/2GDbs6HQVLVXMDFRc+lRRVK0PXdoH1lKdHLCLWdZhA4UC0e28WaBudVXMGS9kJnQj2k1ix2Z40QnORm3A4rqVD9V2S9TAFODRP2T43obwkq4568s2L+s+YpdK11r25ZtM5JOSUQyiXm/XwD3dSd38No6eYJQOQ+6KGZP8gbOUZ/TlUx7CN6dbZsGNm/t/CGdnV/5Yv4+Ae1lYldIk8c4HB+q8dEH/4qUfGnZVcTIifYeah8gW2ImFo/RQC9Tp8lGzZnZQY/PuAw1ATTCvtZuE9ifXN72Vl7OCwWGqgWjI2GGxLPKud8t6nSz82/eOtpiiGvDcg/kRZtFQgCDxM2/pZPTzsLitFCL11ae6eZiUCEFME2wSjpxcJZLJnjJgsBBTZLhhGeZ4ey2rDbbHmb1zXHEPE1wyFtfPNRMMNPviUObcj/0fup0Fqabpm4BUTtr4QGr0UtOHJNE/VzVMcoPi1Zc43WUbG5Jd6tGfyk= mostafaalaa@1-Tech-MostafaAlaa.local"
      + tablet                    = true
      + tags                      = "k8s"
      + target_node               = "sun"
      + unused_disk               = (known after apply)
      + vcpus                     = 2
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup             = 0
          + cache              = "none"
          + file               = (known after apply)
          + format             = (known after apply)
          + iops               = 0
          + iops_max           = 0
          + iops_max_length    = 0
          + iops_rd            = 0
          + iops_rd_max        = 0
          + iops_rd_max_length = 0
          + iops_wr            = 0
          + iops_wr_max        = 0
          + iops_wr_max_length = 0
          + iothread           = 0
          + mbps               = 0
          + mbps_rd            = 0
          + mbps_rd_max        = 0
          + mbps_wr            = 0
          + mbps_wr_max        = 0
          + media              = (known after apply)
          + replicate          = 0
          + size               = "32G"
          + slot               = (known after apply)
          + ssd                = 0
          + storage            = "local-lvm"
          + storage_type       = (known after apply)
          + type               = "scsi"
          + volume             = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }
    }

Plan: 2 to add, 0 to change, 8 to destroy.
local_file.keepalived_slave: Destroying... [id=2272df5fc6ab9289b167caa6bd706fbb2e396eeb]
null_resource.kubespray_download: Destroying... [id=6897062667547635289]
local_file.keepalived_master: Destroying... [id=741df5ad3097857b917566c16a9a76d4d0dd40eb]
local_file.keepalived_master: Destruction complete after 0s
null_resource.config_permission: Destroying... [id=3574222891898490847]
null_resource.kubespray_download: Destruction complete after 0s
local_file.keepalived_slave: Destruction complete after 0s
null_resource.config_permission: Destruction complete after 0s
local_file.kubespray_k8s_cluster: Destroying... [id=469322509fceb7912f610f76f2e1e6541cc88f04]
local_file.kubespray_all: Destroying... [id=e8a3a6aeee5e15634f68de062661f44da2583467]
proxmox_vm_qemu.haproxy[1]: Creating...
local_file.kubespray_hosts: Destroying... [id=22f8a3a6b07704998ca820fc61849cac379e344f]
local_file.kubespray_k8s_cluster: Destruction complete after 0s
proxmox_vm_qemu.haproxy[0]: Creating...
local_file.haproxy: Destroying... [id=72e5ef0066c3be5424ac2eb910002a30e7251e3d]
local_file.kubespray_all: Destruction complete after 0s
local_file.haproxy: Destruction complete after 0s
local_file.kubespray_hosts: Destruction complete after 0s
╷
│ Error: Plugin did not respond
│
│   with proxmox_vm_qemu.haproxy[1],
│   on main.tf line 3, in resource "proxmox_vm_qemu" "haproxy":
│    3: resource "proxmox_vm_qemu" "haproxy" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain
│ more details.
╵
╷
│ Error: Plugin did not respond
│
│   with proxmox_vm_qemu.haproxy[0],
│   on main.tf line 3, in resource "proxmox_vm_qemu" "haproxy":
│    3: resource "proxmox_vm_qemu" "haproxy" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain
│ more details.
╵

Stack trace from the terraform-provider-proxmox_v2.9.11 plugin:

panic: interface conversion: interface {} is string, not float64

goroutine 55 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc000127800, 0xc00000c0a8?)
    github.com/Telmate/proxmox-api-go@v0.0.0-20220818102740-0129fa923095/proxmox/config_qemu.go:579 +0x4774
github.com/Telmate/terraform-provider-proxmox/proxmox.prepareDiskSize(0x0?, 0xc0003c1270?, 0xf?)
    github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1617 +0xeb
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc0000c1600, {0xb2bec0?, 0xc0000b0050})
    github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:997 +0x178a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xd847f0?, {0xd847f0?, 0xc0000af4a0?}, 0xd?, {0xb2bec0?, 0xc0000b0050?})
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.21.0/helper/schema/resource.go:695 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc00018ae00, {0xd847f0, 0xc0000af4a0}, 0xc0004496c0, 0xc0000c1480, {0xb2bec0, 0xc0000b0050})
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.21.0/helper/schema/resource.go:837 +0xa7a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc0002fe570, {0xd847f0?, 0xc0000af380?}, 0xc000460910)
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.21.0/helper/schema/grpc_provider.go:1021 +0xe3c
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc00020fb80, {0xd847f0?, 0xc0000ae990?}, 0xc0000b4690)
    github.com/hashicorp/terraform-plugin-go@v0.14.0/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc29f00?, 0xc00020fb80}, {0xd847f0, 0xc0000ae990}, 0xc0000b4620, 0x0)
    github.com/hashicorp/terraform-plugin-go@v0.14.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002681e0, {0xd87268, 0xc000003520}, 0xc00039afc0, 0xc00024f050, 0x1212d00, 0x0)
    google.golang.org/grpc@v1.48.0/server.go:1295 +0xb0b
google.golang.org/grpc.(*Server).handleStream(0xc0002681e0, {0xd87268, 0xc000003520}, 0xc00039afc0, 0x0)
    google.golang.org/grpc@v1.48.0/server.go:1636 +0xa1b
google.golang.org/grpc.(*Server).serveStreams.func1.2()
    google.golang.org/grpc@v1.48.0/server.go:932 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
    google.golang.org/grpc@v1.48.0/server.go:930 +0x28a

Error: The terraform-provider-proxmox_v2.9.11 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
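
For context on what this panic means (a general Go illustration, not the provider's actual code): an unchecked type assertion on an interface{} value panics when the dynamic type does not match, which is exactly what "interface conversion: interface {} is string, not float64" reports. The comma-ok form of the assertion avoids the crash and lets the caller handle the mismatch:

package main

import "fmt"

func main() {
	// JSON-style API responses are often decoded into map[string]interface{}.
	// Here "size" holds a string (e.g. "32G") rather than a number.
	vmConfig := map[string]interface{}{"size": "32G"}

	// Unchecked assertion: would panic with
	// "interface conversion: interface {} is string, not float64"
	// because the stored value's dynamic type is string.
	// size := vmConfig["size"].(float64)

	// Comma-ok assertion: no panic; the mismatch can be handled gracefully.
	if size, ok := vmConfig["size"].(float64); ok {
		fmt.Println("numeric size:", size)
	} else {
		fmt.Printf("size is %T (%v), not float64\n", vmConfig["size"], vmConfig["size"])
	}
}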

I tried upgrading to the latest release candidate, 3.0.1-rc1, and got a similar error:

╷
│ Error: Plugin did not respond
│
│   with provider["registry.terraform.io/telmate/proxmox"],
│   on provider.tf line 20, in provider "proxmox":
│   20: provider "proxmox" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ConfigureProvider call. The plugin logs may contain more
│ details.
r2d2k commented 7 months ago

I have the same problem.

Resource:

resource "proxmox_vm_qemu" "k8s_worker" {
  name              = "vm-k8s-worker-1"
  target_node       = "pve-2"
}

Plan:

  # proxmox_vm_qemu.k8s_worker will be created
  + resource "proxmox_vm_qemu" "k8s_worker" {
      + additional_wait           = 5
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = (known after apply)
      + bootdisk                  = (known after apply)
      + clone_wait                = 10
      + cores                     = 1
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + kvm                       = true
      + memory                    = 512
      + name                      = "vm-k8s-worker-1"
      + nameserver                = (known after apply)
      + onboot                    = false
      + oncreate                  = true
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = "lsi"
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + tablet                    = true
      + target_node               = "pve-2"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)
    }

Result:

2024-03-27T17:58:53.014Z [ERROR] plugin.(*GRPCProvider).ApplyResourceChange: error="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-03-27T17:58:53.120Z [ERROR] vertex "proxmox_vm_qemu.k8s_worker" error: Plugin did not respond
╷
│ Error: Plugin did not respond
│
│   with proxmox_vm_qemu.k8s_worker,
│   on main.tf line 1, in resource "proxmox_vm_qemu" "k8s_worker":
│    1: resource "proxmox_vm_qemu" "k8s_worker" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵

Stack trace from the terraform-provider-proxmox_v2.9.14 plugin:

panic: interface conversion: interface {} is string, not float64

goroutine 45 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc0002b6060, 0x0?)
        github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x4605
github.com/Telmate/terraform-provider-proxmox/proxmox.prepareDiskSize(0x0?, 0xc0000cab70?, 0xf?, 0x0?)
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1737 +0xeb
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc000280900, {0xb66f60?, 0xc0004460a0})
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1059 +0x1cb2
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xdd7840?, {0xdd7840?, 0xc000297350?}, 0xd?, {0xb66f60?, 0xc0004460a0?})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:695 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000110ee0, {0xdd7840, 0xc000297350}, 0xc00031e410, 0xc000280780, {0xb66f60, 0xc0004460a0})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:837 +0xa85
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc00039be90, {0xdd7840?, 0xc000297230?}, 0xc000218960)
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000252320, {0xdd7840?, 0xc000296840?}, 0xc00025d960)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc6bc20?, 0xc000252320}, {0xdd7840, 0xc000296840}, 0xc00025d8f0, 0x0)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001d81e0, {0xddb420, 0xc0000071e0}, 0xc00025bd40, 0xc0003a98f0, 0x128f7a0, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1336 +0xd23
google.golang.org/grpc.(*Server).handleStream(0xc0001d81e0, {0xddb420, 0xc0000071e0}, 0xc00025bd40, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1704 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/grpc@v1.53.0/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/grpc@v1.53.0/server.go:963 +0x28a

Error: The terraform-provider-proxmox_v2.9.14 plugin crashed!
mattberjon commented 7 months ago

Hi, I am seeing the same error as well.

Here is the stack trace:

╷
│ Error: Plugin did not respond
│ 
│   with proxmox_vm_qemu.postgresql-prd-01,
│   on postgresql.tf line 1, in resource "proxmox_vm_qemu" "postgresql-prd-01":
│    1: resource "proxmox_vm_qemu" "postgresql-prd-01" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵

Stack trace from the terraform-provider-proxmox_v2.9.14 plugin:

panic: interface conversion: interface {} is string, not float64

goroutine 97 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc0004a0120, 0x0?)
        github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x4605
github.com/Telmate/terraform-provider-proxmox/proxmox.prepareDiskSize(0x0?, 0xc0002e8e50?, 0x10?, 0xc0002e8b94?)
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1737 +0xeb
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc000310780, {0xb66f60?, 0xc000353360})
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1059 +0x1cb2
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xdd7840?, {0xdd7840?, 0xc00047af30?}, 0xd?, {0xb66f60?, 0xc000353360?})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:695 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000196ee0, {0xdd7840, 0xc00047af30}, 0xc000452d00, 0xc000310b80, {0xb66f60, 0xc000353360})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:837 +0xa85
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000425ec0, {0xdd7840?, 0xc00047ae10?}, 0xc0004ea2d0)
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc0002bc3c0, {0xdd7840?, 0xc00047a420?}, 0xc0001b4540)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc6bc20?, 0xc0002bc3c0}, {0xdd7840, 0xc00047a420}, 0xc0001b4000, 0x0)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00025c1e0, {0xddb420, 0xc0002bb6c0}, 0xc0004c0000, 0xc000433a40, 0x128f7a0, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1336 +0xd23
google.golang.org/grpc.(*Server).handleStream(0xc00025c1e0, {0xddb420, 0xc0002bb6c0}, 0xc0004c0000, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1704 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/grpc@v1.53.0/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/grpc@v1.53.0/server.go:963 +0x28a

Error: The terraform-provider-proxmox_v2.9.14 plugin crashed!
mattberjon commented 7 months ago

Not sure if it helps, but here is part of the trace:

proxmox_vm_qemu.postgresql-prd-01: Still creating... [30s elapsed]                                                                                                                                          
2024-04-03T21:13:05.487+0200 [TRACE] provider.terraform-provider-proxmox_v2.9.14: Served request: @caller=runtime/panic.go:884 @module=sdk.proto tf_provider_addr=registry.terraform.io/telmate/proxmox tf_resource_type=proxmox_vm_qemu tf_rpc=ApplyResourceChange tf_proto_version=5.3 tf_req_id=e309f81f-b485-72f2-b497-5b6c7a346daa timestamp="2024-04-03T21:13:05.486+0200"
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: panic: interface conversion: interface {} is string, not float64
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: goroutine 44 [running]:
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc00055a718, 0xc9d509?)
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x4605
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc000632400, {0xb66f60?, 0xc0001b7360})
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:972 +0x2c4d
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xdd7840?, {0xdd7840?, 0xc00024a150?}, 0xd?, {0xb66f60?, 0xc0001b7360?})
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:695 +0x178
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc00018aee0, {0xdd7840, 0xc00024a150}, 0xc00046c750, 0xc000632c80, {0xb66f60, 0xc0001b7360})
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:837 +0xa85
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000411f20, {0xdd7840?, 0xc00024a030?}, 0xc0003662d0)
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:1021 +0xe8d
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc0002a03c0, {0xdd7840?, 0xc0000b8f00?}, 0xc0001c01c0)
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:818 +0x574
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc6bc20?, 0xc0002a03c0}, {0xdd7840, 0xc0000b8f00}, 0xc0001c0000, 0x0)
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002401e0, {0xddb420, 0xc0001024e0}, 0xc00049e240, 0xc00041f830, 0x128f7a0, 0x0)
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       google.golang.org/grpc@v1.53.0/server.go:1336 +0xd23
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: google.golang.org/grpc.(*Server).handleStream(0xc0002401e0, {0xddb420, 0xc0001024e0}, 0xc00049e240, 0x0)
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       google.golang.org/grpc@v1.53.0/server.go:1704 +0xa2f
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: google.golang.org/grpc.(*Server).serveStreams.func1.2()
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       google.golang.org/grpc@v1.53.0/server.go:965 +0x98
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14: created by google.golang.org/grpc.(*Server).serveStreams.func1
2024-04-03T21:13:05.489+0200 [DEBUG] provider.terraform-provider-proxmox_v2.9.14:       google.golang.org/grpc@v1.53.0/server.go:963 +0x28a
2024-04-03T21:13:05.494+0200 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/telmate/proxmox/2.9.14/linux_amd64/terraform-provider-proxmox_v2.9.14 pid=2041698 error="exit status 2"
2024-04-03T21:13:05.494+0200 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"                                            
2024-04-03T21:13:05.494+0200 [ERROR] plugin.(*GRPCProvider).ApplyResourceChange: error="rpc error: code = Unavailable desc = error reading from server: EOF"                                                
2024-04-03T21:13:05.494+0200 [TRACE] maybeTainted: proxmox_vm_qemu.postgresql-prd-01 encountered an error during creation, so it is now marked as tainted
2024-04-03T21:13:05.494+0200 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/telmate/proxmox" is in the global cache
2024-04-03T21:13:05.494+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for proxmox_vm_qemu.postgresql-prd-01
2024-04-03T21:13:05.494+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: removing state object for proxmox_vm_qemu.postgresql-prd-01
2024-04-03T21:13:05.494+0200 [TRACE] evalApplyProvisioners: proxmox_vm_qemu.postgresql-prd-01 is tainted, so skipping provisioning
2024-04-03T21:13:05.494+0200 [TRACE] maybeTainted: proxmox_vm_qemu.postgresql-prd-01 was already tainted, so nothing to do
2024-04-03T21:13:05.494+0200 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/telmate/proxmox" is in the global cache
2024-04-03T21:13:05.494+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for proxmox_vm_qemu.postgresql-prd-01
2024-04-03T21:13:05.494+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: removing state object for proxmox_vm_qemu.postgresql-prd-01
2024-04-03T21:13:05.494+0200 [TRACE] statemgr.Filesystem: not making a backup, because the new snapshot is identical to the old
2024-04-03T21:13:05.494+0200 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2024-04-03T21:13:05.495+0200 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2024-04-03T21:13:05.497+0200 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-04-03T21:13:05.497+0200 [ERROR] vertex "proxmox_vm_qemu.postgresql-prd-01" error: Plugin did not respond
2024-04-03T21:13:05.497+0200 [TRACE] vertex "proxmox_vm_qemu.postgresql-prd-01": visit complete, with errors
2024-04-03T21:13:05.497+0200 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/telmate/proxmox\"] (close)" errored, so skipping
2024-04-03T21:13:05.497+0200 [TRACE] dag/walk: upstream of "root" errored, so skipping
2024-04-03T21:13:05.497+0200 [TRACE] statemgr.Filesystem: not making a backup, because the new snapshot is identical to the old
2024-04-03T21:13:05.497+0200 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2024-04-03T21:13:05.497+0200 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
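
The prepareDiskSize and NewConfigQemuFromApi frames in the traces above, together with size = "32G" in the plan output, suggest the provider asserts a size-like API field to float64 while Proxmox returns it as a string. As a hedged sketch only (parseDiskSizeGB is a hypothetical helper, not a function in the provider or in proxmox-api-go), a tolerant decoder would accept both representations instead of panicking:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseDiskSizeGB is a hypothetical helper: it accepts either a float64
// (size already in GB) or a string such as "32G" or "32" and returns the
// size in GB, instead of panicking on an unexpected type.
func parseDiskSizeGB(v interface{}) (float64, error) {
	switch s := v.(type) {
	case float64:
		return s, nil
	case string:
		return strconv.ParseFloat(strings.TrimSuffix(s, "G"), 64)
	default:
		return 0, fmt.Errorf("unexpected disk size type %T", v)
	}
}

func main() {
	for _, raw := range []interface{}{32.0, "32G", "32"} {
		gb, err := parseDiskSizeGB(raw)
		fmt.Println(gb, err)
	}
}
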
empereira commented 7 months ago

Any news on this error?

diverdale commented 6 months ago

I am getting the same error with Terraform 1.8.2 and Proxmox 8.2.2.

Stack Trace:

Stack trace from the terraform-provider-proxmox_v2.9.11 plugin:

panic: interface conversion: interface {} is string, not float64

goroutine 45 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0x14000896500, 0x14000586300?)
    github.com/Telmate/proxmox-api-go@v0.0.0-20220818102740-0129fa923095/proxmox/config_qemu.go:579 +0x3e90
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0x14000458080, {0x103443f40?, 0x140000b6f00})
    github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:908 +0x1c7c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x103578708?, {0x103578708?, 0x1400035a2d0?}, 0xd?, {0x103443f40?, 0x140000b6f00?})
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.21.0/helper/schema/resource.go:695 +0x138
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x140002bc700, {0x103578708, 0x1400035a2d0}, 0x14000880820, 0x14000616300, {0x103443f40, 0x140000b6f00})
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.21.0/helper/schema/resource.go:837 +0x874
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x1400000d188, {0x103578708?, 0x1400035a1b0?}, 0x14000612000)
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.21.0/helper/schema/grpc_provider.go:1021 +0xb94
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x140000ca320, {0x103578708?, 0x1400078d110?}, 0x140000d2460)
    github.com/hashicorp/terraform-plugin-go@v0.14.0/tfprotov5/tf5server/server.go:818 +0x3c0
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x103542dc0?, 0x140000ca320}, {0x103578708, 0x1400078d110}, 0x140000d23f0, 0x0)
    github.com/hashicorp/terraform-plugin-go@v0.14.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x174
google.golang.org/grpc.(*Server).processUnaryRPC(0x140003c4000, {0x10357b148, 0x14000102d00}, 0x140002e9c20, 0x140003c6c60, 0x1039eeda0, 0x0)
    google.golang.org/grpc@v1.48.0/server.go:1295 +0x9d8
google.golang.org/grpc.(*Server).handleStream(0x140003c4000, {0x10357b148, 0x14000102d00}, 0x140002e9c20, 0x0)
    google.golang.org/grpc@v1.48.0/server.go:1636 +0x840
google.golang.org/grpc.(*Server).serveStreams.func1.2()
    google.golang.org/grpc@v1.48.0/server.go:932 +0x88
created by google.golang.org/grpc.(*Server).serveStreams.func1
    google.golang.org/grpc@v1.48.0/server.go:930 +0x298

Error: The terraform-provider-proxmox_v2.9.11 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

➜ proxmox terraform --version
Terraform v1.8.2
on darwin_arm64

diverdale commented 6 months ago

Same issue with plugin version 2.9.14: "Error: The terraform-provider-proxmox_v2.9.14 plugin crashed!"

github-actions[bot] commented 4 months ago

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide the full configuration and debug logs.

github-actions[bot] commented 4 months ago

This issue was closed because it has been inactive for 5 days since being marked as stale.