Telmate / terraform-provider-proxmox

Terraform provider plugin for proxmox
MIT License

Duplicate disks created #887

Status: Open. vilhelmprytz opened this issue 10 months ago.

vilhelmprytz commented 10 months ago

I've had the same issues with Proxmox 8 as a lot of people (like #882 and #863), but when I build this project locally from master and use that build, both of those problems are solved. However, a new problem arises. The provider is supposed to increase the size of the 2G base image disk that I have, but instead it seems to detach that disk and create a new empty one (with the correct size). The new disk is of course not bootable, and the terraform apply command stalls.

[screenshot: the original 2G disk detached as unused, with a new empty disk of the requested size attached]

Any idea if this is a known problem? It didn't seem to me like a duplicate of any other open issues right now.

Proxmox: 8.1.3 Provider: local build of latest commit, https://github.com/Telmate/terraform-provider-proxmox/commit/a8675d3967710bab4ac08fad9dbc05eed3ae2c58

everythings-gonna-be-alright commented 10 months ago

Cause: the pxapi.NewConfigQemuFromApi function now returns disks in the Disks field of the ConfigQemu struct, but the code still expects them in QemuDisks. The Disks *QemuStorages structure differs in format from QemuDisks QemuDevices.

https://github.com/Telmate/terraform-provider-proxmox/blob/a8675d3967710bab4ac08fad9dbc05eed3ae2c58/proxmox/resource_vm_qemu.go#L1148C2-L1148C2

vilhelmprytz commented 10 months ago

So have you found a solution to this problem? I'm guessing it's something that needs to be fixed in the provider, not in the .tf configuration I've used.

everythings-gonna-be-alright commented 10 months ago

Unfortunately, it isn't easy for me (insufficient knowledge of Go). As far as I can see, a significant part of the disk logic needs to be rewritten.

djonko commented 10 months ago

Same problem on Proxmox 8.1.3. Thanks in advance.

vilhelmprytz commented 10 months ago

@mleone87 Anything new regarding this?

ihatethecloud commented 10 months ago

Having the same problem but with TheGameProfi/proxmox. It seems defining file and volume fixes it, but you need to know the ID beforehand…

  disk {
    size    = "16G"
    type    = "scsi"
    storage = "vg_nvme"
    ssd     = 1
    file    = "vm-108-disk-0"
    volume  = "vg_nvme:vm-108-disk-0"
  }

Not sure how to get around this.

andrei-matei commented 10 months ago

Having the same issue myself. For now, the provider seems to be unusable.

hestiahacker commented 10 months ago

I was able to duplicate this issue in Proxmox 7.4-17 as well, so it doesn't seem to be related to the version of Proxmox.

I don't think the issue is where https://github.com/Telmate/terraform-provider-proxmox/issues/887#issuecomment-1868043544 pointed to, because that line hasn't changed in 3 years.

I tried to find the exact commit where this stopped working so we could get a better idea of where the problem was introduced. Unfortunately, there are a lot of commits where the code doesn't compile, which made it difficult to narrow down. Here's the best I could do:

I spot-checked the 24 commits in between and none of them compiled except a8675d3967710bab4ac08fad9dbc05eed3ae2c58, but that one failed because my VM name was var.fqdn and dots were not allowed in that version (this was fixed in commit e51e787c3a792176a6401c2db1b15cb2a30449e0, which is where I was able to reproduce this issue).

My two guesses for where the problem was introduced are:

  1. Commit 4a602733bbb5b767eeb79e1b27cf98665c904bb4, specifically the block starting on line 1066, which adds an additional disk. That change was merged into the default branch in https://github.com/Telmate/terraform-provider-proxmox/pull/732.
  2. Commit d02bb5dba45bdb8e65ed58fccaa885fd6432e6fa, specifically the code block starting on line 2202, which transforms a list of QEMU devices (including disk drives). That change was merged into the default branch in https://github.com/Telmate/terraform-provider-proxmox/pull/866.

Those guesses are listed in the order of where my gut tells me the issue lies. It's also possible that the change was in one of the libraries the provider uses; there are a ton of them, and many were updated between the last commit where I could confirm the issue was absent and the first where I could reproduce it.

I was actually trying to investigate another issue (https://github.com/Telmate/terraform-provider-proxmox/issues/704) when I came across this one, so I'm going to go back to trying to identify the root cause of that one. Hopefully my notes here will help someone get to the bottom of this issue before another release is cut.

riverar commented 10 months ago

Noticed my recent EFI changes were referenced here (#732). Happy to help debug as well.

riverar commented 10 months ago

Can someone provide steps to reproduce and/or a proxmox_vm_qemu resource block? I cannot reproduce on Proxmox 8.0.4 / Telmate master.

// before
disk {
  type    = "sata"
  size    = "60G"
  storage = "local-lvm"
  cache   = "writethrough"
  discard = "on"
}

// after
disk {
  type    = "sata"
  size    = "62G"
  storage = "local-lvm"
  cache   = "writethrough"
  discard = "on"
}
// plan
  # proxmox_vm_qemu.vm1011 will be updated in-place
  ~ resource "proxmox_vm_qemu" "vm1011" {
      ...
      ~ disk {
          ~ size               = "60G" -> "62G"
            # (27 unchanged attributes hidden)
        }

Target machine via Proxmox UI: [screenshot]

hestiahacker commented 10 months ago

I am able to reproduce the issue with the small example below using a provider compiled from commit e51e787c3a792176a6401c2db1b15cb2a30449e0 (the latest as of right now):

# Define the usual provider things
terraform {
  required_providers {
    proxmox = {
      source = "registry.example.com/telmate/proxmox"
      version = ">=1.0.0"
      #source = "Telmate/proxmox"
      #version = "=2.9.11"
    }
  }
  required_version = ">= 0.14"
}

resource "proxmox_vm_qemu" "server" {
  name              = "test.example.com"
  target_node       = "ra"
  clone             = "debian-12"
  full_clone        = true
  os_type           = "cloud-init"
  cores             = 1
  sockets           = "1"
  cpu               = "host"
  memory            = 512
  scsihw            = "virtio-scsi-pci"
  bootdisk          = "virtio0"
  disk {
    size            = "20G"
    type            = "virtio"
    cache           = "writeback"
    storage         = "local-zfs"
  }
  network {
    model           = "virtio"
    bridge          = "vmbr0"
  }

  # Cloud Init Settings
  # Reference: https://pve.proxmox.com/wiki/Cloud-Init_Support
  ipconfig0 = "ip=192.168.22.222/22,gw=192.168.22.1"
  nameserver = "192.168.23.100 192.168.22.100"
  sshkeys = file("${path.root}/test.pub")
}

I am testing against Proxmox 7.4-17. That created test.example.com with two disks: disk-0 (unused) and disk-1 (virtio0). It attempts to boot but just boot-loops because it can't find any bootable media.

[screenshot: the cloned VM with disk-0 unused and disk-1 attached as virtio0]

riverar commented 10 months ago

Thanks! Can reproduce here now.

riverar commented 10 months ago

Looks like this broke due to an upstream QEMU disks overhaul (https://github.com/Telmate/proxmox-api-go/pull/255), as @everythings-gonna-be-alright suggested (tagging @Tinyblargon), which changed the ConfigQemu.QemuDisks behavior. (I suspect the "deprecated" label is in error and it's really just gone/obsolete now.)

This may be expected churn for the master branch, so not faulting anyone here. We just need to do the work to bring the terraform provider back in alignment.

Tinyblargon commented 10 months ago

@riverar #794 was supposed to change this behavior when the functionality was changed in the upstream library. Due to some setbacks, this kept getting delayed.

riverar commented 10 months ago

@Tinyblargon Oh nice, I missed that PR. Thanks!

pescobar commented 10 months ago

We are also hitting this problem with Proxmox 8.1.3 and provider TheGameProfi/proxmox version 2.9.15.

pescobar commented 10 months ago

While debugging this issue I realized that once I create a new QEMU VM and run terraform state show, there is no disk section in the VM's state, which is why the provider tries to add the disk again.

It's odd that during the initial creation of the VM the disk is properly created, in the right storage pool and with the right size defined in the Terraform code, but it's just never added to the Terraform state.

hestiahacker commented 10 months ago

I can confirm that the code that @Tinyblargon wrote does fix this problem. I tested using the same small example that I posted earlier this week.

I've submitted a merge request with Tinyblargon's changes that can be applied cleanly to the HEAD of the default branch here in this repo. I'm pretty sure I've resolved all the merge conflicts correctly, and I have tested it to make sure it fixes this issue, but if anyone else would be willing and able to give it a review, I'd appreciate having a second pair of eyes on this.

And if anyone wants to just check out the code, compile it and verify that it fixes the issue, the code can be found here: https://github.com/hestiahacker/terraform-provider-proxmox/tree/overhaul-qemu-disks Having someone else reproduce my results would be good for avoiding that "works on my box" problem. :slightly_smiling_face:

hestiahacker commented 10 months ago

I've also tested an updated terraform file which uses the new syntax for configuring multiple disks. This avoids a deprecation warning from being printed, which makes me happy. My updated terraform file is below.

# Define the usual provider things
terraform {
  required_providers {
    proxmox = {
      source = "registry.example.com/telmate/proxmox"
      version = ">=1.0.0"
      #source = "Telmate/proxmox"
      #version = "=2.9.11"
    }
  }
  required_version = ">= 0.14"
}

resource "proxmox_vm_qemu" "server" {
  name              = "test.example.com"
  target_node       = "ra"
  clone             = "debian-12"
  full_clone        = true
  os_type           = "cloud-init"
  cores             = 1
  sockets           = "1"
  cpu               = "host"
  memory            = 512
  scsihw            = "virtio-scsi-pci"
  bootdisk          = "virtio0"
  disks {
    virtio {
      virtio0 {
        disk {
          size            = 20
          cache           = "writeback"
          storage         = "local-zfs"
        }
      }
    }
  }
  network {
    model           = "virtio"
    bridge          = "vmbr0"
  }

  # Cloud Init Settings
  # Reference: https://pve.proxmox.com/wiki/Cloud-Init_Support
  ipconfig0 = "ip=192.168.22.222/22,gw=192.168.22.1"
  nameserver = "192.168.23.100 192.168.22.100"
  sshkeys = file("${path.root}/test.pub")
}

victormongi commented 10 months ago

@hestiahacker what should we do? Do we have to wait for a new release?

hestiahacker commented 10 months ago

I compiled the latest code and have been using it; that seems to have fixed both this issue and #704. If you need a solution now, I'd suggest taking this route.

There are instructions for compiling from source, but if you're compiling on a Debian-based machine, it'd look something like this:

git clone https://github.com/hestiahacker/terraform-provider-proxmox
cd terraform-provider-proxmox
git checkout overhaul-qemu-disks
sudo apt install -y ansible make
ansible-galaxy install gantsign.ansible-role-golang
ansible-playbook go.yml
. /etc/profile.d/golang.sh
make

At that point the new provider should be in the ./bin directory. If you aren't compiling on your deployment machine, copy the executable into a particular directory on the deployer. Here are the commands from the aforementioned installation guide:

PLUGIN_ARCH=linux_amd64
mkdir -p ~/.terraform.d/plugins/registry.example.com/telmate/proxmox/1.0.0/${PLUGIN_ARCH}
cp bin/terraform-provider-proxmox ~/.terraform.d/plugins/registry.example.com/telmate/proxmox/1.0.0/${PLUGIN_ARCH}/

The source path of the cp command will change if you're compiling on a different machine, but that should be easy enough to adjust. The last step is to tell Terraform to use this new provider, which means updating your terraform block to look like this:

terraform {
  required_providers {
    proxmox = {
      source  = "registry.example.com/telmate/proxmox"
      version = ">=1.0.0"
    }
  }
  required_version = ">= 0.14"
}

That should work, but if you ever recompile, Terraform will complain that the checksum has changed. To deal with that, you can manually remove the offending checksum from the .terraform.lock.hcl file with a text editor, or (what I personally do) just delete the lock file and regenerate it like so:

rm .terraform.lock.hcl && terraform get -update && terraform init -upgrade && terraform version

Also, be aware that this is the latest code, which hasn't even been merged into this repo yet, and a fair amount has changed, so there's some risk of bugs causing you problems. I'd suggest testing this code in your environment and with your configuration even more than you would a new official release. I've tested it in my environment, but if you're using different features than me, you could hit some code path with a bug I didn't run into.

If you are in an environment that has a low risk tolerance and you can't test this out, I have two suggestions:

  1. Get a test environment! :scream:
  2. Wait for the official release :grin:

kw149 commented 10 months ago

@hestiahacker thanks so much for the instructions; I was able to follow them (which is saying something). I'm not a developer and have only started using Proxmox/Terraform in the last few weeks. I'm just testing this all out in my lab, so I have nothing to lose if it all goes wrong.

Previously, Terraform would create a duplicate SCSI disk as well as the virtio one; furthermore, each time you applied changes it would create another duplicate. For example, making a network device change would result in yet another disk.

Anyway, moving on. I've followed your instructions and tested things out, but I have a new error, which I've not seen before:

panic: interface conversion: interface {} is string, not float64

goroutine 52 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc0003e8718, 0xc9d509?)
        github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x4605
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc0003be300, {0xb66f60?, 0xc0003d2e60})
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:972 +0x2c4d
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xdd7840?, {0xdd7840?, 0xc0002f6570?}, 0xd?, {0xb66f60?, 0xc0003d2e60?})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:695 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0000e2ee0, {0xdd7840, 0xc0002f6570}, 0xc00037cc30, 0xc0003bf080, {0xb66f60, 0xc0003d2e60})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:837 +0xa85
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc0002e1ef0, {0xdd7840?, 0xc0002f6450?}, 0xc0001ef810)
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc0001a0500, {0xdd7840?, 0xc000525b60?}, 0xc00043fdc0)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc6bc20?, 0xc0001a0500}, {0xdd7840, 0xc000525b60}, 0xc00043fd50, 0x0)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00019a1e0, {0xddb420, 0xc0003144e0}, 0xc000515680, 0xc0002edce0, 0x128f7a0, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1336 +0xd23
google.golang.org/grpc.(*Server).handleStream(0xc00019a1e0, {0xddb420, 0xc0003144e0}, 0xc000515680, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1704 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/grpc@v1.53.0/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/grpc@v1.53.0/server.go:963 +0x28a

Error: The terraform-provider-proxmox_v2.9.14 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

I've not changed any of the .tf files, so either I have a misconfiguration or, as you mentioned, there is potential for errors. Thanks again for your efforts.

kw149 commented 10 months ago

Okay, ignore the above; I made a couple of mistakes. I'll post them here in case others do the same thing.

I forgot to update my providers file:

proxmox = {
  source  = "registry.example.com/telmate/proxmox"
  version = ">=1.0.0"
}

I also needed to update my Terraform file, so I changed this:

#  disk {
#    storage = "toaster-vm"
#    type = "scsi"
#    size = "12G"
#  }

disks {
  virtio {
    virtio0 {
      disk {
        size    = 12
        storage = "toaster-vm"
      }
    }
  }
}

I can confirm it's working now. I have an LXC container, Rocky 8 and 9, and Ubuntu VMs all working within the same plan.

For completeness, I did the following:

I had to run the compile as root; I kept getting Ansible/Golang errors (I built in an LXC container, which might have something to do with it). I copied the compiled plugin from /tmp/{compile directory} to my Terraform directory (not my home dir).

I then changed the provider file, followed the instructions above to fix the lock file, and modified the .tf file(s) for the disk declaration.

However, when I changed the plan (made a modification to the description), it showed some modifications:

~ disks {
          - ide {
            }
          ~ virtio {
              ~ virtio0 {
                  ~ disk {
                        id                   = 0
                      - replicate            = true -> null
                        # (18 unchanged attributes hidden)
                    }
                }
            }
        }

The disk is not duplicated, but why is the replicate flag changing?

[screenshot]
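(For anyone hitting the same diff: a minimal, untested sketch. It assumes the replicate attribute shown in the plan output above can simply be declared in the disk block so it matches the template's setting; this is a guess, not something confirmed by the maintainers.)

disks {
  virtio {
    virtio0 {
      disk {
        size      = 12
        storage   = "toaster-vm"
        replicate = true # match the template's replication setting so the plan stays clean
      }
    }
  }
}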

Tchoupinax commented 10 months ago

Hey! I've been following this issue, as I'm also impacted since upgrading Proxmox to v8.1.3. Following the instructions to build the latest version of the provider, I can now start a VM without error. However, the configuration is not the same as before and expected: it seems the cloud-init disk is not mounted.

Before: [screenshot]

After: [screenshot]

What is wrong here: […]

Tinyblargon commented 10 months ago

@Tchoupinax I've run a few tests on my end; could you try the version in #892, as it has the latest patches?

Tchoupinax commented 10 months ago

Hey @Tinyblargon, I tested your branch and it fixes my previous three issues. I wrote a comment here.

Thanks a lot for your work!

luispabon commented 10 months ago

Would these fixes also affect https://github.com/Telmate/terraform-provider-proxmox/issues/460? It's currently worked around by setting iothread on your disks to 0, as in the sketch below.
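(A minimal sketch of that workaround, assuming the old-style 2.9.x disk block where iothread is an integer flag; the other values are illustrative, not taken from #460:)

  disk {
    type     = "virtio"
    size     = "20G"
    storage  = "local-zfs"
    iothread = 0 # the workaround: disable iothreads on this disk
  }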

pescobar commented 10 months ago

@TheGameProfi do you plan to publish a new version in your repository including this fix?

TheGameProfi commented 10 months ago

> @TheGameProfi do you plan to publish a new version in your repository including this fix?

I will try to look into it today or tomorrow.

TheGameProfi commented 10 months ago

Sorry, I only now got time to check it out.

I tested the newest changes and they worked; I didn't see any errors. I've released the fixes in my repo.

Thanks Hestia & mleone87 & Tinyblargon for the fix :)

pescobar commented 10 months ago

Thanks @TheGameProfi for publishing a release to the Terraform registry including this fix. Much appreciated!

opentokix commented 9 months ago

> Thanks @TheGameProfi for publishing a release to the Terraform registry including this fix. Much appreciated!

But there is no new version on the HashiCorp registry? Or is it published under some name other than Telmate?

TheGameProfi commented 9 months ago

> Thanks @TheGameProfi for publishing a release to the Terraform registry including this fix. Much appreciated!
>
> But there is no new version on the HashiCorp registry? Or is it published under some name other than Telmate?

It is published as a fork under my own name, thegameprofi/proxmox. There are multiple versions with unreleased changes from this repo. But the newest version, at least, has problems mounting a cloud-init drive:

#901

den-patrakeev commented 9 months ago

Hi! I checked RC v3.0.1-rc1 on Terraform 1.6.6 / 1.7.1 with Proxmox 8.0.4 / 8.1.3. The error is gone. Everything works well. Thanks to all!

hestiahacker commented 9 months ago

I've also verified that v3.0.1-rc1 is able to deploy the small example without any problems.

Thank you all for the testing, fixing, and releasing. :slightly_smiling_face:

devZer0 commented 9 months ago

I still have this problem with Terraform v1.7.3, plugin 3.0.1-rc1, and PVE 8.1.4. On clone from a template I still get two disks and the VM won't boot, even with the new disks syntax.

What can I do to avoid the second disk being created?

I'm new to Terraform and it's totally frustrating.

Tinyblargon commented 9 months ago

@devZer0, could you create a new issue with a screenshot of your template's hardware, your Terraform config, and a screenshot of the cloned VM's hardware?

devZer0 commented 9 months ago

Thank you. I experimented further and found by chance that when I set the disk size in the Terraform file to exactly match the size of the disk in the VM template, the problem doesn't occur and the disk isn't duplicated.

Weird.
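(A minimal sketch of that workaround in the new disks syntax, assuming a template whose disk is exactly 2G as in the original report; the storage name is illustrative:)

disks {
  virtio {
    virtio0 {
      disk {
        size    = 2 # must exactly match the size of the disk in the VM template
        storage = "local-zfs"
      }
    }
  }
}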

kerem-ak1 commented 8 months ago

> Having the same problem but with TheGameProfi/proxmox. It seems defining file and volume fixes it, but you need to know the ID beforehand…
>
>   disk {
>     size    = "16G"
>     type    = "scsi"
>     storage = "vg_nvme"
>     ssd     = 1
>     file    = "vm-108-disk-0"
>     volume  = "vg_nvme:vm-108-disk-0"
>   }
>
> Not sure how to get around this.

@ihatethecloud I wanted to try this workaround, but it's failing at parameter verification. I'm using ZFS, though; not sure if that's related.

Latest version of TheGameProfi/proxmox, Proxmox version 8.1.4:

file = "vm-137-disk-0" volume = "local-zfs:vm-137-disk-0"

Am I doing something wrong?

ihatethecloud commented 8 months ago

> @ihatethecloud I wanted to try this workaround, but it's failing at parameter verification. […] Am I doing something wrong?

Don't use TheGameProfi/proxmox. Build this repo yourself if the fix has not been pushed to the registry yet.

kerem-ak1 commented 8 months ago

> Don't use TheGameProfi/proxmox. Build this repo yourself if the fix has not been pushed to the registry yet.

version = "3.0.1-rc1" of the Telmate provider is out.

The new version comes with a different disk schema; I had to update the disk schema in my .tf file, and now it works. It still has some glitches regarding disk type/controller, though. Anyway, thanks to everyone!

PS: if someone needs a running example with clone-from-template + cloud-init + LVM/ZFS, ping me. (A rough sketch of that combination follows.)
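(A minimal sketch of the 3.x disks schema combining a cloned boot disk with a cloud-init drive; the slot names and storage are illustrative guesses, not kerem-ak1's actual config, and the exact schema may vary across the 3.x release candidates. See the pastebin links later in the thread for a tested example. Values reuse hestiahacker's earlier repro.)

resource "proxmox_vm_qemu" "server" {
  name        = "test.example.com"
  target_node = "ra"
  clone       = "debian-12"
  full_clone  = true
  os_type     = "cloud-init"
  scsihw      = "virtio-scsi-pci"
  bootdisk    = "virtio0"

  disks {
    virtio {
      virtio0 {
        disk {
          size    = 20 # in the 3.x schema, size is a number of GiB
          storage = "local-zfs"
        }
      }
    }
    ide {
      ide2 {
        cloudinit {
          storage = "local-zfs" # where the cloud-init drive is created
        }
      }
    }
  }

  ipconfig0 = "ip=192.168.22.222/22,gw=192.168.22.1"
}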

JGHLab commented 8 months ago

> PS: if someone needs a running example with clone-from-template + cloud-init + LVM/ZFS, ping me.

Could you give me a working example? I can't get mine to work; I get an unbootable disk error and the hardware settings are incorrect.

kerem-ak1 commented 8 months ago

@JGHLab: https://pastebin.com/8RJnNYUK is one you can use for Proxmox; https://pastebin.com/CPMEcxx7 if you need a cloud-init template.

github-actions[bot] commented 6 months ago

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

devZer0 commented 6 months ago

not stale

github-actions[bot] commented 4 months ago

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

sebdanielsson commented 4 months ago

/keep open