Closed: AlexKastrytski closed this issue 5 months ago.
I've now done the bulk of the work for migrating from Telmate to bpg as well. I'm already able to deploy VMs again. The full deployment process only takes 12-15s per VM with bpg; it used to be up to a minute per VM with the Telmate provider.
Started working on it yesterday. The (mildly) tricky part up to this point was the data type switch from "clone" (VM name, string) in Telmate to "clone.vm_id" (VM ID, number) in bpg. Had to refactor some of my code regarding automatic template detection, which uses the corresponding output variables to choose the latest "correct" template on the target node.
I just wanted to deploy some infrastructure earlier this week like most people here, but it's better to rip off the band-aid now and switch to a supported provider for Proxmox. Reminds me of [1]. Such is work in IT.
@Telmate - thank you for all of your contributions.
[1] https://www.youtube.com/watch?v=AbSehcT19u0
Nothing to do with the subject, but: are you sure you are cloning the full disk and not a linked one? I was surprised that it was fast with bpg too, but it did not work as expected because of a boot loop; I had to do a full clone instead.
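For reference, a minimal sketch of explicitly requesting a full clone with bpg; this assumes the clone block's optional full flag (which, as far as I know, defaults to a full clone), so treat it as illustrative rather than authoritative:

clone {
  vm_id = 100  # placeholder: ID of the template VM
  full  = true # explicitly request a full clone instead of a linked clone
}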
What you're asking for is a feature request, but here is the trick to keep using the VM name instead of a hardcoded ID. I have only one template, but you can add tags to do more filtering, or process every VM found to get the right ID :)
data "proxmox_virtual_environment_vms" "template" {
node_name = var.target_node
tags = ["template", "ubuntu"]
}
resource "proxmox_virtual_environment_vm" "vm" {
clone {
vm_id = data.proxmox_virtual_environment_vms.template.vms[0].vm_id
}
}
Hi @fulljackz, yes it's a full clone off of the template, including the cloud-init disk.
I see. My approach is different. I have two "groups" of templates (bullseye, bookworm) and want to make sure I'm always using the latest template based on the name, which contains the creation date, like so:
latest-bookworm-template = "packer-bookworm-template-20231122-1728"
latest-bullseye-template = "packer-bullseye-template-20231122-1728"
latest-bookworm-template-id = "101"
latest-bullseye-template-id = "102"
Images are built from a Jenkins Packer pipeline.
The terraform data sources:
data "external" "latest_bookworm_template" {
program = ["bash", "../../../scripts/proxmox_get_latest_template.sh"]
query = {
PROXMOX_URL = var.proxmox_url
PROXMOX_USERNAME = var.proxmox_username
PROXMOX_PASSWORD = var.proxmox_password
TEMPLATE_TYPE = "bookworm"
}
}
data "external" "latest_bullseye_template" {
program = ["bash", "../../../scripts/proxmox_get_latest_template.sh"]
query = {
PROXMOX_URL = var.proxmox_url
PROXMOX_USERNAME = var.proxmox_username
PROXMOX_PASSWORD = var.proxmox_password
TEMPLATE_TYPE = "bullseye"
}
}
output "latest-bookworm-template" {
value = try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE, "")
}
output "latest-bullseye-template" {
value = try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE, "")
}
output "latest-bookworm-template-id" {
value = try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE_ID, "")
}
output "latest-bullseye-template-id" {
value = try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE_ID, "")
}
The script acquires the template VM ID (the script for the template name is similar, just without the second-to-last line):
#!/usr/bin/env bash
set -x
# Debug script with: echo "{\"PROXMOX_URL\": \"https://proxmox-server:8006/api2/json\", \"PROXMOX_USERNAME\": \"???????\", \"PROXMOX_PASSWORD\": \"???????\", \"TEMPLATE_TYPE\": \"bookworm\"}" | bash -x scripts/proxmox_get_latest_template.sh
# Convert JSON passed to script to bash variables
eval "$(jq -r '@sh "PROXMOX_URL=\(.PROXMOX_URL) PROXMOX_USERNAME=\(.PROXMOX_USERNAME) PROXMOX_PASSWORD=\(.PROXMOX_PASSWORD) TEMPLATE_TYPE=\(.TEMPLATE_TYPE)"')"
#echo "{\"PROXMOX_URL\": \"$PROXMOX_URL\", \"PROXMOX_USERNAME\": \"$PROXMOX_USERNAME\", \"PROXMOX_PASSWORD\": \"$PROXMOX_PASSWORD\", \"TEMPLATE_TYPE\": \"$TEMPLATE_TYPE\"}"
PROXMOX_TICKET=$(curl -s -k -d "username=${PROXMOX_USERNAME}&password=${PROXMOX_PASSWORD}" "${PROXMOX_URL}/access/ticket" | jq -r '.data.ticket')
LATEST_TEMPLATE=$(curl -s -k -b "PVEAuthCookie=${PROXMOX_TICKET}" "${PROXMOX_URL}/cluster/resources" | jq -r '.data | .[] | select(.template == 1) | select(.name|startswith("packer")).name' | grep "${TEMPLATE_TYPE}" | sort | tail -1)
LATEST_TEMPLATE_ID=$(curl -s -k -b "PVEAuthCookie=${PROXMOX_TICKET}" "${PROXMOX_URL}/cluster/resources" | jq -r '.data | .[] | select(.template == 1) | select(.name == '\"$LATEST_TEMPLATE\"') | .vmid')
echo "{\"LATEST_TEMPLATE\": \"${LATEST_TEMPLATE}\", \"LATEST_TEMPLATE_ID\": \"${LATEST_TEMPLATE_ID}\"}"
The corresponding variable definitions from within my individual VM deployment modules ("webserver-j01" in this case):
vm_packer_image = (
  length(regexall("latest-bullseye-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
    ? try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE, local.webserver-j.vm_config[0].vm_packer_image)
    : length(regexall("latest-bookworm-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
      ? try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE, local.webserver-j.vm_config[0].vm_packer_image)
      : local.webserver-j.vm_config[0].vm_packer_image
)

vm_packer_image_id = (
  length(regexall("latest-bullseye-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
    ? try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE_ID, local.webserver-j.vm_config[0].vm_packer_image_id)
    : length(regexall("latest-bookworm-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
      ? try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE_ID, local.webserver-j.vm_config[0].vm_packer_image_id)
      : local.webserver-j.vm_config[0].vm_packer_image_id
)
All of this makes it possible to create a new VM (or group of VMs) by just adding this part to either the global or local config array:
clone {
  datastore_id = var.vm_config.vm_root_disk_storage
  node_name    = var.vm_config.vm_target_node
  vm_id        = var.vm_packer_image_id
}

vm_packer_image = "latest-bookworm-template"
Bottom line: I basically let Terraform determine which template to use. I only need to specify whether it's going to be Debian 11 or Debian 12.
Ah yeah, I see. You could avoid the bash script by refactoring my example: fetch all the templates and sort them (see the sketch below). I'm using Packer too, but I clean up templates so that only one remains.
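For illustration, here is a rough sketch of that refactoring, assuming the template names embed a sortable timestamp (as in packer-bookworm-template-20231122-1728) and that the data source's vms attribute exposes name and vm_id; the resource names here are hypothetical:

data "proxmox_virtual_environment_vms" "bookworm_templates" {
  node_name = var.target_node
  tags      = ["template", "bookworm"]
}

locals {
  # Lexicographic sort works because the names end in a YYYYMMDD-HHMM stamp;
  # the last element after sorting is therefore the newest template.
  bookworm_names       = sort(data.proxmox_virtual_environment_vms.bookworm_templates.vms[*].name)
  latest_bookworm_name = element(local.bookworm_names, length(local.bookworm_names) - 1)

  # Map the newest name back to its VM ID for use in a clone block.
  latest_bookworm_id = one([
    for vm in data.proxmox_virtual_environment_vms.bookworm_templates.vms :
    vm.vm_id if vm.name == local.latest_bookworm_name
  ])
}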
Today I also encountered this problem, on Proxmox 8.1.3.
Stack trace from the terraform-provider-proxmox_v2.9.14 plugin:
panic: interface conversion: interface {} is string, not float64
goroutine 131 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0x14000903038, 0x14000903038?)
github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x3b34
github.com/Telmate/terraform-provider-proxmox/proxmox._resourceVmQemuRead(0x1400063ac00, {0x10483dee0?, 0x1400036d090})
github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1475 +0x324
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuRead(0x0?, {0x10483dee0?, 0x1400036d090?})
github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1446 +0x24
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x10497a9a0?, {0x10497a9a0?, 0x14000015ec0?}, 0xd?, {0x10483dee0?, 0x1400036d090?})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:712 +0x134
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).RefreshWithoutUpgrade(0x140002ba7e0, {0x10497a9a0, 0x14000015ec0}, 0x1400065e270, {0x10483dee0, 0x1400036d090})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:1015 +0x468
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadResource(0x14000426b88, {0x10497a9a0?, 0x14000015da0?}, 0x1400036b480)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:613 +0x400
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadResource(0x140000a23c0, {0x10497a9a0?, 0x14000014120?}, 0x1400032e060)
github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:748 +0x3e8
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler({0x104943a60?, 0x140000a23c0}, {0x10497a9a0, 0x14000014120}, 0x1400022c380, 0x0)
github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:349 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0x1400043a000, {0x10497e540, 0x14000199ba0}, 0x14000546000, 0x1400043cc60, 0x104e1b830, 0x0)
google.golang.org/grpc@v1.53.0/server.go:1336 +0xb7c
google.golang.org/grpc.(*Server).handleStream(0x1400043a000, {0x10497e540, 0x14000199ba0}, 0x14000546000, 0x0)
google.golang.org/grpc@v1.53.0/server.go:1704 +0x82c
google.golang.org/grpc.(*Server).serveStreams.func1.2()
google.golang.org/grpc@v1.53.0/server.go:965 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/grpc@v1.53.0/server.go:963 +0x290
Error: The terraform-provider-proxmox_v2.9.14 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
@batonogov The provider is broken in newer versions of Proxmox. You would need to fix the error yourself, switch to another provider, or use a fixed version from loeken, frostyfab, or me. This comment describes the problem and the solution. But this provider has multiple problems that need fixes.
@TheGameProfi Thank you, the fixed version of the provider is working!
required_providers {
  # proxmox = {
  #   source  = "telmate/proxmox"
  #   version = ">= 2.9.14"
  # }
  proxmox = {
    source  = "thegameprofi/proxmox"
    version = ">= 2.9.15"
  }
}
I found the problem by debugging the provider. The "memory" value in the QemuConfig returned from the Proxmox API is sometimes a float64 and sometimes a string.
I temporarily fixed the problem locally in the proxmox-api-go library, in the config_qemu.go file, function NewConfigQemuFromApi, by replacing this part of the code:
if _, isSet := vmConfig["memory"]; isSet {
	memory = vmConfig["memory"].(float64)
}
with:
if _, isSet := vmConfig["memory"]; isSet {
	switch vmConfig["memory"].(type) {
	case float64:
		memory = vmConfig["memory"].(float64)
	case string:
		memory2, err := strconv.ParseFloat(vmConfig["memory"].(string), 64)
		if err != nil {
			log.Fatal(err)
			return nil, err
		} else {
			memory = memory2
		}
	}
}
Could you please give explicit step-by-step instructions on how to do this? I apologize, but I am not really a programmer!
@R3-AnThRaX If you're not familiar with Go programming, the best option for now would be to choose a different provider, either from myself or loeken. We both fixed the problem in our own versions. Or you can change to the provider from bpg.
But if you want to do it yourself, you need to change the code in proxmox-api-go: there is a file called config_qemu.go, and on line 520 you need to change it. Then you need to point the provider at your edited API library locally, in the go.mod file, like this:
require "github.com/userName/otherModule" v0.0.0
replace "github.com/userName/otherModule" v0.0.0 => "local physical path to the otherModule"
Or you need to upload both to GitHub and then publish the provider to Terraform.
Awesome! Thank you for the reply!
Feedback on your instructions: the snippet below now also needs SDN permissions to create VMs.
pveum role add TerraformProv -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt"
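If I had to guess, the missing privilege is SDN.Use (please verify against the privilege list of your Proxmox VE version), which would make the role:

pveum role add TerraformProv -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate SDN.Use Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt"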
@mleone87 would you mind giving an update on whether you plan to publish a new release including the fix?
@pescobar my opinion on this is that it's a bit early to release a new version; a lot has been merged and some things related to this are still missing. I would like to make a generally working release this time.
thanks @mleone87 for the update!
@mleone87 any kind of ETA?
seconded! need a fix
https://github.com/Telmate/terraform-provider-proxmox/issues/863#issuecomment-1826285173 this solution works
@mleone87 please contact me to discuss how we can continue maintenance
OK, I've dug into this and can provide a summary of the current state of this issue and concerns raised in this thread.
Thank you to everyone who contributed to this ticket, be it with comments, code, or forks. :grin:
For anyone who wants to double check my work, here are the code references.
The code change was in this comment. We can see that it is now in the proxmox-api-go repo; it was committed in commit 0d37a47d49a45a440a5ba272d18ff1cc56a8a142.
Release v2.9.14 includes an older commit of the proxmox-api-go library. This commit does not have the fix.
@hestiahacker For me the question remains: as this repo is abandoned by Telmate, what should we do to properly maintain it? What do you think about that?
Oh, I have many thoughts about that.
I'm not sure what the impact of Microsoft not considering mleone87 the repo owner is. Does that mean they can't cut releases? Can't add other people with committer/maintainer/merge access?
Only having one person with merge authority seems risky. If they are busy, things come to a halt, and putting that pressure on one person to constantly be watching a repo is stressful and can easily lead to burnout.
I feel like most users will be abandoned if we move development to another fork. We don't have a way that I know of to reach out to everyone and inform them of the new fork, so I expect most people will just never be aware it exists. I also don't want to divide developer efforts. That's why I'd like to resist moving to another repo if possible.
Having said that, it seems like there are only about a half dozen of us working on the code, so hopefully we can all agree to focus on a single repo.
If the community abandons this repo, it'll lead to the provider eventually breaking, which has already happened to many people. If we make an issue that makes it clear to everyone who looks at the list of issues that there's a new repo, and maybe add something to the top of the README, that'd go a long way toward alleviating my concerns about abandoning people. It's not ideal, but it's the best we can do. I came here to report an issue when things broke, so presumably many (most?) people would land here at some point as well.
As for dividing efforts, we can just monitor the other repo(s) and export and apply patches. It's more work, but it's not like we would be missing out on other people's contributions (unless/until the forks diverge significantly).
I'd be strongly in favor of moving to an open source platform like GitLab for a new fork. I'd be willing to put in the effort of setting up CI/CD over there and to take care of any of the other GitLab-specific things. I realize moving to GitLab would mean an extra account for some people, but I think it'd be well worth it.
If we do fork it, I'd be willing to help with testing, responding to tickets, rebasing merge requests/resolving conflicts and so forth. I'm also willing to contribute code, but I don't actually know how to program in GoLang. I'm a hacker, so I can make it work, but until I learn the language, it's probably best for me to stick to patching bugs, not writing new code or making design decisions.
I'm still willing to contribute to a fork on GitHub, but I confess that I will be less enthusiastic about it.
I also don't have much experience with GoLang, but I will gladly help as much as I can. I'm not a fan of switching to GitLab, though, since many people don't know it and are mostly on GitHub.
Even if switching to another repo isn't ideal, I strongly think it is the only way to combine the efforts of multiple coders. If the repo gets forked to a new one, it would probably be best to create an organization/team so that multiple users can have access, with different levels of permissions.
@MaartendeKruijf
Oh, I have many thoughts about that.
There are 2 concerns about the longevity of this project on my mind.
My main concern is that if something happens to mleone87 or his account, this project and proxmox-api-go are dead in the water.
My second concern is that, now that Telmate is Viapath (their site redirects me there), we should assume that the org in which this repository resides is in full control of Viapath. We have never had contact with them (yes, I have mailed info@telmate.com in the past about these projects, without response); if they wanted to get rid of the org for whatever reason, this project would also be dead in the water.
When we fork, most users will feel abandoned. The only way we can contact them is by putting deprecation warnings everywhere as a final commit (this would break things); if we do this, we should only do it after we have been successfully forked for at least 6 months. The other thing we can do is put information in the README redirecting users to the new projects. Preferably we would transfer the repository to the new organization, as this would create redirects on GitHub; sadly, this isn't an option.
Personally, I'd prefer if, instead of forking the repository, we re-uploaded it with its whole history. The reason for this is that commits made in forks don't count in your profile, and personally, showing employers that I contribute to FOSS was a big reason for starting to help with this project.
With regards to moving: currently most of the code in proxmox-api-go was developed by me. With that said, my first priority is the longevity of this project, whether that be in the Telmate organization or another. So I'll move if the others are willing to as well.
Regarding the move to GitLab: preferably we stay on GitHub. Moving to GitLab would make it slightly more difficult for people to find the new project. Also, this would mean we would have to redo our CI workflow, which would lead to extra work. Trust me, there is more than enough work to do in both of these projects as it stands, proxmox-api-go and terraform-provider-proxmox.
As for dividing efforts: we would still have the same level of control over this repository as we have now, so we could just merge things back here. Sure, it would probably make the commit history of this project awful, but who cares; we wouldn't be doing development in this repository anymore.
This project is completely tied to proxmox-api-go so we should also fork this repository, as all logic for interacting with Proxmox resides in there. The other developers of proxmox-api-go should also be contacted. This would maybe add half a dozen developers.
P.S. maybe a separate issue for this discussion?
@hestiahacker hope you don't mind me stealing your format :)
I support you guys whatever your decision about maintaining/forking this project is, and I thank you for it. However, I want to add that I too looked at the bpg/proxmox provider (https://github.com/bpg/terraform-provider-proxmox) and it has MANY more features and a better design than this project. Switching my code over to the new provider took about 30 minutes. It is a very viable (and better) alternative to this provider IMHO.
Clearly it's up to you, but out of curiosity, is there any specific reason to duplicate effort and keep maintaining this one?
Not supporting the stable release of Proxmox (currently 7.x) makes me think that that provider is going to break frequently unless I always run the latest release of Proxmox. Like any software, running the latest release is usually the opposite of running the most stable release. For context, Proxmox 8 was released 2023-06 and Proxmox will support version 7 until 2024-07, so that's a year where Proxmox has my back but bpg is telling me I'm on my own.
If the bpg repo committed to supporting all supported versions of Proxmox, I'd be much more interested in investing the time to convert all my terraform, do the testing, and attempt to switch over.
I realize that this is an ironic comment, given that this repo's provider frequently breaks, but the difference is that if I go to the effort of testing and patching to make sure the provider remains compatible, I'm not at all confident that the maintainers of the bpg repo will accept my merge requests or even help me debug problems. I've been told many times that projects don't want to deal with compatibility with "old" versions. I find that reasonable if "old" is defined as "unsupported", but that's not the case here.
Also, requiring a more recent version of GoLang than is available in the latest stable version of Debian only reinforces this perception that they have bought into the "move fast and break things" mentality and don't care about stable releases.
If there's some alternative provider that would be a better fit for me, please do let me know. I haven't been able to find one. People don't seem to value long term support these days. :slightly_frowning_face:
@Tinyblargon
I feel the resistance, as making it easy for the community to find the fork would be beneficial. However, when I forked the project and created a patch a few weeks ago, it got 2,000+ downloads in about a week's time. So I'm confident that if we point people in the right direction, they can find it.
I've created a proposal on the other thread about forking. If it looks good, give it a thumbs up. If not, comment with any changes you want to see so we can get to a concrete set of next steps.
Hi! I checked that RC v3.0.1-rc1 works on Terraform 1.6.6 / 1.7.1 with Proxmox 8.1.3. The error is gone. Everything works well. Thanks to all!
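For anyone who wants to try the RC, a sketch of pinning it, assuming it is published under the same telmate/proxmox registry source:

terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}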
This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.
As previously mentioned, this is fixed in 3.0.1-rc1 and can be closed after an official 3.x release is cut and published.
This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.
This issue was closed because it has been inactive for 5 days since being marked as stale.
Stack trace from the terraform-provider-proxmox_v2.9.14 plugin:
panic: interface conversion: interface {} is string, not float64
goroutine 45 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc000518718, 0xc9d509?)
	github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x4605
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc0002e2c00, {0xb66f60?, 0xc00014bcc0})
	github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:972 +0x2c4d
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xdd7840?, {0xdd7840?, 0xc0003eaf00?}, 0xd?, {0xb66f60?, 0xc00014bcc0?})
	github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:695 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000208ee0, {0xdd7840, 0xc0003eaf00}, 0xc0004e0f70, 0xc0002e2a80, {0xb66f60, 0xc00014bcc0})
	github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:837 +0xa85
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc00045fe60, {0xdd7840?, 0xc0003eade0?}, 0xc000238640)
	github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000492000, {0xdd7840?, 0xc0003ea3c0?}, 0xc0002363f0)
	github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc6bc20?, 0xc000492000}, {0xdd7840, 0xc0003ea3c0}, 0xc000236070, 0x0)
	github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002d01e0, {0xddb420, 0xc0001029c0}, 0xc0005da480, 0xc000477170, 0x128f7a0, 0x0)
	google.golang.org/grpc@v1.53.0/server.go:1336 +0xd23
google.golang.org/grpc.(*Server).handleStream(0xc0002d01e0, {0xddb420, 0xc0001029c0}, 0xc0005da480, 0x0)
	google.golang.org/grpc@v1.53.0/server.go:1704 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
	google.golang.org/grpc@v1.53.0/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
	google.golang.org/grpc@v1.53.0/server.go:963 +0x28a
This is always indicative of a bug within the plugin. It would be immensely helpful if you could report the crash with the plugin's maintainers so that it can be fixed. The output above should help diagnose the issue.