Telmate / terraform-provider-proxmox

Terraform provider plugin for proxmox
MIT License

Error: The terraform-provider-proxmox_v2.9.14 plugin crashed! (Proxmox 8.0.4 latest update) #863

Closed AlexKastrytski closed 5 months ago

AlexKastrytski commented 1 year ago

Stack trace from the terraform-provider-proxmox_v2.9.14 plugin:

panic: interface conversion: interface {} is string, not float64

goroutine 45 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc000518718, 0xc9d509?)
        github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x4605
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc0002e2c00, {0xb66f60?, 0xc00014bcc0})
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:972 +0x2c4d
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xdd7840?, {0xdd7840?, 0xc0003eaf00?}, 0xd?, {0xb66f60?, 0xc00014bcc0?})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:695 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000208ee0, {0xdd7840, 0xc0003eaf00}, 0xc0004e0f70, 0xc0002e2a80, {0xb66f60, 0xc00014bcc0})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:837 +0xa85
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc00045fe60, {0xdd7840?, 0xc0003eade0?}, 0xc000238640)
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000492000, {0xdd7840?, 0xc0003ea3c0?}, 0xc0002363f0)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc6bc20?, 0xc000492000}, {0xdd7840, 0xc0003ea3c0}, 0xc000236070, 0x0)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002d01e0, {0xddb420, 0xc0001029c0}, 0xc0005da480, 0xc000477170, 0x128f7a0, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1336 +0xd23
google.golang.org/grpc.(*Server).handleStream(0xc0002d01e0, {0xddb420, 0xc0001029c0}, 0xc0005da480, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1704 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/grpc@v1.53.0/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/grpc@v1.53.0/server.go:963 +0x28a

This is always indicative of a bug within the plugin. It would be immensely helpful if you could report the crash with the plugin's maintainers so that it can be fixed. The output above should help diagnose the issue.

lazywebm commented 11 months ago

I've now done the bulk of the work for migrating from Telmate to bpg as well. I'm already able to deploy VMs again. The full deployment process only takes 12-15 s per VM with bpg; it used to be up to a minute with the Telmate provider.

Started working on it yesterday. The (mildly) tricky part so far was the data type switch from "clone" (VM name, string) in Telmate to "clone.vm_id" (VM ID, number) in bpg. I had to refactor some of my code regarding automatic template detection, which uses the corresponding output variables to choose the latest "correct" template on the target node.

I just wanted to deploy some infrastructure earlier this week like most people here, but better to rip off the band-aid now and switch to a supported provider for Proxmox. Reminds me of [1]. Such is work in IT.

@Telmate - thank you for all of your contributions.

[1] https://www.youtube.com/watch?v=AbSehcT19u0

fulljackz commented 11 months ago

> I've now done the bulk of the work for migrating from Telmate to bpg as well. Already able to deploy VMs again. The full deployment process only takes 12-15 s per VM with bpg; it used to be up to a minute with the Telmate provider.
>
> @Telmate - thank you for all of your contributions.
>
> [1] https://www.youtube.com/watch?v=AbSehcT19u0

Nothing to do with the subject, but: are you sure you are cloning the full disk and not a linked one? I was surprised that bpg was fast too, but it did not work as expected because of a boot loop; I had to do a full clone instead.

M0NsTeRRR commented 11 months ago

What you're asking for is a feature request, but here is a trick to keep using the VM name instead of a hardcoded ID. I have only one template, but you can add tags for more filtering, or process every VM found to get the right ID :)

data "proxmox_virtual_environment_vms" "template" {
  node_name = var.target_node
  tags      = ["template", "ubuntu"]
}

resource "proxmox_virtual_environment_vm" "vm" {
  clone {
      vm_id = data.proxmox_virtual_environment_vms.template.vms[0].vm_id
  }
}
lazywebm commented 11 months ago

> I've now done the bulk of the work for migrating from Telmate to bpg as well. Already able to deploy VMs again. The full deployment process only takes 12-15 s per VM with bpg; it used to be up to a minute with the Telmate provider. @Telmate - thank you for all of your contributions. [1] https://www.youtube.com/watch?v=AbSehcT19u0

> Nothing to do with the subject, but: are you sure you are cloning the full disk and not a linked one? I was surprised that bpg was fast too, but it did not work as expected because of a boot loop; I had to do a full clone instead.

Hi @fulljackz, yes, it's a full clone of the template, including the cloud-init disk.

lazywebm commented 11 months ago

What you asking is a feature request but here is the trick to continue using VM name instead of hardcoded id. I've only one template but you can add tags to do more filtering or process every VM found to get the right id :)

data "proxmox_virtual_environment_vms" "template" {
  node_name = var.target_node
  tags      = ["template", "ubuntu"]
}

resource "proxmox_virtual_environment_vm" "vm" {
  clone {
      vm_id = data.proxmox_virtual_environment_vms.template.vms[0].vm_id
  }
}

I see. My approach is different. I have two "groups" of templates (bullseye, bookworm) and want to make sure I'm always using the latest template based on the name, which contains the creation date, like so:

latest-bookworm-template = "packer-bookworm-template-20231122-1728"
latest-bullseye-template = "packer-bullseye-template-20231122-1728"
latest-bookworm-template-id = "101"
latest-bullseye-template-id = "102"

Images are built from a Jenkins Packer pipeline.

The terraform data sources:

data "external" "latest_bookworm_template" {
  program = ["bash", "../../../scripts/proxmox_get_latest_template.sh"]

  query = {
    PROXMOX_URL      = var.proxmox_url
    PROXMOX_USERNAME = var.proxmox_username
    PROXMOX_PASSWORD = var.proxmox_password
    TEMPLATE_TYPE    = "bookworm"
  }
}

data "external" "latest_bullseye_template" {
  program = ["bash", "../../../scripts/proxmox_get_latest_template.sh"]

  query = {
    PROXMOX_URL      = var.proxmox_url
    PROXMOX_USERNAME = var.proxmox_username
    PROXMOX_PASSWORD = var.proxmox_password
    TEMPLATE_TYPE    = "bullseye"
  }
}

output "latest-bookworm-template" {
  value = try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE, "")
}

output "latest-bullseye-template" {
  value = try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE, "")
}

output "latest-bookworm-template-id" {
  value = try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE_ID, "")
}

output "latest-bullseye-template-id" {
  value = try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE_ID, "")
}

The scripts acquire the template VM ID (the script for the template name is similar, just without the second-to-last line):

#!/usr/bin/env bash

set -x

# Debug script with: echo "{\"PROXMOX_URL\": \"https://proxmox-server:8006/api2/json\", \"PROXMOX_USERNAME\": \"???????\", \"PROXMOX_PASSWORD\": \"???????\", \"TEMPLATE_TYPE\": \"bookworm\"}" | bash -x scripts/proxmox_get_latest_template.sh

# Convert JSON passed to script to bash variables
eval "$(jq -r '@sh "PROXMOX_URL=\(.PROXMOX_URL) PROXMOX_USERNAME=\(.PROXMOX_USERNAME) PROXMOX_PASSWORD=\(.PROXMOX_PASSWORD) TEMPLATE_TYPE=\(.TEMPLATE_TYPE)"')"
#echo "{\"PROXMOX_URL\": \"$PROXMOX_URL\", \"PROXMOX_USERNAME\": \"$PROXMOX_USERNAME\", \"PROXMOX_PASSWORD\": \"$PROXMOX_PASSWORD\", \"TEMPLATE_TYPE\": \"$TEMPLATE_TYPE\"}"

PROXMOX_TICKET=$(curl -s -k -d "username=${PROXMOX_USERNAME}&password=${PROXMOX_PASSWORD}" ${PROXMOX_URL}/access/ticket | jq -r '.data.ticket')
LATEST_TEMPLATE=$(curl -s -k -b "PVEAuthCookie=${PROXMOX_TICKET}" ${PROXMOX_URL}/cluster/resources | jq -r '.data | .[] | select(.template == 1) | select(.name|startswith("packer")).name' | grep ${TEMPLATE_TYPE} | sort | tail -1)
LATEST_TEMPLATE_ID=$(curl -s -k -b "PVEAuthCookie=${PROXMOX_TICKET}" ${PROXMOX_URL}/cluster/resources | jq -r '.data | .[] | select(.template == 1) | select(.name == '\"$LATEST_TEMPLATE\"') | .vmid')
echo "{\"LATEST_TEMPLATE\": \"${LATEST_TEMPLATE}\", \"LATEST_TEMPLATE_ID\": \"${LATEST_TEMPLATE_ID}\"}"
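For reference, the jq selection pipeline above (templates only, "packer" prefix, type match, lexicographic sort, take the last entry) can be mirrored offline. This is an illustrative Go sketch with invented sample data, not part of the deployment itself:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// resource mirrors the fields of /cluster/resources that the script uses.
type resource struct {
	Name     string
	Template int
	VMID     int
}

// latestTemplate mirrors the jq pipeline: keep templates only, names that
// start with "packer" and contain the requested type, sort lexicographically
// (the embedded date suffix makes the last entry the newest), then look up
// the matching VM ID.
func latestTemplate(rs []resource, templateType string) (string, int) {
	var names []string
	for _, r := range rs {
		if r.Template == 1 && strings.HasPrefix(r.Name, "packer") && strings.Contains(r.Name, templateType) {
			names = append(names, r.Name)
		}
	}
	if len(names) == 0 {
		return "", 0
	}
	sort.Strings(names)
	latest := names[len(names)-1]
	for _, r := range rs {
		if r.Name == latest {
			return latest, r.VMID
		}
	}
	return latest, 0
}

func main() {
	// Invented sample data, shaped like the API response the script queries.
	sample := []resource{
		{"packer-bookworm-template-20231101-0900", 1, 100},
		{"packer-bookworm-template-20231122-1728", 1, 101},
		{"packer-bullseye-template-20231122-1728", 1, 102},
		{"webserver-j01", 0, 200},
	}
	name, vmid := latestTemplate(sample, "bookworm")
	fmt.Printf("{\"LATEST_TEMPLATE\": %q, \"LATEST_TEMPLATE_ID\": \"%d\"}\n", name, vmid)
}
```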

The corresponding variable definitions from within my individual VM deployment modules ("webserver-j01" in this case):

vm_packer_image = (
  length(regexall("latest-bullseye-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
  ? try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE, local.webserver-j.vm_config[0].vm_packer_image)
  : length(regexall("latest-bookworm-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
  ? try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE, local.webserver-j.vm_config[0].vm_packer_image)
  : local.webserver-j.vm_config[0].vm_packer_image
)
vm_packer_image_id = (
  length(regexall("latest-bullseye-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
  ? try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE_ID, local.webserver-j.vm_config[0].vm_packer_image_id)
  : length(regexall("latest-bookworm-template", local.webserver-j.vm_config[0].vm_packer_image)) > 0
  ? try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE_ID, local.webserver-j.vm_config[0].vm_packer_image_id)
  : local.webserver-j.vm_config[0].vm_packer_image_id
)

All of this makes it possible to create a new VM (or a group of VMs) by just adding this part to either the global or local config array:

  clone {
    datastore_id = var.vm_config.vm_root_disk_storage
    node_name    = var.vm_config.vm_target_node
    vm_id        = var.vm_packer_image_id
  }

vm_packer_image = "latest-bookworm-template"

Bottom line: I basically let Terraform determine which template to use. I only need to specify whether it's going to be Debian 11 or Debian 12.
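The two nested ternaries above could hypothetically be flattened into a lookup map; the `locals` names below are invented for illustration and assume the same data sources defined earlier:

```hcl
locals {
  # Hypothetical: map the symbolic image names to the external data source results.
  template_lookup = {
    "latest-bullseye-template" = {
      name = try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE, "")
      id   = try(data.external.latest_bullseye_template.result.LATEST_TEMPLATE_ID, "")
    }
    "latest-bookworm-template" = {
      name = try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE, "")
      id   = try(data.external.latest_bookworm_template.result.LATEST_TEMPLATE_ID, "")
    }
  }

  requested = local.webserver-j.vm_config[0].vm_packer_image

  # Fall back to the literal config values when the requested name is not symbolic.
  vm_packer_image    = try(local.template_lookup[local.requested].name, local.requested)
  vm_packer_image_id = try(local.template_lookup[local.requested].id, local.webserver-j.vm_config[0].vm_packer_image_id)
}
```

This keeps the "which template family" decision in one place, at the cost of one extra level of indirection.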

M0NsTeRRR commented 11 months ago

Ah yeah, I see. You could avoid the bash script by refactoring my example to fetch all templates and sort them. I'm using Packer too, but I clean up templates so that only one remains.
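Roughly sketched in Terraform (assumption on my part: the `vms` entries expose a `name` attribute in addition to the `vm_id` used earlier in this thread):

```hcl
data "proxmox_virtual_environment_vms" "bookworm_templates" {
  node_name = var.target_node
  tags      = ["template", "bookworm"]
}

locals {
  # Sort names lexicographically; a date suffix in the name makes the
  # last entry the newest template.
  sorted_names = sort(data.proxmox_virtual_environment_vms.bookworm_templates.vms[*].name)
  latest_name  = local.sorted_names[length(local.sorted_names) - 1]
  latest_vm_id = one([
    for vm in data.proxmox_virtual_environment_vms.bookworm_templates.vms :
    vm.vm_id if vm.name == local.latest_name
  ])
}

resource "proxmox_virtual_environment_vm" "vm" {
  clone {
    vm_id = local.latest_vm_id
  }
}
```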

batonogov commented 11 months ago

Today I also encountered this problem. Proxmox 8.1.3.

Stack trace from the terraform-provider-proxmox_v2.9.14 plugin:

panic: interface conversion: interface {} is string, not float64

goroutine 131 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0x14000903038, 0x14000903038?)
        github.com/Telmate/proxmox-api-go@v0.0.0-20230319185744-e7cde7198cdf/proxmox/config_qemu.go:584 +0x3b34
github.com/Telmate/terraform-provider-proxmox/proxmox._resourceVmQemuRead(0x1400063ac00, {0x10483dee0?, 0x1400036d090})
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1475 +0x324
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuRead(0x0?, {0x10483dee0?, 0x1400036d090?})
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1446 +0x24
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x10497a9a0?, {0x10497a9a0?, 0x14000015ec0?}, 0xd?, {0x10483dee0?, 0x1400036d090?})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:712 +0x134
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).RefreshWithoutUpgrade(0x140002ba7e0, {0x10497a9a0, 0x14000015ec0}, 0x1400065e270, {0x10483dee0, 0x1400036d090})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/resource.go:1015 +0x468
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadResource(0x14000426b88, {0x10497a9a0?, 0x14000015da0?}, 0x1400036b480)
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/schema/grpc_provider.go:613 +0x400
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadResource(0x140000a23c0, {0x10497a9a0?, 0x14000014120?}, 0x1400032e060)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:748 +0x3e8
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler({0x104943a60?, 0x140000a23c0}, {0x10497a9a0, 0x14000014120}, 0x1400022c380, 0x0)
        github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:349 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0x1400043a000, {0x10497e540, 0x14000199ba0}, 0x14000546000, 0x1400043cc60, 0x104e1b830, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1336 +0xb7c
google.golang.org/grpc.(*Server).handleStream(0x1400043a000, {0x10497e540, 0x14000199ba0}, 0x14000546000, 0x0)
        google.golang.org/grpc@v1.53.0/server.go:1704 +0x82c
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/grpc@v1.53.0/server.go:965 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/grpc@v1.53.0/server.go:963 +0x290

Error: The terraform-provider-proxmox_v2.9.14 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
TheGameProfi commented 11 months ago

@batonogov The provider is broken on newer versions of Proxmox. You would need to fix the error yourself, switch to another provider, or use a fixed version from loeken, frostyfab, or me. This comment describes the problem and the solution. But this provider has multiple problems that need fixes.

batonogov commented 11 months ago

@TheGameProfi Thank you, the fixed version of the provider is working!

terraform {
  required_providers {
    # proxmox = {
    #   source  = "telmate/proxmox"
    #   version = ">= 2.9.14"
    # }
    proxmox = {
      source  = "thegameprofi/proxmox"
      version = ">= 2.9.15"
    }
  }
}
R3-AnThRaX commented 11 months ago

I found the problem by debugging the provider. Some "memory" values in QemuConfig returned from the Proxmox API are sometimes float64 and sometimes string.

I temporarily fixed the problem locally in the proxmox-api-go library, in the config_qemu.go file, function NewConfigQemuFromApi, by replacing this part of the code:

  if _, isSet := vmConfig["memory"]; isSet {
      memory = vmConfig["memory"].(float64)
  }

by

  if _, isSet := vmConfig["memory"]; isSet {
      switch vmConfig["memory"].(type) {
      case float64:
          memory = vmConfig["memory"].(float64)
      case string:
          memory2, err := strconv.ParseFloat(vmConfig["memory"].(string), 64)
          if err != nil {
              log.Println(err)
              return nil, err
          }
          memory = memory2
      }
  }
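The same defensive pattern can be tried in isolation. This is a standalone sketch; the `parseMemory` helper is invented for illustration and is not the provider's actual code:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseMemory mirrors the defensive type switch above: the Proxmox API may
// return the "memory" field as either a JSON number (decoded to float64)
// or a string. A direct .(float64) assertion panics on the string case,
// which is exactly the crash reported in this issue.
func parseMemory(raw interface{}) (float64, error) {
	switch v := raw.(type) {
	case float64:
		return v, nil
	case string:
		return strconv.ParseFloat(v, 64)
	default:
		return 0, fmt.Errorf("unexpected type %T for memory", raw)
	}
}

func main() {
	m1, _ := parseMemory(float64(2048)) // JSON number, as on older Proxmox
	m2, _ := parseMemory("2048")        // string, as observed on Proxmox 8.x
	fmt.Println(m1, m2)
}
```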

Could you please give explicit step-by-step instructions on how to do this? I apologize, but I am not really a programmer!

TheGameProfi commented 11 months ago

@R3-AnThRaX If you're not familiar with Go programming, the best option for now would be to choose a different provider, either from myself or loeken. We both fixed the problem in our own versions. Or you can change to the provider from bpg.

But if you want to do it yourself, you need to change the code in "proxmox-api-go"; there is a file called config_qemu.go, and on line 520 you need to change it. Then you need to point the provider at your edited API library locally, in the go.mod file, like this:

require github.com/userName/otherModule v0.0.0

replace github.com/userName/otherModule v0.0.0 => /local/physical/path/to/otherModule

Or you need to upload both to GitHub and then publish the provider to the Terraform Registry.
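Alternatively, instead of publishing, Terraform's CLI configuration file can point at a locally built provider binary via `dev_overrides`; the path below is a placeholder for wherever you build the binary:

```hcl
# ~/.terraformrc — dev_overrides makes terraform plan/apply use the local
# binary directly, bypassing the registry and the dependency lock file.
provider_installation {
  dev_overrides {
    "telmate/proxmox" = "/path/to/terraform-provider-proxmox/bin"
  }

  # All other providers are still installed normally.
  direct {}
}
```

This avoids re-publishing during development, though `terraform init` will still warn that the override is in effect.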

R3-AnThRaX commented 11 months ago

> @R3-AnThRaX If you're not familiar with Go programming, the best option for now would be to choose a different provider, either from myself or loeken. We both fixed the problem in our own versions. Or you can change to the provider from bpg.
>
> But if you want to do it yourself, you need to change the code in "proxmox-api-go"; there is a file called config_qemu.go, and on line 520 you need to change it. Then you need to point the provider at your edited API library locally, in the go.mod file, like this:
>
> require github.com/userName/otherModule v0.0.0
> replace github.com/userName/otherModule v0.0.0 => /local/physical/path/to/otherModule
>
> Or you need to upload both to GitHub and then publish the provider to the Terraform Registry.

Awesome! Thank you for the reply!

vinny147-BIG commented 11 months ago

> @R3-AnThRaX If you're not familiar with Go programming, the best option for now would be to choose a different provider, either from myself or loeken. We both fixed the problem in our own versions. Or you can change to the provider from bpg. But if you want to do it yourself, you need to change the code in "proxmox-api-go"; there is a file called config_qemu.go, and on line 520 you need to change it. Then you need to point the provider at your edited API library locally, in the go.mod file, like this:
>
> require github.com/userName/otherModule v0.0.0
> replace github.com/userName/otherModule v0.0.0 => /local/physical/path/to/otherModule
>
> Or you need to upload both to GitHub and then publish the provider to the Terraform Registry.
>
> Awesome! Thank you for the reply!

Feedback on your instructions: the snippet below now also needs SDN permissions to create VMs.

pveum role add TerraformProv -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt"

frostyfab commented 11 months ago

I just want to leave feedback here that @mleone87 seems to be actively maintaining this provider, and we made the false assumption that it's dead!

My fix for this bug was already merged into the related project.

glhf everyone

pescobar commented 11 months ago

@mleone87 would you mind giving an update about if you plan to publish a new release including the fix?

mleone87 commented 11 months ago

@pescobar My opinion on this is that it's a bit early to release a new version; a lot has been merged, and something is still missing regarding this. I would rather make a generally working release this time.

pescobar commented 11 months ago

thanks @mleone87 for the update!

ben29 commented 11 months ago

@mleone87 any kind of ETA?

kuzmik commented 11 months ago

seconded! need a fix

emdnaia commented 11 months ago

> seconded! need a fix

This solution works: https://github.com/Telmate/terraform-provider-proxmox/issues/863#issuecomment-1826285173

MaartendeKruijf commented 11 months ago

@mleone87 please contact me to discuss how we can continue maintenance

hestiahacker commented 10 months ago

OK, I've dug into this and can provide a summary of the current state of this issue and concerns raised in this thread.

  1. The Telmate repos are maintained. I submitted a merge request two days ago, and it was merged yesterday.
  2. This issue is fixed in the latest code (commit 5e1e8bb9eced3c813e788d27168617f99dc3e8c0 at time of writing).
  3. This is not fixed in v2.9.14 (the latest release at time of writing).
  4. Compiling the latest code will serve as a workaround.
  5. Switching to v2.9.15 of thegameprofi's fork as described here will also serve as a workaround. The code from the fork seems to have been pulled into the Telmate repo, but a 2.9.15 release has not been cut yet.

Thank you to everyone who contributed to this ticket, be it with comments, code, or forks. :grin:

Code references

For anyone who wants to double check my work, here are the code references.

Fixed in latest code

The code change was in this comment. We can see that it is now in the proxmox-api-go repo; it was committed in commit 0d37a47d49a45a440a5ba272d18ff1cc56a8a142.

Not fixed in v2.9.14

Release v2.9.14 pins an older commit of the proxmox-api-go library; that commit does not have the fix.

MaartendeKruijf commented 10 months ago

@hestiahacker

  1. As far as I understand, only @mleone87 is able to merge, but he is NOT a maintainer per GitHub rights.
  2. Nice addition
  3. I've tried to contact @mleone87 and @Tinyblargon to create an official fork and add them as maintainers; so far, no response to that proposal.
  4. One could use different forks to allow for that; see https://registry.terraform.io/providers/TheGameProfi/proxmox/latest or https://registry.terraform.io/providers/MaartendeKruijf/proxmox/latest
  5. As of now, the code still does not work for the VM disks, which makes it hardly usable.

For me the question remains: as this repo is abandoned by Telmate, what should we do to properly maintain it? What do you think about that, @hestiahacker?

hestiahacker commented 10 months ago

Oh, I have many thoughts about that.

Concerns

I'm not sure what the impact of Microsoft not considering mleone87 the repo owner is. Does that mean he can't cut releases? Can't add other people with committer/maintainer/merge access?

Only having one person with merge authority seems risky. If they are busy, things come to a halt, and putting that pressure on one person to constantly be watching a repo is stressful and can easily lead to burnout.

Resistance to forking

I feel like most users will be abandoned if we move development to another fork. We don't have a way that I know of to reach out to everyone and inform them of the new fork, so I expect most people will just never be aware it exists. I also don't want to divide developer efforts. That's why I'd like to resist moving to another repo if possible.

Having said that, it seems like there are only about a half dozen of us working on the code, so hopefully we can all agree to focus on a single repo.

Reasons I want to have a new fork

If the community abandons this repo, the provider will eventually break, which has already happened to many people. If we create an issue that makes it clear to everyone who looks at the issue list that there's a new repo, and maybe add something to the top of the README, that would go a long way toward alleviating my concerns about abandoning people. It's not ideal, but it's the best we can do. I came here to report an issue when things broke, so presumably many (most?) people would land here at some point as well.

As for dividing efforts, we can just monitor the other repo(s) and export and apply patches. It's more work, but it's not like we would be missing out on other people's contributions (unless/until the forks diverge significantly).

I'd be strongly in favor of moving to an open-source platform like GitLab for a new fork. I'd be willing to put in the effort of setting up CI/CD over there and take care of any of the other GitLab-specific things. I realize moving to GitLab would mean an extra account for some people, but I think it'd be well worth it.

Contributions I can offer

If we do fork it, I'd be willing to help with testing, responding to tickets, rebasing merge requests/resolving conflicts and so forth. I'm also willing to contribute code, but I don't actually know how to program in GoLang. I'm a hacker, so I can make it work, but until I learn the language, it's probably best for me to stick to patching bugs, not writing new code or making design decisions.

I'm still willing to contribute to a fork on GitHub, but I confess that I will be less enthusiastic about it.

TheGameProfi commented 10 months ago

I also don't have much experience with GoLang, but I will gladly help as much as I can. I'm not a fan of switching to GitLab, though, since many people don't know it and are mostly on GitHub.

Even if switching to another repo isn't ideal, I strongly think it is the only way to combine the efforts of multiple coders. If the repo is forked to a new one, it would probably be best to create an organization/team so that multiple users have access, of different kinds.

Tinyblargon commented 10 months ago

@MaartendeKruijf

> Oh, I have many thoughts about that.

Concerns

There are two concerns on my mind about the longevity of this project.

My main concern is that if something happens to mleone87 or his account, this project and proxmox-api-go are dead in the water.

My second concern is that Telmate is now Viapath (their site redirects me there), so we should assume that the org in which this repository resides is in full control of Viapath. We have never had contact with them (yes, I have mailed info@telmate.com in the past about these projects, without response); if they wanted to get rid of the org for whatever reason, this project would also be dead in the water.

Resistance to forking

When we fork, most users will feel abandoned. The only way we can contact them is by putting deprecation warnings everywhere as a final commit (this would break things); if we do this, we should only do it after we have successfully forked for at least 6 months. The other thing we can do is put information in the README redirecting users to the new projects. Preferably, we would transfer the repository to the new organization, as this would create redirects on GitHub; sadly, this isn't an option.

Reasons I want to have a new fork

Personally, I'd prefer that instead of forking the repository we re-upload it with its whole history. The reason is that commits made in forks don't count toward your profile, and showing employers that I contribute to FOSS was a big reason for me to start helping with this project.

With regards to moving: currently, most of the code in proxmox-api-go was developed by me. That said, my first priority is the longevity of this project, whether that be in the Telmate organization or another. So I'll move if the others are willing to as well.

Regarding the move to GitLab: preferably, we stay on GitHub. Moving to GitLab would make it slightly more difficult for people to find the new project. It would also mean we'd have to redo our CI workflow, which would lead to extra work. Trust me, there is more than enough work to do in both of these projects as it stands, proxmox-api-go and terraform-provider-proxmox.

As for dividing efforts: we would still have the same level of control over this repository as we have now, so we could just merge things back here. Sure, it would probably make the commit history of this project awful, but who cares; we wouldn't be doing development in this repository anymore.

Clarifications

This project is completely tied to proxmox-api-go, so we should also fork that repository, as all the logic for interacting with Proxmox resides there. The other developers of proxmox-api-go should also be contacted; this would maybe add half a dozen developers.

P.S. maybe a separate issue for this discussion?

@hestiahacker hope you don't mind me stealing your format :)

vftaylor commented 10 months ago

I support you guys whatever your decision about maintaining/forking this project is, and I thank you for it. However, I want to add that I too looked at the bpg/proxmox provider (https://github.com/bpg/terraform-provider-proxmox), and it has MANY more features and a better design than this project. Switching my code over to the new provider took about 30 minutes. It is a very viable (and better) alternative to this provider, IMHO.

Clearly it's up to you, but out of curiosity: is there any specific reason to duplicate effort and keep maintaining this one?

hestiahacker commented 10 months ago

Not supporting the stable release of Proxmox (currently 7.x) makes me think that that provider is going to break frequently unless I always run the latest release of Proxmox. Like any software, running the latest release is usually the opposite of running the most stable release. For context, Proxmox 8 was released 2023-06 and Proxmox will support version 7 until 2024-07, so that's a year where Proxmox has my back but bpg is telling me I'm on my own.

If the bpg repo committed to supporting all supported versions of Proxmox, I'd be much more interested in investing the time to convert all my terraform, do the testing, and attempt to switch over.

I realize that this is an ironic comment, given that this repo's provider frequently breaks, but the difference is that if I go to the effort of testing and patching to make sure the provider remains compatible, I'm not at all confident that the maintainers of the bpg repo will accept my merge requests or even help me debug problems. I've been told many times that projects don't want to deal with compatibility with "old" versions. I find that reasonable if "old" is defined as "unsupported", but that's not the case here.

hestiahacker commented 10 months ago

Also, requiring a more recent version of GoLang than is available in the latest stable version of Debian only reinforces this perception that they have bought into the "move fast and break things" mentality and don't care about stable releases.

If there's some alternative provider that would be a better fit for me, please do let me know; I haven't been able to find one. People don't seem to value long-term support these days. :slightly_frowning_face:

MaartendeKruijf commented 10 months ago

@Tinyblargon

Concerns

Resistance to forking

I feel the resistance, and making it easy for the community to find the fork would be beneficial. That said, when I forked the project and created a patch a few weeks ago, it saw 2,000+ downloads in about a week's time. So I'm confident that if we point people in the right direction, they will find it.

Reasons I want to have a new fork

Way forward

hestiahacker commented 10 months ago

I've created a proposal on the other thread about forking. If it looks good, give it a thumbs up. If not, comment with any changes you want to see so we can get to a concrete set of next steps.

den-patrakeev commented 9 months ago

Hi! I checked RC v3.0.1-rc1 on Terraform 1.6.6 / 1.7.1 with Proxmox 8.1.3. The error is gone. Everything works well. Thanks to all!

github-actions[bot] commented 7 months ago

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

hestiahacker commented 7 months ago

As previously mentioned, this is fixed in 3.0.1-rc1 and can be closed after an official 3.x release is cut and published.

github-actions[bot] commented 5 months ago

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

github-actions[bot] commented 5 months ago

This issue was closed because it has been inactive for 5 days since being marked as stale.