vultr / terraform-provider-vultr

Terraform Vultr provider
https://www.terraform.io/docs/providers/vultr/
Mozilla Public License 2.0

[BUG] - K8s node pool plan does not change #282

Open · baznikin opened this issue 2 years ago

baznikin commented 2 years ago

Describe the bug: I changed the node type (plan); the provider showed a reasonable plan and applied it successfully, but nothing happened. The nodes remain the same.

Terraform will perform the following actions:

  # vultr_kubernetes.k8 will be updated in-place
  ~ resource "vultr_kubernetes" "k8" {
        id             = "84c02345-4b1a-47bc-b1d7-6a6552770a5e"
        # (10 unchanged attributes hidden)

      ~ node_pools {
            id            = "72781522-57fd-4d4f-8d38-5fef0b153c5f"
          ~ plan          = "vc2-2c-4gb" -> "vc2-4c-8gb"
            # (10 unchanged attributes hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

The next run shows the same plan, and there are no changes in the web interface either.
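
For context, a minimal configuration that would produce a plan like the one above might look as follows; the region, version, and labels are illustrative assumptions, not values taken from the report.

resource "vultr_kubernetes" "k8" {
  region  = "ewr"          # assumed region
  label   = "k8"
  version = "v1.24.4+1"    # assumed VKE version string

  node_pools {
    node_quantity = 3
    label         = "my-pool"
    plan          = "vc2-2c-4gb"   # editing this to "vc2-4c-8gb" yields the in-place update shown above
  }
}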

optik-aper commented 2 years ago

Thanks for the report. It looks like the intention here might be to replace the node pool since the API doesn't support changing the node pool type (documentation). I'll test that out and see if it's something we can support in the terraform provider at least.

optik-aper commented 2 years ago

After messing around with this a bit more, I think it's more complicated than I initially thought--especially for a cluster that has an existing workload. The general advice/best practice is to spin up a new, larger node pool, drain the workload from the old one, make sure the new node pool is working, and then remove the smaller, obsolete node pool. Doing all of that in the terraform provider isn't immediately possible, so it's going to have to be a partially manual process.

I hope that's helpful.

Nonetheless, it would be good to return an error or some other signal to the user informing them of this limitation, so I'll keep this open until that's in place.
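
A rough sketch of the manual migration described above, assuming the original pool stays in the vultr_kubernetes block and the replacement pool is added as a separate vultr_kubernetes_node_pool resource (resource names and sizes here are illustrative):

resource "vultr_kubernetes_node_pool" "larger" {
  cluster_id    = vultr_kubernetes.k8.id
  node_quantity = 3
  plan          = "vc2-4c-8gb"
  label         = "larger-pool"
}

# After applying, drain the old nodes outside of Terraform (kubectl cordon /
# kubectl drain), confirm the workload is healthy on the new pool, and only
# then remove or shrink the old node_pools block.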

baznikin commented 2 years ago

The DigitalOcean provider in the same situation marks the cluster for recreation. That is sane and expected behaviour. My first thought was "wow, they spin up a new cluster and migrate the old one to it node by node!...". Simple recreation is OK too, since a manual loss-free procedure exists.
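
Until the provider handles this itself, one way to opt into DigitalOcean-style recreation is to tie the cluster's lifecycle to the pool plan yourself. This is only a sketch, assuming Terraform >= 1.4 (for terraform_data) and the same illustrative region/version as above; recreating the cluster is destructive.

variable "node_pool_plan" {
  type    = string
  default = "vc2-4c-8gb"
}

# Tracks the pool plan solely so it can act as a replacement trigger.
resource "terraform_data" "pool_plan" {
  input = var.node_pool_plan
}

resource "vultr_kubernetes" "k8" {
  region  = "ewr"          # assumed region
  label   = "k8"
  version = "v1.24.4+1"    # assumed VKE version string

  node_pools {
    node_quantity = 3
    label         = "my-pool"
    plan          = var.node_pool_plan
  }

  lifecycle {
    # Destroy and recreate the whole cluster whenever the tracked plan changes.
    replace_triggered_by = [terraform_data.pool_plan]
  }
}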


baznikin commented 1 year ago

Ran into this issue again! The API really should return an error so the provider can show it to the user (something like "403: changing the primary node pool is not allowed").