vultr / terraform-provider-vultr

Terraform Vultr provider
https://www.terraform.io/docs/providers/vultr/
Mozilla Public License 2.0

[Feature] - First VKE node pool should be deletable #227

Closed · jtackaberry closed this 2 years ago

jtackaberry commented 2 years ago

From the console, when I add a second node pool, I'm able to delete the first. However, when the cluster is created via Terraform, the first node pool is considered special and, as near as I can tell, can't be deleted.

The main use case is migrating the cluster to a new instance type. Normally you'd add a second node pool with the desired instance type, migrate the workloads over, then delete the original node pool. Based on how the provider is designed, I don't see a way to accomplish this with Terraform. (Also, updating the instance type in place is accepted, i.e. tf apply succeeds, but doesn't actually do anything: a subsequent tf apply still wants to make the same change. The API should reject this if it's not supported. In any case, the node pool shuffle described above should be supported.)
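To make the shuffle concrete, here's roughly what it looks like in HCL. This is a sketch: the region, version, plans, and labels are hypothetical, and the attribute names follow the documented vultr_kubernetes and vultr_kubernetes_node_pools schemas as I understand them.

```hcl
# Today: the first node pool must be embedded in vultr_kubernetes
# and cannot be deleted without destroying the cluster.
resource "vultr_kubernetes" "cluster" {
  region  = "ewr"       # hypothetical
  label   = "my-cluster"
  version = "v1.23.5+1" # hypothetical

  node_pools {
    node_quantity = 3
    plan          = "vc2-2c-4gb" # original instance type
    label         = "pool-old"
  }
}

# Step 1: add a second pool with the desired instance type.
resource "vultr_kubernetes_node_pools" "pool_new" {
  cluster_id    = vultr_kubernetes.cluster.id
  node_quantity = 3
  plan          = "vc2-4c-8gb" # desired instance type
  label         = "pool-new"
}

# Step 2 is draining workloads onto pool-new; step 3 would be removing
# the embedded node_pools block above, which the provider can't express.
```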

Ideally, the node_pools block in the vultr_kubernetes resource would be deprecated, and all node pools instead defined by vultr_kubernetes_node_pools, as sketched below. This puts all node pools on equal footing: cattle, not pets.
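Under that proposal, the same cluster might be written with no embedded pool at all (again a sketch with hypothetical values):

```hcl
# Proposed: vultr_kubernetes defines only the control plane...
resource "vultr_kubernetes" "cluster" {
  region  = "ewr"
  label   = "my-cluster"
  version = "v1.23.5+1"
}

# ...and every pool, including the "first", is an ordinary
# vultr_kubernetes_node_pools resource that can be added or
# deleted independently.
resource "vultr_kubernetes_node_pools" "pool_a" {
  cluster_id    = vultr_kubernetes.cluster.id
  node_quantity = 3
  plan          = "vc2-4c-8gb"
  label         = "pool-a"
}
```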

However, the implication of this design is that one could have zero node pools. Personally I think that'd be great: it's intuitive, and FWIW EKS can do it. That said, I do appreciate the business difference: with EKS you pay for the control plane, so Amazon still makes money on a cluster with zero node groups, whereas that's not the case for VKE.

If this is a concern, I suppose the current API behavior already prevents deleting the last node pool anyway, so Terraform would simply fail if no node pools were defined, and that'd be a perfectly fine second prize.

ddymko commented 2 years ago

@jtackaberry

There are certain limitations in Terraform that led to this design, mostly around not being able to nest map data very well. This is why node pools are broken up, with one being required while the others are separate resources (which was something I wanted to avoid, but there is no great way to handle it).

That being said, I'll look into how we can update the provider to drop the hard requirement on the main Kubernetes resource (though we will still require at least one node pool per VKE cluster).