This is related to https://github.com/digitalocean/terraform-provider-digitalocean/issues/424, because Terraform loses the provider connection data when you change the node pool size. The plan wants to destroy and recreate the cluster, so the `kubernetes` provider configuration derived from the `digitalocean_kubernetes_cluster` resource becomes unknown during the Terraform run (its values are only known after apply), resulting in an error.
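For context, a minimal sketch of the provider wiring this refers to (the resource name `main` is a placeholder; the attribute paths follow the DigitalOcean provider's documented pattern for connecting the `kubernetes` provider to a managed cluster):

```hcl
# The kubernetes provider is configured from attributes of the cluster
# resource. When a plan marks the cluster for replacement, all of these
# values become "(known after apply)" and the provider cannot connect.
provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.main.endpoint
  token = digitalocean_kubernetes_cluster.main.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
  )
}
```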
## Bug Report
### Describe the bug

Whenever changes are made to the (default) node pool autoscaling parameters within the `digitalocean_kubernetes_cluster` resource, Terraform loses all information about resources that were created within the cluster. This breaks the Terraform state to the point where everything needs to be destroyed and re-applied to fix it.

Let's say I've created a `digitalocean_kubernetes_cluster` resource and, along with that, added several `kubernetes` resources within the same cluster using Terraform. If I then change the autoscaling parameters of the default node pool within the `digitalocean_kubernetes_cluster` resource, Terraform loses all information about the `kubernetes` resources that were created in the cluster and tries to apply them again, resulting in numerous "already exists" errors: the resources are still present on the cluster, but Terraform has lost track of them. (NOTE: the autoscaling changes themselves are applied correctly.)
### Affected Resource(s)

- `digitalocean_kubernetes_cluster`
### Expected Behavior

Node pool autoscaling changes should be applied without causing Terraform to lose information about other resources created within the cluster.
### Actual Behavior

All the `kubernetes` resources created within the cluster are lost from the Terraform state.

### Steps to Reproduce

1. Create a `digitalocean_kubernetes_cluster` resource
2. Create `kubernetes` resources within the cluster using the `kubernetes` provider
3. Change the `digitalocean_kubernetes_cluster` node pool autoscaling parameters & apply changes

### Terraform Configuration Files
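The original configuration files were not included; below is a minimal sketch that matches the scenario described above. The names, region, node size, and Kubernetes version are illustrative assumptions, not the reporter's actual values:

```hcl
terraform {
  required_providers {
    digitalocean = { source = "digitalocean/digitalocean", version = "~> 2.17" }
    kubernetes   = { source = "hashicorp/kubernetes", version = "~> 2.8" }
  }
}

resource "digitalocean_kubernetes_cluster" "main" {
  name    = "example"     # illustrative
  region  = "fra1"        # illustrative
  version = "1.22.7-do.0" # illustrative

  node_pool {
    name       = "default"
    size       = "s-2vcpu-4gb"
    auto_scale = true
    min_nodes  = 1 # changing min_nodes/max_nodes here triggers the bug
    max_nodes  = 3
  }
}

# Same wiring as the provider sketch earlier in this issue.
provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.main.endpoint
  token = digitalocean_kubernetes_cluster.main.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
  )
}

# A resource created inside the cluster; after the autoscaling change it
# disappears from state, and re-applying fails with "already exists".
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}
```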
### Additional context

Using `terraform` version 1.1.5, `digitalocean` provider 2.17.1, and `kubernetes` provider 2.8.0.