Open deanrock opened 4 years ago
@deanrock I'm not 100% sure, but this looks very similar to a problem I deal with every week, and you could try a `terraform refresh` rather than removing/adding to the state.
This also happens to me on a weekly basis. Running `plan` and `apply` on the cluster also fixes the issue for me. Note that the `kube_config` generated by `doctl kubernetes cluster kubeconfig save "$CLUSTER_ID"` does not have this issue.
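The out-of-band workaround mentioned above can be sketched as follows (assuming `doctl` is already authenticated and `CLUSTER_ID` holds the cluster's ID; the `kubectl` check is illustrative):

```shell
# Regenerate local credentials with doctl instead of relying on the
# token stored in Terraform state (CLUSTER_ID is assumed to be set).
doctl kubernetes cluster kubeconfig save "$CLUSTER_ID"

# The saved context becomes the current kubectl context; verify access:
kubectl get nodes
```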
We've hit the same issue today.
Unfortunately, we cannot recommend using interpolation to pass credentials from a `digitalocean_kubernetes_cluster` resource to the Kubernetes provider. Generally, the cluster resource should not be created in the same Terraform module where Kubernetes provider resources are also used. There are warnings against this approach in the docs for both the DigitalOcean and Kubernetes providers.

The most reliable way to configure the Kubernetes provider is to ensure that the cluster itself and the Kubernetes provider resources can be managed with separate `apply` operations. Data sources can be used to convey values between the two stages as needed.
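A sketch of that two-stage pattern (the resource and data-source names here are illustrative, not taken from this thread): a first workspace creates the cluster, and a second workspace configures the Kubernetes provider from a data source, so credentials are fetched fresh at plan time rather than read from interpolated resource attributes.

```hcl
# Stage 2: read the already-created cluster via a data source.
data "digitalocean_kubernetes_cluster" "example" {
  name = "example-cluster"
}

# Configure the Kubernetes provider from the data source, not from a
# resource attribute interpolated in the same apply.
provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.example.endpoint
  token = data.digitalocean_kubernetes_cluster.example.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.example.kube_config[0].cluster_ca_certificate
  )
}
```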
The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources.
In case anyone finds this helpful: in our case the problem was configuration drift between the cluster and our Terraform state. It is strange and I can't be sure how it happened, but it did. Once we fixed the drift by removing from the config the resources that were erroneously part of it, everything worked smoothly and the token was updated automatically.
I created a DO cluster 10 days ago. Yesterday, `terraform plan` couldn't refresh state, since the K8s API started returning `Unauthorized`. `expires_at` in the existing state was set to June 16th (it worked without issues earlier yesterday), so renewal was not triggered here: https://github.com/terraform-providers/terraform-provider-digitalocean/blob/fd9e7b8b8156599799c7b2e636f68a529929b3bf/digitalocean/resource_digitalocean_kubernetes_cluster.go#L294

I'm fairly certain I didn't manually delete the `doks` API token, so I'm not completely sure why it didn't work anymore.

Terraform Version
Affected Resource(s)
Terraform Configuration Files
Expected Behavior
We could check if the `kube_config` token is still valid, and force renew it otherwise.

Actual Behavior

`terraform plan` fails with an `Error: Unauthorized` error.

Important Factoids
Removing the cluster from state, and then importing it again, solves the problem.
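For reference, the remove-and-import workaround looks roughly like this (the resource address and cluster ID are placeholders, substitute your own):

```shell
# Drop the stale cluster entry from state, then re-import it so the
# provider fetches fresh credentials on the next plan.
terraform state rm digitalocean_kubernetes_cluster.example
terraform import digitalocean_kubernetes_cluster.example <your-cluster-id>
```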