eickler opened this issue 3 years ago
The workaround is probably to remove the previous kubeconfig entry after terraform destroy. I will try it out next time and add a comment to the cleanup procedure.
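For anyone trying the same thing, the cleanup would look roughly like this. The entry names below (my-eks-context etc.) are placeholders for whatever your terraform run wrote into ~/.kube/config, so check get-contexts first:

```shell
# List what is currently stored in the kubeconfig.
kubectl config get-contexts

# Hypothetical names: replace with the stale entries from the previous
# terraform run. Each kubeconfig entry type is deleted separately.
kubectl config delete-context my-eks-context   # drop the stale context
kubectl config delete-cluster my-eks-cluster   # drop the stale cluster/URL entry
kubectl config delete-user    my-eks-user      # drop the stale credentials
```

Deleting the whole ~/.kube folder (as suggested below) also works, but removes entries for unrelated clusters as well.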
@eickler, @ankitm123 I have exactly the same problem. Should I delete the contents of the ~/.kube folder after terraform destroy?
@ahmetcetin Yes, that worked for me. Sorry for the lack of updates.
The latest version should fix this issue: the helm provider was using outdated credentials. Can someone try the latest version and confirm?
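For reference, the usual way to keep the helm provider from caching stale EKS credentials is to fetch a token at apply time through an exec block instead of relying on a kubeconfig or a static token. A sketch, assuming an aws_eks_cluster data source and a module output module.eks.cluster_name (both names are assumptions about your setup):

```hcl
# Sketch: helm provider authenticating against EKS via an exec plugin,
# so a fresh token is generated on every terraform run instead of
# reusing credentials from a previous apply.
data "aws_eks_cluster" "cluster" {
  # "module.eks.cluster_name" is an assumed output of your EKS module.
  name = module.eks.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}
```

If the problem is instead an ordering issue, an explicit depends_on from the helm_release to the cluster resource would also rule out the race described in the issue.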
Summary
When creating a new EKS cluster from scratch with terraform, kuberhealthy is installed with the cluster URL from the previous run of terraform.
Steps to reproduce the behavior
Expected behavior
Cluster is created and kuberhealthy is installed.
Actual behavior
where <old url> is the cluster URL of the previous run. Note that kubeconfig is correctly updated with the new cluster when terraform aborts with the error message. Maybe there is a missing dependency or a race condition. The terraform log first shows
and then much later
Terraform version
The output of terraform version is:
Module version
1.11.0
Operating system
MacOS