Open bweston92 opened 2 years ago
Hi @bweston92,
It looks like there was an issue with the API while polling for cluster status post-create. You should be able to recover from this by importing the cluster:
```sh
terraform import digitalocean_kubernetes_cluster.<name> <cluster ID>
```
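For example (the resource name and cluster ID below are placeholders; `doctl kubernetes cluster list` can be used to look up the ID of the stuck cluster):

```sh
# Find the ID of the half-created cluster.
doctl kubernetes cluster list --format Name,ID

# Import it into Terraform state so subsequent apply/destroy runs see it.
terraform import digitalocean_kubernetes_cluster.main 9d2c1f7e-0000-0000-0000-000000000000
```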
Hi, thanks for taking the time to reply.
Is there a way, if creation fails, for the provider to issue a delete and wait for the cluster to be removed?
Our `apply` and `destroy` calls don't have user intervention (scheduled jobs / test env pipeline), and the less bloat like that the better. We would have to add labels to the clusters and create something that could query the clusters with those labels and import them, which is not an ideal user experience.
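For reference, a rough sketch of that workaround (using DigitalOcean tags as the labels, with a hypothetical `ci-managed` tag, `jq` available on the runner, and a placeholder resource address):

```sh
# Sketch only: look up the tagged cluster's ID, then import it.
# Each cluster would need its own resource address in state.
id=$(doctl kubernetes cluster list --output json \
  | jq -r '.[] | select(.tags[]? == "ci-managed") | .id')
terraform import digitalocean_kubernetes_cluster.ci "$id"
```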
Describe the bug
If a Kubernetes cluster fails during creation, further attempts to `terraform apply` will fail because the cluster name already exists; however, running `terraform destroy` does not remove the cluster, so you're left with a zombie cluster.
Affected Resource(s)
digitalocean_kubernetes_cluster
Actual Behavior
1. During the creation of a Kubernetes cluster, it fails for an unknown reason.
2. Subsequent attempts to apply get a 422.
3. A destroy, however, does not remove the cluster.
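The sequence can be sketched as follows (assuming a configuration containing a single `digitalocean_kubernetes_cluster` resource):

```sh
terraform apply    # step 1: cluster creation fails for an unknown reason
terraform apply    # step 2: fails with a 422, the cluster name already exists
terraform destroy  # step 3: completes, but the half-created cluster remains on the account
```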