rojopolis opened this issue 4 years ago
Thanks for reporting! Just to clarify: would there ever be a concern with deleting the workspace and stranding the resources created by the provider (e.g., resources created in AWS that are no longer in state because the workspace was deleted)? The current workflow does not delete the workspace because the finalizer is written to accommodate this concern. I think we could potentially handle this by checking whether there are resources in state, but I wanted to get some thoughts on the approach.
Just to clarify, would there ever be a concern with deleting the workspace and stranding the resources created by the provider (e.g., resources created in AWS no longer in state due to workspace deletion)?
Yes, that seems like it would be a problem. In my trivial example that case isn't possible, but if resources are created, they should be destroyed.
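The check suggested above (only allow workspace deletion when nothing is left in state) could be sketched as follows. This is an illustrative Python sketch, not the operator's actual Go code; the function names and the idea of parsing `terraform state list` output are assumptions.

```python
def state_has_resources(state_list_output: str) -> bool:
    """Return True if `terraform state list` output names any resources.

    `terraform state list` prints one resource address per line and
    prints nothing when the state is empty.
    """
    return any(line.strip() for line in state_list_output.splitlines())


def safe_to_delete_workspace(state_list_output: str) -> bool:
    # Deleting the workspace is only safe when no managed resources
    # would be stranded in the provider (e.g., AWS).
    return not state_has_resources(state_list_output)
```

Under this sketch, an empty state permits deletion, while a state still tracking (for example) an S3 bucket blocks it until a destroy succeeds.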
FYI, this also happens when the destroy plan fails in Terraform Cloud: the workspace becomes stuck and undeletable.
terraform-k8s & Kubernetes Version
AWS EKS 1.14
Terraform 0.12.24
Operator: hashicorp/terraform-k8s:0.1.1-alpha
Affected Resource(s)
Workspace
Expected Behavior
kubectl delete workspace/buckets completes and the Workspace resource is removed.
Actual Behavior
kubectl delete hangs indefinitely.
sync-workspace logs:
Important Factoids
My config was invalid because the module in use requires a list as an input variable (#11). It seems like the failed -> deleted transition should be valid.
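The failed -> deleted transition described above could be sketched like this. This is a hypothetical Python illustration, not the operator's Go implementation; the function name, parameters, and status string are assumptions. The idea is that the finalizer should be removable whenever the state holds no resources, regardless of whether the last run failed.

```python
def can_remove_finalizer(status: str, resources_in_state: int) -> bool:
    """Decide whether the workspace finalizer may be removed."""
    if resources_in_state > 0:
        # Deleting now would strand provider resources; keep the finalizer.
        return False
    # Empty state: safe to delete regardless of status. In particular,
    # the failed -> deleted transition is allowed, so a workspace whose
    # run never succeeded (e.g., an invalid input variable) is not
    # stuck forever.
    return True
```

With this rule, a workspace that failed before creating anything can still be deleted, while a workspace whose destroy plan failed with resources remaining would stay blocked until those resources are cleaned up.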
References
#11