[Closed] chillitom closed this issue 2 years ago
@chillitom Thanks for reporting this.
The cluster is actually destroyed only after the manifests are deleted because of the natural dependency between them. The problem here is that the deletion of manifests is asynchronous, and we currently do not wait for them to be completed before signaling CloudFormation that the resource has been deleted.
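The actual custom-resource handler is internal to the CDK, but the missing "wait" behavior described above can be illustrated with plain `kubectl`. This is a hedged sketch, not the CDK's implementation; the manifest filename and timeout are illustrative:

```shell
# Deleting a manifest only *requests* deletion; without --wait the command
# returns before Kubernetes has finished tearing down the resources
# (e.g. deregistering load balancers). The fix described above amounts to
# the handler blocking until deletion completes, as --wait does here:
kubectl delete -f server-api-manifest.yaml --wait=true --timeout=5m
```

With `--wait=true`, `kubectl` blocks until the resources are gone, which is the behavior the custom resource would need before signaling CloudFormation that deletion succeeded.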
We already have an issue tracking this that we plan to address soon.
When destroying a Stack with an EKS Cluster, the destruction can hang for hours because the cluster is destroyed before the manifest resources are disposed of.
In the case below, the destroy hangs on the 'server-api' manifest resource. At the point of the hang, the cluster and instance have already been destroyed. The manifest is likely to be slow to remove, as Kubernetes needs to perform a couple of actions to unregister the load balancers defined in it.
All other items in the stack are destroyed, and the stack is hung waiting for the deletion of a resource of type `Custom::AWSCDK-EKS-KubernetesResource`.
Reproduction Steps
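A minimal stack along these lines should reproduce the hang. This is a hedged sketch assuming the standard `aws-cdk-lib` EKS constructs; the stack name, Kubernetes version, and Service spec (`server-api`, ports, selector) are illustrative, not taken from the original report:

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import { Construct } from 'constructs';

class EksManifestReproStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const cluster = new eks.Cluster(this, 'Cluster', {
      version: eks.KubernetesVersion.V1_27,
    });

    // A Service of type LoadBalancer makes Kubernetes provision an ELB.
    // Deleting this manifest requires the (still-running) cluster to
    // deregister that load balancer, which is slow and asynchronous --
    // exactly the window in which the hang occurs.
    cluster.addManifest('server-api', {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: { name: 'server-api' },
      spec: {
        type: 'LoadBalancer',
        ports: [{ port: 80, targetPort: 8080 }],
        selector: { app: 'server-api' },
      },
    });
  }
}

new EksManifestReproStack(new App(), 'EksManifestRepro');
```

`cdk deploy` the stack, then `cdk destroy` it and observe the destroy hanging on the `Custom::AWSCDK-EKS-KubernetesResource` for the manifest.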
What did you expect to happen?
Either the manifest destruction should complete before the cluster is destroyed or the manifest destruction should be skipped altogether as the cluster no longer exists.
What actually happened?
The destroy hung trying to remove the manifest definition from a cluster that no longer existed.
Environment
Other
This is a :bug: Bug Report