Open · alon-z opened 3 years ago
Interesting. It's very possible there is an implicit dependency edge required that is not being captured, and that this only sometimes leads to a problem because these deletes end up running in parallel and racing with one another.
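As a sketch of what such an edge would do: in Pulumi, a resource that declares `dependsOn` the cluster is deleted *before* the cluster on `pulumi destroy`. The ConfigMap below is a hypothetical stand-in for the nodeAccess config map, not the component's actual internals, and the resource names are placeholders:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// A cluster like the one in the report (options elided).
const cluster = new eks.Cluster("repro-cluster");

// Hypothetical stand-in for the nodeAccess/aws-auth config map. Because it
// declares an explicit dependsOn edge to the cluster, Pulumi deletes it
// before the cluster on destroy. If the real config map lacks this edge,
// its delete can run in parallel with -- and race -- the cluster delete.
const nodeAccessLike = new k8s.core.v1.ConfigMap("node-access-like", {
    metadata: { name: "node-access-like", namespace: "kube-system" },
    data: { example: "value" },
}, { provider: cluster.provider, dependsOn: [cluster] });
```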
In case anyone comes across this, I was at least able to delete the stack by setting the `PULUMI_K8S_DELETE_UNREACHABLE=true` env var in the shell before running `pulumi down`. Not ideal, of course; I would expect a delete operation to succeed when the cluster in question has already been deleted.
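Concretely, the workaround from the comment above:

```sh
# Allow pulumi-kubernetes to delete resources whose cluster is unreachable,
# e.g. because the cluster itself has already been deleted.
export PULUMI_K8S_DELETE_UNREACHABLE=true
pulumi down
```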
Destroying a cluster that was created with the `roleMappings` option will sometimes fail, because the cluster is destroyed before the nodeAccess config map is deleted.

Steps to reproduce
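As a minimal sketch of the setup described, a pulumi-eks program along these lines creates a cluster with `roleMappings` (the role ARN, username, and resource names are placeholders, not the reporter's actual configuration):

```typescript
import * as eks from "@pulumi/eks";

// Placeholder ARN -- substitute a real IAM role in your account.
const adminRoleArn = "arn:aws:iam::123456789012:role/eks-admin";

// Creating the cluster with roleMappings makes pulumi-eks manage the
// aws-auth (nodeAccess) config map; destroying this stack is where the
// failure described above intermittently shows up.
const cluster = new eks.Cluster("repro-cluster", {
    roleMappings: [{
        roleArn: adminRoleArn,
        username: "admin",
        groups: ["system:masters"],
    }],
});

export const kubeconfig = cluster.kubeconfig;
```

Running `pulumi up` and then `pulumi down` on a stack like this should hit the path described.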
Expected: Cluster deleted
Actual: