erwinvaneyk opened this issue 5 years ago
I think it is probably okay to continue with deletion, skipping over resources that we do not have permissions to delete, assuming that we also attempt to describe the resource first.
It's probably a safe bet that if we lack permissions to describe or delete a resource, we also lacked the permissions to create it in the first place, so the chance of orphaning a resource is slim to none.
This might get a bit tricky around some of the resources that we manage through transitive dependencies of other resources, so it might require some special handling on a case by case basis.
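For illustration, here is a minimal sketch of the describe-then-delete behaviour proposed above, using the AWS SDK for Go against a security group. This is not CAPA's actual reconciler code; the security-group ID, the helper names, and the set of error codes treated as permission failures are assumptions for the example.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// isPermissionError reports whether err looks like an IAM/permissions failure.
// "UnauthorizedOperation" is the EC2 code; other services return codes such as
// "AccessDenied" or "AccessDeniedException" (assumed set, not exhaustive).
func isPermissionError(err error) bool {
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case "UnauthorizedOperation", "AccessDenied", "AccessDeniedException":
			return true
		}
	}
	return false
}

// deleteSecurityGroup describes the resource first, then deletes it,
// skipping (rather than failing the whole teardown) when either call
// is denied for lack of permissions.
func deleteSecurityGroup(svc *ec2.EC2, id string) error {
	_, err := svc.DescribeSecurityGroups(&ec2.DescribeSecurityGroupsInput{
		GroupIds: []*string{aws.String(id)},
	})
	if isPermissionError(err) {
		fmt.Printf("skipping %s: no permission to describe\n", id)
		return nil
	}
	if err != nil {
		return err
	}

	_, err = svc.DeleteSecurityGroup(&ec2.DeleteSecurityGroupInput{
		GroupId: aws.String(id),
	})
	if isPermissionError(err) {
		fmt.Printf("skipping %s: no permission to delete\n", id)
		return nil
	}
	return err
}

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)
	// Hypothetical security group ID used only for the example.
	_ = deleteSecurityGroup(svc, "sg-0123456789abcdef0")
}
```

The key design point is that permission errors are swallowed only after the describe attempt, so genuinely existing resources that we can see but not delete still surface an error instead of being silently orphaned.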
@randomvariable please add some info on the dependency ordering of AWS components
@randomvariable bump
Trying to de-scope v0.5. Moved to Next.
Definitely next. Quite a bit of refactoring to be done to make this happen.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/lifecycle frozen
/remove-lifecycle frozen
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/kind bug
What steps did you take and what happened:
What did you expect to happen: Although this specific issue comes down to a misconfiguration on my part, the same problem would occur for any type of non-transient error during the cluster deployment.
So, I would expect two things to happen:
Environment:
Kubernetes version (use kubectl version): v1.16.2
If this is an actual issue that is within the scope of capa, I would be happy to contribute a patch myself. 🙂