ptone opened this issue 5 years ago
I tried testing this and I don't think I'm seeing the failure that you are:
kubectl apply -f pg.yaml && kubectl wait --for=condition=Ready pod -l app="postgres" --timeout 2m
Output (after waiting at a blank prompt for 20 or so seconds to download the postgres docker container for the first time and run it):
deployment.apps/postgres created
pod/postgres-86cb4984cf-bhnbh condition met
What error message did you get?
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/sig cli /area kubectl /kind bug
/remove-lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
This is definitely a valid issue; kubectl wait is still a young command. From a quick look at the code, the problem seems to be with this line https://github.com/kubernetes/kubernetes/blob/8dd93ca94c253c161e9affcd22ffa0e25c8e683d/staging/src/k8s.io/kubectl/pkg/cmd/wait/wait.go#L230 which re-uses ResourceFinder. We most probably need to support some kind of wait-for-create, similar to how we wait for other conditions.
/remove-kind bug /kind feature
/priority backlog
/remove-lifecycle rotten /lifecycle frozen
we most probably need to support some kind of wait for create, similarly how we wait for other conditions.
@soltysh are you thinking this would be a new --for=create option that works similarly to how it waits for deletion, but instead waits for the resource to exist (without considering its status)?
The wait command is useful for sequential, automated deployment of Kubernetes resources. However, it is currently rather brittle in automation: if the matching resources have not been created yet, wait simply exits with an error. For example, when kubectl wait runs immediately after kubectl apply, it errors out because pods matching the given labels are still being created. This means automation needs some other "pre-wait" step, and if that step is a bash until loop, it might as well just do the job of wait itself.
I'd propose that kubectl wait absorb doesNotExist-style errors within the timeout period, and only emit them at timeout.
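The "pre-wait" until loop mentioned above can be sketched as a small shell helper; the function name and timeout handling here are illustrative, not part of kubectl:

```shell
# Hypothetical pre-wait helper: poll until `kubectl get` finds at least
# one matching resource, so the caller can then hand off to `kubectl wait`.
# usage: wait_for_exists <timeout-seconds> <kubectl get args...>
wait_for_exists() {
  timeout=$1; shift
  deadline=$(( $(date +%s) + timeout ))
  # `kubectl get -o name` prints nothing when no resource matches, which
  # also covers label selectors (those exit 0 even when nothing matches).
  until [ -n "$(kubectl get "$@" -o name 2>/dev/null)" ]; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for resource to exist: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example, using the labels from the report at the top of this thread:
# wait_for_exists 120 pod -l app=postgres &&
#   kubectl wait --for=condition=Ready pod -l app=postgres --timeout=2m
```

Having this loop exist at all is the argument for building the behavior into kubectl wait itself.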