kubernetes / kubectl

Issue tracker and mirror of kubectl code

Forcing `kubectl logs` to wait when container is creating #1227

Open nalepae opened 2 years ago

nalepae commented 2 years ago

What would you like to be added:

When we do kubectl logs <pod> [-f] on a container that is still being created, we get this error message:

Error from server (BadRequest): container "<container>" in pod "<pod>" is waiting to start: [ContainerCreating, PodInitializing]

In this situation, we generally keep re-running kubectl logs <pod> [-f] until the container is created.

A nice feature would be for the kubectl logs command to block until the container is created, so that we only have to run the command once.

Why is this needed: To simplify the user's life.

brianpursley commented 2 years ago

I guess you could use kubectl wait, then do kubectl logs after that.

Would something like this work?

kubectl apply -f foo.yaml && kubectl wait --for=condition=Ready pod/foo && kubectl logs foo

If so, you could turn that into a script and make it a plugin for convenience.

brianpursley commented 2 years ago

See also https://github.com/kubernetes/kubernetes/issues/79547

mpuckett159 commented 2 years ago

/triage accepted

Probably best to check whether the container is "creating" (whatever the exact keyword for it is) when doing a kubectl logs -f, and if it is, set up a wait loop, probably with a default timeout. We would likely need to wait on each container in a separate context/goroutine, so that a pod with a bunch of containers doesn't sit around waiting for all of them to become ready before outputting any logs.
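A rough shell sketch of that wait-then-stream idea (the pod foo and container app names are hypothetical; kubectl itself would do this in Go, with one goroutine per container):

# Poll for up to ~60s while the container still reports a waiting state,
# then stream its logs once it has started.
for i in {1..30}; do
  reason=$(kubectl get pod foo -o jsonpath='{.status.containerStatuses[?(@.name=="app")].state.waiting.reason}')
  [ -z "$reason" ] && break  # no waiting reason: container is running or terminated
  sleep 2
done
kubectl logs foo -c app -f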

Fonger commented 2 years ago

I guess you could use kubectl wait, then do kubectl logs after that.

Would something like this work?

kubectl apply -f foo.yaml && kubectl wait --for=condition=Ready pod/foo && kubectl logs foo

If so, you could turn that into a script and make it a plugin for convenience.

Sometimes we want to follow the logs of 50 pods like this:

kubectl logs -l name=my-service-xxx --follow --max-log-requests 50

However, if some pods are still being created, the whole request fails. We need to wait for the pods that are still creating while following the already-running pods in the meantime.
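A per-pod retry loop in the shell can approximate this today (a sketch, assuming the same name=my-service-xxx selector; it only sees pods that exist when it starts, and retries forever on permanent errors):

# Follow each matching pod in its own background process, retrying until
# that pod is ready to stream.
for pod in $(kubectl get pods -l name=my-service-xxx -o name); do
  ( until kubectl logs "$pod" --follow 2>/dev/null; do sleep 2; done ) &
done
wait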

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mpuckett159 commented 1 year ago

/remove-lifecycle stale

DavidPerezIngeniero commented 1 year ago

Any workaround?

homm commented 1 year ago

Looks like wait --for=condition=Ready doesn't work for jobs. Any workaround for this?
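For the after-the-fact case, Jobs do expose a Complete condition, so you can wait on that and then fetch (rather than follow) the logs, e.g. (a sketch; note it times out if the job fails instead of completing):

kubectl wait --for=condition=complete job/foo --timeout=300s && kubectl logs job/foo

This only returns once the job has finished, though, so it doesn't help for streaming logs while the job runs.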

Filipoliko commented 1 year ago

@homm I went with a bit of an ugly while loop to solve this.

while true ; do kubectl logs job/foo -f 2>/dev/null && break || continue ; done

Note that there is no timeout with this solution.
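If you want to bound the retries, one option is to wrap the loop in GNU coreutils timeout (a sketch; the 120-second budget is arbitrary):

timeout 120 bash -c 'until kubectl logs job/foo -f 2>/dev/null; do sleep 1; done'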

DavidPerezIngeniero commented 1 year ago

@homm I went with a bit of an ugly while loop to solve this.

while true ; do kubectl logs job/foo -f 2>/dev/null && break || continue ; done

Note that there is no timeout with this solution.

My solution (also ugly) when launching a container called sbt in a job called build:

# k is used as an alias for kubectl throughout.
filter="-l job-name=build"
success=0
for i in {1..70}; do
  # Fail fast if the sbt container was OOM-killed or exited with an error.
  case $(k get po $filter -o jsonpath='{.items[*].status.containerStatuses[?(@.name=="sbt")].state.terminated.reason}') in
    *OOMKilled*)
      echo "SBT has been killed for low memory"
      k get po | grep build-
      k get ev | grep build-
      k top no
      exit 1;;
    "Error")
      echo "SBT has ended with errors"
      k logs $filter -c sbt
      exit 1;;
  esac
  # Otherwise, poll the container state until it is running or terminated.
  msj=$(k get po $filter -o jsonpath='{.items[*].status.containerStatuses[?(@.name=="sbt")].state}')
  case $msj in
    *terminated*)
      echo
      echo "Detected container SBT has ended"
      success=1
      break;;
    *waiting*)
      echo -n .
      sleep 2;;
    *running*)
      echo
      echo "Detected container SBT under execution"
      success=1
      break;;
    *)
      echo -n '*'
      sleep 2;;
  esac
done
[[ $success != 1 ]] && {
  echo "$msj"
  echo "Timed out waiting for container SBT to be created"
  exit 1
}
k logs $filter -c sbt -f || {
  echo "Failed to get logs"
  exit 1
}
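
On kubectl v1.23+, much of that polling can arguably be replaced by kubectl wait with a JSONPath condition (a sketch, reusing the names above; it won't catch a container that terminates before ever being seen as Running, so the error handling above is still useful):

k wait po $filter --for=jsonpath='{.status.phase}'=Running --timeout=140s && k logs $filter -c sbt -f
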
ankritisachan commented 1 year ago

/assign

k8s-triage-robot commented 1 month ago

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted