kubernetes-sigs / kueue

Kubernetes-native Job Queueing
https://kueue.sigs.k8s.io
Apache License 2.0

Improve liveness and readiness probes accuracy #1546

Closed: astefanutti closed this issue 1 month ago

astefanutti commented 7 months ago

What would you like to be added:

The liveness and readiness probes should succeed only when the operator is fully operational. That is, they should report a healthy status only once all the components, i.e., webhooks, extension API servers, and controllers, have started and are servicing requests.

Why is this needed:

At the moment, the liveness and readiness probes report a healthy status unconditionally, as soon as the controller-runtime manager has started, even though the operator and its components (webhooks, extension API servers, controllers) may not be operational yet.

That can happen, for example, during the period it takes for the certificates generated by cert-controller to be propagated into the secret volume mount.

In such cases, having the probes report an accurate status can help recover from issues, and lets other components correctly wait for the operator to be ready.
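
For illustration, here is a minimal controller-runtime sketch (not Kueue's actual wiring) contrasting the unconditional ping checker with a readiness check gated on the webhook server: healthz.Ping always succeeds, while the webhook server's StartedChecker only succeeds once the server is accepting TLS connections.

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

func main() {
	// Serve /healthz and /readyz on the standard probe port.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		HealthProbeBindAddress: ":8081",
	})
	if err != nil {
		panic(err)
	}

	// healthz.Ping always returns nil, so a probe backed by it reports
	// healthy as soon as the HTTP endpoint is up, even if webhooks and
	// controllers are not serving yet.
	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		panic(err)
	}

	// StartedChecker fails until the webhook server has started and
	// accepts TLS connections, i.e. until the serving certificates
	// have landed in the secret volume mount.
	if err := mgr.AddReadyzCheck("webhooks", mgr.GetWebhookServer().StartedChecker()); err != nil {
		panic(err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```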

Completion requirements:

This enhancement requires the following artifacts:

The artifacts should be linked in subsequent comments.

mimowo commented 6 months ago

I believe this issue is already addressed with https://github.com/kubernetes-sigs/kueue/pull/1676 (and cherry-picked to 0.5.x). @astefanutti PTAL; if so, then we can close.

astefanutti commented 6 months ago

@mimowo thanks for the update. I'd say #1676 fixes most of this issue.

There are a couple of things in the scope of this issue that we may still want to address, like:

  • Including the status of the visibility API server in the readiness check
  • Improving the accuracy / usefulness of the liveness check

So I think we can close this one, and we can create some finer-grained ones for the two left-overs above. WDYT?

mimowo commented 6 months ago
  • Including the status of the visibility API server in the readiness check

Seems like a valid improvement indeed, otherwise we may be getting errors from the server.

  • Improving the accuracy / usefulness of the liveness check

Any concrete example, other than the visibility API server, where we could improve accuracy?

So I think we can close this one, and we can create some finer-grained ones for the two left-overs above. WDYT?

I prefer more fine-grained issues, so that we can prioritize / close them independently.

astefanutti commented 6 months ago
  • Including the status of the visibility API server in the readiness check

Seems like a valid improvement indeed, otherwise we may be getting errors from the server.

Agreed.
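
For reference, a minimal sketch of what such a readiness check could look like, assuming the visibility APIService name used below and that the apiregistration/v1 types are registered in the reader's scheme (both are illustrative assumptions, not Kueue's actual implementation):

```go
package probes

import (
	"fmt"
	"net/http"

	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

// visibilityReadyCheck reports ready only once kube-aggregator marks
// the visibility APIService as Available. The APIService name is an
// assumption for illustration.
func visibilityReadyCheck(reader client.Reader) healthz.Checker {
	return func(req *http.Request) error {
		apiService := &apiregistrationv1.APIService{}
		key := client.ObjectKey{Name: "v1alpha1.visibility.kueue.x-k8s.io"}
		if err := reader.Get(req.Context(), key, apiService); err != nil {
			return err
		}
		for _, cond := range apiService.Status.Conditions {
			if cond.Type == apiregistrationv1.Available && cond.Status == apiregistrationv1.ConditionTrue {
				return nil
			}
		}
		return fmt.Errorf("APIService %s is not available yet", key.Name)
	}
}
```

It could then be registered with mgr.AddReadyzCheck("visibility", visibilityReadyCheck(mgr.GetAPIReader())); using the API reader avoids caching and watching APIService objects.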

  • Improving the accuracy / usefulness of the liveness check

Any concrete example, other than the visibility API server, where we could improve accuracy?

I wonder if probing for the webhooks in the liveness probe, as is done in the readiness probe now with #1676, would be useful to mitigate the situation where cert-controller might fail to generate / inject the certificates at start time, and possibly during cert rotation. Do you think that would be an improvement over the current liveness probe implementation?

So I think we can close this one, and we can create some finer-grained ones for the two left-overs above. WDYT?

I prefer more fine-grained issues, so that we can prioritize / close them independently.

Sounds good, I'll create them and close this one.

mimowo commented 6 months ago

I wonder if probing for the webhooks in the liveness probe

Oh, interesting. I assumed the liveness probe = readiness probe, but indeed the liveness probe = health probe in the controller-runtime manager: https://github.com/kubernetes-sigs/controller-runtime/blob/7032a3cc91d2afc4c2d54e4a4891cf75da9f75f5/pkg/manager/internal.go#L281-L285.

And currently, the health probe in Kueue is just a ping. So yes, I think having the liveness probe check the webhook server is better, so that we can see if the server is down. cc @trasc @alculquicondor
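
A sketch of what wiring the same webhook check into the liveness endpoint could look like (an illustration of the idea discussed above, not the actual change):

```go
package probes

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

// addWebhookProbes registers the webhook server check on both probe
// endpoints: /readyz gates traffic until the certificates are served,
// while /healthz lets the kubelet restart a pod whose webhook server
// never comes up (e.g. cert injection failed). Note that the checker
// dials the TLS endpoint on every call, so a server that wedges later
// would fail liveness too and trigger a restart.
func addWebhookProbes(mgr ctrl.Manager) error {
	checker := mgr.GetWebhookServer().StartedChecker()
	if err := mgr.AddHealthzCheck("webhookServer", checker); err != nil {
		return err
	}
	return mgr.AddReadyzCheck("webhookServer", checker)
}
```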

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

tenzen-y commented 3 months ago

/remove-lifecycle stale

mbobrovskyi commented 1 month ago

/assign

mimowo commented 1 month ago

@astefanutti since this question was posted, we have added the readiness probe to Kueue in https://github.com/kubernetes-sigs/kueue/pull/1676 (and later improved it in follow-ups).

Do we still have some known gaps to cover in the readiness probe, or scenarios that we want to cover in the liveness probe?

If not, I would suggest parking (closing) this issue until we have such scenarios.

astefanutti commented 1 month ago

@mimowo apologies for the late reply.

I think what's been done to improve readiness mostly addresses this issue, and I agree with your suggestion to close it and create some finer-grained ones if needed.

astefanutti commented 1 month ago

/close

k8s-ci-robot commented 1 month ago

@astefanutti: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kueue/issues/1546#issuecomment-2255548899):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.