goern opened this issue 4 years ago
This can be generalized to any message producer in deployment.
Except for the user-facing API.
Hi there :wave:, I'm willing to help you with adding liveness and readiness probes to the package-update-job CronJob in your OKD cluster.
Do you have 5 minutes to onboard me on your release management process?
Additionally, where does the package-update-job Kubernetes recipe live?
Thanks, Leslie
Hi!
Thanks for your interest!
We deploy from https://github.com/thoth-station/thoth-application repo, you can find package-update related bits in https://github.com/thoth-station/thoth-application/blob/master/package-update/base/cronjob.yaml
There is already an existing liveness probe that could be changed.
It's worth considering whether the liveness probe should be more sophisticated. See also the related discussion at https://github.com/robinhood/faust/issues/286.
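Not the actual probe from cronjob.yaml, just a rough sketch of where a liveness probe sits on the CronJob's pod template; the schedule, image, command and thresholds below are placeholders:

```yaml
apiVersion: batch/v1           # batch/v1beta1 on older clusters
kind: CronJob
metadata:
  name: package-update
spec:
  schedule: "0 * * * *"        # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: package-update
              image: package-update:latest   # placeholder image reference
              livenessProbe:
                exec:
                  # placeholder check; a more sophisticated probe could verify
                  # that the faust producer is actually making progress
                  command: ["/bin/sh", "-c", "test -f /tmp/healthy"]
                initialDelaySeconds: 10
                periodSeconds: 60
                failureThreshold: 3
```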
Sadly, we do not have any public instance accessible for you to test your changes, but we can definitely cooperate.
F.
CC @KPostOffice @saisankargochhayat @pacospace
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
@fridex package-update is a cronjob, do probes make sense here?
It makes sense to have a mechanism to kill the pod if it has failed for some reason. Previously, we had issues from time to time where a pod was stuck in a pending state or in a running state (but the Python interpreter was not running) due to some cluster issue. To prevent that, it might be a good idea to configure activeDeadlineSeconds, also for the other CronJobs we have.
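Roughly, that would mean something like the following in the CronJob's jobTemplate (a sketch only; the 7200s deadline and backoffLimit are arbitrary examples, not values tuned for package-update):

```yaml
spec:
  jobTemplate:
    spec:
      activeDeadlineSeconds: 7200   # terminate the Job (and its pods) after 2h
      backoffLimit: 1               # optionally also cap retries
```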
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@sesheta: Closing this issue.
/remove-lifecycle rotten /reopen /triage accepted
@fridex: Reopened this issue.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@sesheta: Closing this issue.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
The bot was misbehaving at that time, as the issue was not flagged as rotten at that point. Fixing that and adding a priority:
/reopen /remove-lifecycle rotten /priority backlog
@codificat: Reopened this issue.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/lifecycle frozen
/sig devsecops I don't think we receive traffic in the pod, so a readiness probe would not really make sense. For a liveness probe, isn't it simpler to just exit with an error code while logging the error? Instead of adding code to check, since it only results in a SIGTERM anyway...
See the comment above about using activeDeadlineSeconds instead. This issue could probably do with an edit.
> It makes sense to have a mechanism to kill the pod if it has failed for some reason. Previously, we had issues from time to time where a pod was stuck in a pending state or in a running state (but the Python interpreter was not running) due to some cluster issue. To prevent that, it might be a good idea to configure activeDeadlineSeconds, also for the other CronJobs we have.
Not the same level, though: activeDeadlineSeconds is for failing the Job, regardless of the pod(s). Excerpt from the doc: "The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded."
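For illustration, the same field name exists at two levels of the manifest with different semantics (both values below are placeholders):

```yaml
spec:
  jobTemplate:
    spec:
      activeDeadlineSeconds: 7200       # Job-level: the whole Job fails with DeadlineExceeded
      template:
        spec:
          activeDeadlineSeconds: 3600   # Pod-level: only bounds a single pod's runtime
```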
> This issue could probably do with an edit.
Agreed. @goern, could you clarify what the end goal is? In particular, re-reading the description:
> add a liveness/readiness probe to the faust producer deployment, so that we can check if package-update is working.
> we should be able to do basic testing of whether a new version of package-update is deployable and runnable.
-> these seem like two different things (Operations vs. Testing).
From my point of view, it is an operational problem: package-update is a critical component, therefore we need to observe whether it is working correctly. activeDeadlineSeconds seems to be a technical solution for 'an auto-healing attempt', but it does not help with observing this service.
Maybe we should close this and #49 and restate:
> As a Thoth Operator, I want to observe the package-update-job, so that I can figure out if it is being executed, and so that in the case of its failure a support issue is opened.
wdygt?
On Wed, Sep 21, 2022 at 11:49:42PM -0700, Christoph Görn wrote:
> From my point of view, it is an operational problem: package-update is a critical component, therefore we need to observe whether it is working correctly. activeDeadlineSeconds seems to be a technical solution for 'an auto-healing attempt', but it does not help with observing this service.
That sums it up. activeDeadlineSeconds would prevent a deadlock/livelock from going undetected for too long.
> Maybe we should close this and #49 and restate:
> As a Thoth Operator, I want to observe the package-update-job, so that I can figure out if it is being executed, and so that in the case of its failure a support issue is opened.
> wdygt?
Yes, I think we should rephrase the issue.
The metrics are already there for observing job state (kube-state-metrics has kube_job_failed, it seems), so we would create an alert and maybe use something like https://github.com/m-lab/alertmanager-github-receiver for hooking it up to GitHub issues.
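A rough sketch of what such an alert rule could look like (only kube_job_failed itself comes from kube-state-metrics; the alert name, job_name regex, duration, and annotations are made up for illustration):

```yaml
groups:
  - name: thoth-package-update          # hypothetical rule group
    rules:
      - alert: PackageUpdateJobFailed   # hypothetical alert name
        expr: kube_job_failed{condition="true", job_name=~"package-update.*"} > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "package-update Job {{ $labels.job_name }} failed"
          description: "Routed via alertmanager-github-receiver to open a support issue."
```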
By the way, that approach (activeDeadlineSeconds, or more broadly a timeout, which I suppose can be expressed in different ways for Argo/Tekton etc.) would scale to the other jobs we have. For example, https://github.com/thoth-station/thoth-application/issues/2604 was an instance of a job not finishing (well, the pod was crashing but kept being restarted).
**Is your feature request related to a problem? Please describe.**
Add a liveness/readiness probe to the faust producer deployment, so that we can check if package-update is working.

**Describe the solution you'd like**

**Describe alternatives you've considered**
No probes.

**Additional context**
We should be able to do basic testing of whether a new version of package-update is deployable and runnable.