Closed: cpanato closed this 2 years ago
@cpanato: The label(s) area/ci cannot be applied, because the repository doesn't have them.
Will close because it is a duplicate of https://github.com/kubernetes/test-infra/issues/18551
/close
@cpanato: Closing this issue.
Reopening this to evaluate how the current release-blocking and release-informing policy compares with what's proposed in https://github.com/kubernetes/test-infra/issues/18599 .
/reopen
@alejandrox1: Reopened this issue.
I would recommend we strive to make the criteria something that can be enforced via tests or automation. I still think the final decision should come down to humans, but it's not clear to me how often people actually check adherence to these criteria.
Taking a look at the release-blocking criteria:
Have the average of the 75th percentile duration of all runs for a week finishing in 120 minutes or less
This used to be charted; p75_duration in http://storage.googleapis.com/k8s-metrics/job-health-latest.json is daily, not weekly.
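For illustration, a check along these lines could work; this is a minimal sketch that guesses at the JSON layout (job name mapping to a list of daily records) and assumes p75_duration is in minutes, so treat the field access as hypothetical:

```python
# Minimal sketch, not real tooling. Assumes (unverified) that the JSON maps
# job name -> list of daily records, each with "p75_duration" in minutes.
import json
import urllib.request

URL = "http://storage.googleapis.com/k8s-metrics/job-health-latest.json"

def weekly_p75_ok(job_name, limit_minutes=120):
    with urllib.request.urlopen(URL) as resp:
        health = json.load(resp)
    days = health[job_name][-7:]  # hypothetical layout: 7 most recent days
    avg = sum(d["p75_duration"] for d in days) / len(days)
    return avg <= limit_minutes
```

Averaging seven daily p75 values only approximates a true weekly p75 computed over all runs, but it may be close enough to alert on.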
Run at least every 3 hours
If every job is a prowjob, we could statically check the job configs that use interval; if not, we could approximate by using runs from http://storage.googleapis.com/k8s-metrics/job-health-latest.json and alert if it's less than 8 (a job running every 3 hours completes 8 runs per day).
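A static check against periodics' interval plus the runs-based fallback might look like this; the periodics/name/interval YAML shape matches prow job configs, but parse_hours is a made-up helper that only handles h/m suffixes, and the shape of the daily runs metric is assumed:

```python
# Sketch only: the periodics/name/interval shape matches prow job configs,
# but parse_hours is a hypothetical helper and real intervals may be richer.
import yaml  # third-party: pyyaml

def parse_hours(interval):
    # Crude parser for durations like "2h" or "30m"; anything else errors.
    if interval.endswith("h"):
        return float(interval[:-1])
    if interval.endswith("m"):
        return float(interval[:-1]) / 60
    raise ValueError("unhandled interval: %s" % interval)

def slow_periodics(prow_config_path, max_hours=3):
    # Return names of interval-based periodics scheduled less often
    # than every max_hours.
    with open(prow_config_path) as f:
        config = yaml.safe_load(f)
    return [
        job["name"]
        for job in config.get("periodics", [])
        if "interval" in job and parse_hours(job["interval"]) > max_hours
    ]

def runs_ok(daily_runs, min_runs=8):
    # Fallback approximation: a 3h cadence implies >= 8 runs per day.
    return daily_runs >= min_runs
```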
Be able to pass 3 times in a row against the same commit
We don't measure this currently; is it possible for us to do so, or should we use some other measure?
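If we did start recording which commit each run tested, the check itself would be easy. A purely hypothetical sketch, assuming a list of (commit, passed) tuples per job in start order:

```python
# Purely hypothetical: assumes we had (commit, passed) tuples per run,
# in start order, which we don't collect today.
from itertools import groupby

def passes_n_in_a_row(runs, needed=3):
    for _, group in groupby(runs, key=lambda r: r[0]):  # group by commit
        streak = 0
        for _, passed in group:
            streak = streak + 1 if passed else 0
            if streak >= needed:
                return True
    return False
```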
Be Owned by a SIG, or other team, that is responsive to addressing failures, and whose alert email is configured in the job.
Ownership is enforced via static checks against the testgrid config. I'm not sure how we would measure "that is responsive", though.
Have passed 75% of all of its runs in a week, and have failed for no more than 10 runs in a row
This used to be charted; failure_rate in http://storage.googleapis.com/k8s-metrics/job-health-latest.json is daily, not weekly. I think testgrid's summary page shows how many of the 10 most recent columns passed, but I'm not sure we measure this over time.
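Both halves would be mechanical to check if we keep the data. A sketch using the same assumed daily-record schema as above, plus a per-run pass/fail history we don't currently retain:

```python
# Sketch with the same assumed daily-record schema ("failure_rate" field),
# plus a per-run history (True = passed) that we'd need to start retaining.
def weekly_pass_rate_ok(daily_records, min_pass=0.75):
    week = daily_records[-7:]
    avg_failure = sum(d["failure_rate"] for d in week) / len(week)
    return (1 - avg_failure) >= min_pass

def no_long_failure_streak(results, max_failures=10):
    streak = 0
    for passed in results:
        streak = 0 if passed else streak + 1
        if streak > max_failures:  # more than 10 in a row violates the policy
            return False
    return True
```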
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Added this to my backlog for next week.
Took the liberty of renaming the issue so its title matches similar ones.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I would like to move this over to kubernetes/sig-testing with the intent of tackling this in v1.22, any objections?
Sounds good! Thank you for catching up with this :pray:
/sig testing
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/reopen
/remove-lifecycle rotten
/lifecycle frozen
@spiffxp: Reopened this issue.
@cpanato: The label(s) area/release-eng, area/ci cannot be applied, because the repository doesn't have them.
/sig testing
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Describe the CI policy for jobs, like:
We have a policy for blocking and informing jobs: https://github.com/kubernetes/sig-release/blob/master/release-blocking-jobs.md . If we compare this policy with what is proposed in https://github.com/kubernetes/test-infra/issues/18599, what would we add? What would we change? We should evaluate what changes we need to make to help ensure we are acting on useful information, and to check that CI jobs are maintained and healthy.
/area release-eng
/area ci
/kind documentation
/priority important-soon
/milestone v1.20