saschagrunert opened this issue 11 months ago
Clear the latest milestone on master PRs if it was applied before code freeze
I'm not sure a Prow plugin is the best fit for that. Plugins usually react to an event (received via GitHub webhooks) for a concrete PR. While it's technically doable to remove the milestone from multiple PRs, I'm slightly concerned about issues that might arise from that (e.g. concurrency issues).
I think it would be better to have a ProwJob handle that, but we would need to handle GitHub credentials in some way.
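For illustration, a minimal Go sketch of what such a periodic job's logic could look like, assuming go-github and a token supplied via the environment; the repository query and milestone title are placeholders, not an actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/google/go-github/v58/github"
	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()

	// Assumption: the job reads a GitHub token from the environment,
	// e.g. mounted as a secret the way other trusted jobs do it.
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: os.Getenv("GITHUB_TOKEN")})
	client := github.NewClient(oauth2.NewClient(ctx, ts))

	// Placeholder milestone title; a real job would discover the latest
	// release milestone instead of hard-coding it.
	const milestone = "v1.30"

	// Find open PRs against master that already carry the milestone.
	query := fmt.Sprintf("repo:kubernetes/kubernetes is:pr is:open base:master milestone:%s", milestone)
	result, _, err := client.Search.Issues(ctx, query, &github.SearchOptions{
		ListOptions: github.ListOptions{PerPage: 100},
	})
	if err != nil {
		log.Fatalf("searching PRs: %v", err)
	}

	for _, pr := range result.Issues {
		// RemoveMilestone (available in recent go-github versions) updates
		// the issue with an explicit null milestone.
		if _, _, err := client.Issues.RemoveMilestone(ctx, "kubernetes", "kubernetes", pr.GetNumber()); err != nil {
			log.Printf("PR #%d: %v", pr.GetNumber(), err)
			continue
		}
		log.Printf("cleared milestone %s from PR #%d", milestone, pr.GetNumber())
	}
}
```

In practice the job would also need result pagination and a dry-run mode, and it would run as a periodic job on a trusted cluster with the token mounted as a secret.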
Check that PRs have the lgtm and approved labels before applying the latest milestone during code freeze
This is not going to solve another part of the issue: someone who isn't on the release team can use /milestone to set the milestone and get the PR merged without an ACK from the Release Team Leads. I think it might be better to change the plugin that handles the /milestone command to restrict who can apply certain milestones. For example, once code freeze is in effect, the latest milestone can be set on PRs only by the Release Team Leads and the Release Team's Release Signal team. This is similar to what we did for the cherry-pick-approve plugin recently.
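To make that concrete, here is a hedged Go sketch of the restriction; the interface, policy fields, and team slugs are hypothetical stand-ins, not Prow's actual plugin API or config:

```go
package main

import (
	"fmt"
	"strings"
)

// githubClient is a narrowed, hypothetical view of the GitHub operations the
// plugin would need; Prow's real plugin framework provides richer clients.
type githubClient interface {
	TeamMembers(org, teamSlug string) ([]string, error)
	SetMilestone(org, repo string, prNumber int, milestone string) error
}

// milestonePolicy describes who may apply which milestone once code freeze is
// in effect. Field names are illustrative, not Prow's actual config.
type milestonePolicy struct {
	CodeFreeze      bool
	LatestMilestone string
	AllowedTeams    []string // e.g. release-team-leads, release-team-release-signal
}

// handleMilestoneCommand applies the requested milestone unless code freeze is
// active and the request is for the latest milestone, in which case the
// commenter must belong to one of the allowed teams.
func handleMilestoneCommand(gh githubClient, p milestonePolicy, org, repo, commenter string, prNumber int, requested string) error {
	if p.CodeFreeze && strings.EqualFold(requested, p.LatestMilestone) {
		allowed := false
		for _, team := range p.AllowedTeams {
			members, err := gh.TeamMembers(org, team)
			if err != nil {
				return fmt.Errorf("listing members of %s/%s: %w", org, team, err)
			}
			for _, m := range members {
				if strings.EqualFold(m, commenter) {
					allowed = true
				}
			}
		}
		if !allowed {
			return fmt.Errorf("during code freeze, only members of %v may apply milestone %q", p.AllowedTeams, p.LatestMilestone)
		}
	}
	return gh.SetMilestone(org, repo, prNumber, requested)
}

// fakeGitHub lets the sketch run without real credentials.
type fakeGitHub struct{}

func (fakeGitHub) TeamMembers(org, team string) ([]string, error) {
	return []string{"alice"}, nil // stand-in member list
}

func (fakeGitHub) SetMilestone(org, repo string, pr int, m string) error {
	fmt.Printf("set milestone %q on %s/%s#%d\n", m, org, repo, pr)
	return nil
}

func main() {
	p := milestonePolicy{CodeFreeze: true, LatestMilestone: "v1.30", AllowedTeams: []string{"release-team-leads"}}
	// "bob" is not in the allowed team, so this request is rejected.
	if err := handleMilestoneCommand(fakeGitHub{}, p, "kubernetes", "kubernetes", "bob", 12345, "v1.30"); err != nil {
		fmt.Println("denied:", err)
	}
}
```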
> I think it would be better to have a ProwJob to handle that

Something similar to the ci-fast-forward job would work: https://github.com/kubernetes/test-infra/blob/94bd5f880ecc866b2076c116a017f55d80cf6906/config/jobs/kubernetes/sig-k8s-infra/trusted/releng/releng-trusted.yaml#L338-L368
> For example, once code freeze is in effect, the latest milestone can be set on PRs only by the Release Team Leads and the Release Team's Release Signal team.
I added the people restriction to the first comment. The main issue was that the milestone got applied before code freeze, which can be addressed with the first point.
Updated the issue to reflect the current state. One problem I see right now is that code freeze isn't something we can detect from a technical perspective. :thinking:
Edit: Ah, we could check the Prow config: https://github.com/kubernetes/test-infra/pull/31164/files
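Assuming code freeze is encoded the way that PR does it, as a milestone requirement on the Tide merge queries, a job or plugin could detect it by parsing the config. A rough Go sketch with a trimmed-down struct rather than Prow's real config types:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// tideQuery mirrors only the fields this sketch needs from Prow's Tide query
// config; the real config struct is richer.
type tideQuery struct {
	Repos     []string `yaml:"repos"`
	Milestone string   `yaml:"milestone"`
}

type tideConfig struct {
	Tide struct {
		Queries []tideQuery `yaml:"queries"`
	} `yaml:"tide"`
}

// codeFreezeActive reports whether any Tide query for the given repo pins a
// milestone. The assumption here is that a milestone requirement on the merge
// query implies code freeze is in effect.
func codeFreezeActive(raw []byte, repo string) (bool, string, error) {
	var cfg tideConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		return false, "", err
	}
	for _, q := range cfg.Tide.Queries {
		for _, r := range q.Repos {
			if r == repo && q.Milestone != "" {
				return true, q.Milestone, nil
			}
		}
	}
	return false, "", nil
}

func main() {
	// Illustrative config snippet resembling the structure touched in the
	// linked PR; the milestone value is a placeholder.
	raw := []byte(`
tide:
  queries:
  - repos:
    - kubernetes/kubernetes
    milestone: v1.30
`)
	active, milestone, err := codeFreezeActive(raw, "kubernetes/kubernetes")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("code freeze active: %v (milestone %s)\n", active, milestone)
}
```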
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle frozen
Main improvements right now:
Original conversation: https://kubernetes.slack.com/archives/C2C40FMNF/p1700057962137919
cc @kubernetes/release-managers @pohly @Priyankasaggu11929