kubernetes-sigs / prow

Prow is a Kubernetes-based CI/CD system developed to serve the Kubernetes community. This repository contains the Prow source code and the Hugo sources for the Prow documentation site.
https://docs.prow.k8s.io
Apache License 2.0

prow: handle the case of re-triggering an expired GitHub workflow #192

Closed: MadhavJivrajani closed this issue 1 week ago

MadhavJivrajani commented 5 months ago

Original issue here: https://github.com/kubernetes/test-infra/issues/31645


https://github.com/kubernetes/test-infra/pull/31132 added the functionality of re-triggering only the failed jobs in a GitHub workflow. However, the trigger plugin does not handle the case where the failed workflow run has expired (this came up in the discussion here: https://github.com/kubernetes/test-infra/pull/30054#issuecomment-1637641500; cc @Priyankasaggu11929).

Workflows, or jobs within workflows, can only be re-run within min(30 days, log retention period) (or at least that is my understanding based on https://docs.github.com/en/actions/managing-workflow-runs/re-running-workflows-and-jobs). Since our repos have a 90-day log retention period, it seems we can only ever re-run jobs within a 30-day window.

Regardless, if a repo uses Prow to trigger GitHub workflows, /retest on expired workflow runs will yield no result from Prow. We should probably make the bot post a comment indicating that one or more workflow runs have expired.

Step 1: Modify the trigger logic here to include a comment if at least one of the runs is expired:

https://github.com/kubernetes-sigs/prow/blob/924d31f0c454382aa6b721f442ea1ac39275f4d2/pkg/plugins/trigger/generic-comment.go#L160

The status code returned by an API call to re-run a failed, expired workflow run is 403 (unfortunately, GitHub does not document this), along with an error message. There are two issues with trying to rely on this returned 403:

  1. A 403 can also indicate the need to retry the request. The Prow GitHub client will attempt a retry when a status code of 403 is returned, but only if the error returned by sending the request is nil:

https://github.com/kubernetes-sigs/prow/blob/924d31f0c454382aa6b721f442ea1ac39275f4d2/pkg/github/client.go#L946-L958

  2. The error returned by executing a request to re-run an expired workflow run is non-nil. To verify this, I ran the following command:
```
❯ curl -L -X POST -s -w "%{http_code}\n" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer MY_GITHUB_TOKEN" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/kubernetes/utils/actions/runs/5577562429/rerun-failed-jobs
```

Note: you may not be able to run the exact same command above since you may not have write access on kubernetes/utils.

The output:

```
{
  "message": "Unable to retry this workflow run because it was created over a month ago",
  "documentation_url": "https://docs.github.com/rest/actions/workflow-runs#re-run-failed-jobs-from-a-workflow-run"
}
403
```

The same error is propagated into the Prow GitHub client as well.

As a result, when the Prow client exhausts all of its retries with a non-nil error, it returns a status code of 0, which indicates that the client could not communicate with the server over HTTP at all (which is clearly not true here). I've demonstrated this with a unit test: https://github.com/MadhavJivrajani/test-infra/commit/38c5f900e22c844719e9786e0a0f64b56e705505
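For context, that control flow looks roughly like the following. This is a simplified, hand-written model of the linked retry logic, not a verbatim copy of the Prow code; `doer` stands in for the client's request execution:

```go
// Simplified, illustrative model of the retry logic linked above in
// pkg/github/client.go; this is NOT the actual Prow code.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// doer stands in for the client's request execution.
type doer func() (*http.Response, error)

// requestRetry mimics the relevant control flow: the 403 special-casing
// only runs when err is nil, so a 403 paired with a non-nil error takes
// the plain backoff path and, once retries are exhausted, surfaces as
// status code 0.
func requestRetry(do doer, maxRetries int, initialDelay time.Duration) (int, error) {
	backoff := initialDelay
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		resp, err := do()
		if err == nil {
			// Only here does the real client inspect a 403's rate-limit /
			// retry-after headers and wait before retrying.
			return resp.StatusCode, nil
		}
		// Non-nil error (our expired-run case): back off and retry blindly.
		lastErr = err
		time.Sleep(backoff)
		backoff *= 2
	}
	// All retries exhausted with a non-nil error: the caller sees 0, as if
	// the server was never reached over HTTP.
	return 0, lastErr
}

func main() {
	// Simulate the expired-run case: the request "fails" with a non-nil
	// error even though the server answered with a 403.
	code, err := requestRetry(func() (*http.Response, error) {
		return nil, errors.New("403: Unable to retry this workflow run because it was created over a month ago")
	}, 3, 10*time.Millisecond)
	fmt.Println(code, err) // prints: 0 403: Unable to retry this workflow run ...
}
```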

The only way (for now) to know whether we are trying to re-run an expired workflow run is to parse the error message itself. We could probably do something like:

```go
strings.Contains(err.Error(), "Unable to retry this workflow run because it was created over a month ago")
```

If the above condition is true, post a comment saying there are expired workflow runs. It might also be worth including what can be done to mitigate this; one popular workaround I've seen is to rebase or force-push an empty commit. A rough sketch of this logic follows.
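Concretely, a minimal sketch of the Step 1 change could look like this. The helper, the `githubClient` interface, and the method names (`TriggerFailedGitHubJobs`, `CreateComment`) are illustrative assumptions here, not necessarily the exact trigger-plugin API:

```go
// Illustrative sketch only; names are assumptions, not the exact plugin code.
package trigger

import (
	"fmt"
	"strings"
)

const expiredRunMsg = "Unable to retry this workflow run because it was created over a month ago"

// githubClient is the subset of the Prow GitHub client this sketch assumes.
type githubClient interface {
	TriggerFailedGitHubJobs(org, repo string, runID int) error
	CreateComment(org, repo string, number int, comment string) error
}

// retriggerFailedRuns re-runs each failed workflow run and, if any of them
// have expired, posts a single comment explaining the mitigation.
func retriggerFailedRuns(gc githubClient, org, repo string, prNumber int, runIDs []int) error {
	expired := false
	for _, id := range runIDs {
		err := gc.TriggerFailedGitHubJobs(org, repo, id)
		if err == nil {
			continue
		}
		if strings.Contains(err.Error(), expiredRunMsg) {
			expired = true // remember, but keep re-running the remaining runs
			continue
		}
		return fmt.Errorf("re-running workflow run %d: %w", id, err)
	}
	if expired {
		comment := "One or more failed GitHub workflow runs have expired and can no " +
			"longer be re-run. Push a new commit (for example, rebase or add an " +
			"empty commit) to trigger fresh runs."
		return gc.CreateComment(org, repo, prNumber, comment)
	}
	return nil
}
```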

Step 2: Add tests

There aren't any tests for triggering GitHub workflows in general, so if we do this, we need to make sure we add test coverage. One possible shape for such a test is sketched below.
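For example, a unit test against the hypothetical `retriggerFailedRuns` helper sketched above could look like the following; a real test would live in the trigger plugin and use its existing fakes rather than this hand-rolled one:

```go
// Sketch of the kind of coverage Step 2 asks for, written against the
// hypothetical helper above.
package trigger

import (
	"errors"
	"strings"
	"testing"
)

// fakeClient treats every run as expired and records posted comments.
type fakeClient struct {
	comments []string
}

func (f *fakeClient) TriggerFailedGitHubJobs(org, repo string, runID int) error {
	return errors.New(expiredRunMsg)
}

func (f *fakeClient) CreateComment(org, repo string, number int, comment string) error {
	f.comments = append(f.comments, comment)
	return nil
}

func TestExpiredRunsGetAComment(t *testing.T) {
	fc := &fakeClient{}
	if err := retriggerFailedRuns(fc, "org", "repo", 42, []int{1, 2}); err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if len(fc.comments) != 1 || !strings.Contains(fc.comments[0], "expired") {
		t.Fatalf("expected one comment about expired runs, got %v", fc.comments)
	}
}
```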

Follow-ups

It's worth looking into the client returning a status code of 0, since a request that actually reached the server deserves a real status code, but that is a follow-up. Also note that, as things stand, we will retry and back off every time we attempt to re-run an expired workflow run. There may be a way to short-circuit that; one possibility is sketched below.
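One possible short-circuit, expressed as a hypothetical variant of the simplified retry loop sketched earlier (it reuses the `doer` type from that sketch, needs `"strings"` added to the imports, and is not current Prow behavior):

```go
// Hypothetical variant of the requestRetry sketch above: treat the
// expired-run error as permanent and fail fast instead of backing off.
func requestRetryFailFast(do doer, maxRetries int, initialDelay time.Duration) (int, error) {
	backoff := initialDelay
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		resp, err := do()
		if err == nil {
			return resp.StatusCode, nil
		}
		if strings.Contains(err.Error(), "created over a month ago") {
			// An expired run will never succeed on retry; surface the real
			// 403 immediately instead of ending with a status code of 0.
			return http.StatusForbidden, err
		}
		lastErr = err
		time.Sleep(backoff)
		backoff *= 2
	}
	return 0, lastErr
}
```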

/sig contributor-experience testing

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 week ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 week ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/prow/issues/192#issuecomment-2468586557):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.