codecov / codecov-action

GitHub Action that uploads coverage to Codecov :open_umbrella:
https://www.codecov.io
MIT License

Retry if upload fails #926

Open alexandrnikitin opened 1 year ago

alexandrnikitin commented 1 year ago

Hi, from time to time we get 503 errors while uploading the data. The log looks like this:

...
[2023-02-24T17:38:21.359Z] ['verbose'] tag
[2023-02-24T17:38:21.359Z] ['verbose'] flags
[2023-02-24T17:38:21.359Z] ['verbose'] parent
[2023-02-24T17:38:21.360Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-2.1.0-uploader-0.3.5&token=*******....
[2023-02-24T17:38:21.360Z] ['verbose'] Passed token was 36 characters long
[2023-02-24T17:38:21.360Z] ['verbose'] https://codecov.io/upload/v4?package=github-action-2.1.0-uploader-0.3.5&...
        Content-Type: 'text/plain'
        Content-Encoding: 'gzip'
        X-Reduced-Redundancy: 'false'
[2023-02-24T17:38:23.332Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 503 - upstream connect error or disconnect/reset before headers. reset reason: connection failure
[2023-02-24T17:38:23.332Z] ['verbose'] The error stack is: Error: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 503 - upstream connect error or disconnect/reset before headers. reset reason: connection failure
    at main (/snapshot/repo/dist/src/index.js)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2023-02-24T17:38:23.332Z] ['verbose'] End of uploader: 3001 milliseconds

It would be great to have a retry mechanism with some defined timeout.

LucasXu0 commented 1 year ago

Hi there. I strongly agree with @alexandrnikitin. It wastes a lot of time because I have to re-run the whole job whenever the Codecov action fails.

[2023-03-09T18:01:33.255Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
[2023-03-09T18:01:33.256Z] ['verbose'] The error stack is: Error: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
    at main (/snapshot/repo/dist/src/index.js)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2023-03-09T18:01:33.256Z] ['verbose'] End of uploader: 1829 milliseconds
ahukkanen commented 1 year ago

This would be very helpful.

We fixed the initial problem ("Unable to locate build via Github Actions API") using some of the suggestions from the several different discussions on it.

It had been running OK for a few weeks, but now we have started to see different errors, such as:

[2023-03-13T18:04:08.821Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-3.1.1-uploader-0.3.5&token=*******&branch=fix%2F10518&build=4407915657&build_url=https%3A%2F%2Fgithub.com%2Fdecidim%2Fdecidim%2Factions%2Fruns%2F4407915657&commit=538d19c980fa26abebbdb736c28488a81c69ac8a&job=%5BCI%5D+Meetings+%28unit+tests%29&pr=10519&service=github-actions&slug=decidim%2Fdecidim&name=decidim-meetings&tag=&flags=decidim-meetings&parent=
[2023-03-13T18:04:27.515Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 500 - {"error": "Server Error (500)"}

And

[2023-03-13T18:16:44.977Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-3.1.1-uploader-0.3.5&token=*******&branch=fix%2F10518&build=4407915631&build_url=https%3A%2F%2Fgithub.com%2Fdecidim%2Fdecidim%2Factions%2Fruns%2F4407915631&commit=538d19c980fa26abebbdb736c28488a81c69ac8a&job=%5BCI%5D+Meetings+%28system+public%29&pr=10519&service=github-actions&slug=decidim%2Fdecidim&name=decidim-meetings-system-public&tag=&flags=decidim-meetings-system-public&parent=
[2023-03-13T18:17:15.139Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: HeadersTimeoutError: Headers Timeout Error

It would be really helpful if the codecov action waited a few seconds and retried, so that we don't have to rerun the whole workflow, which can take up to 30 minutes (depending on the workflow).

pavolloffay commented 1 year ago

+1

Licenser commented 1 year ago

Yes please, we hit this issue fairly constantly and it's incredibly annoying.

Licenser commented 1 year ago

To put this in perspective, all of those PRs failed because the Codecov upload failed. This means we have to re-run the jobs and pay for the minutes again :( This issue is really painful.

(screenshot of PRs with failed Codecov upload checks)

frenck commented 1 year ago

We have been experiencing a lot of issues similar to those described above. The number of jobs that fail is really getting annoying, to the point that reviewers aren't even bothering to restart the CI.

We've limited the runtime of the codecov jobs to prevent them from running for hours and exhausting our CI runners. For non-open-source projects, this can be quite costly when GitHub bills the org.

Anything we can provide to resolve this issue?

../Frenck

epenet commented 1 year ago

Just had a similar issue, this time with error code 502. https://github.com/home-assistant/core/actions/runs/4618964416/jobs/8167147703

[2023-04-05T13:27:00.542Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 502 - 
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
pmalek commented 1 year ago

We're also seeing the above-mentioned 502s.

What we ended up doing is using a retry mechanism like https://github.com/Wandalen/wretry.action to retry the upload. Setting fail_ci_if_error to false is not really an option if you care about the coverage reports.
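
For reference, here is a minimal sketch of that workaround, assuming wretry.action's documented action / with / attempt_limit / attempt_delay inputs and a coverage file at coverage.xml; pin whichever wretry release you trust and adjust the inputs to your setup:

      - name: Upload coverage to Codecov (with retries)
        uses: Wandalen/wretry.action@v1
        with:
          action: codecov/codecov-action@v3
          # inputs for the wrapped action are passed through as a multiline string
          with: |
            files: coverage.xml
            fail_ci_if_error: true
          attempt_limit: 5
          attempt_delay: 30000  # delay between attempts, in milliseconds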

Stael commented 1 year ago

Facing the same issue. Would love a retry ❤️

adiroiban commented 1 year ago

A retry would be awesome.

Here is another failure on GitHub Actions - https://github.com/twisted/twisted/actions/runs/4780033926/jobs/8497499137?pr=11845#step:13:42

[2023-04-23T19:28:07.607Z] ['error'] There was an error running the uploader:
Error uploading to https://codecov.io: Error: getaddrinfo EAI_AGAIN codecov.io
citizen-stig commented 1 year ago

This would be such a game changer for the CI experience.

If CI periodically fails for reasons like network errors, people tend to start ignoring other failures too.

GCHQDeveloper314 commented 1 year ago

We've had the following (same problem as @LucasXu0 reported above) which prevented the upload from working.

[2023-06-05T13:59:02.657Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}

This looks to have been caused by a temporary GitHub API outage, but because we don't have fail_ci_if_error enabled, the coverage on our main branch became incorrect, as only a portion of the required coverage data was uploaded.

I would suggest a new optional argument for codecov-action allowing a given number of retries and an inter-retry cooldown to be specified.
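
For illustration only, such an argument might look like the sketch below; these inputs are hypothetical and do not exist in the action today:

      - uses: codecov/codecov-action@v3
        with:
          fail_ci_if_error: true
          # hypothetical inputs, shown only to illustrate the suggestion above
          upload_retries: 3
          retry_cooldown_seconds: 30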

As a workaround, instead of performing the coverage upload in the same job as the build and test, it can be split out into a separate job. The upload-artifact action could be used to store the raw coverage data as an artifact, which a later codecov job would retrieve and upload. If the codecov upload failed, only the failed codecov job would need to be rerun. That job is just the upload, so rerunning it avoids repeating any build/test steps, saving many GitHub runner minutes.

eivindjahren commented 1 year ago

We have exactly the same behavior in https://github.com/equinor/ert and an option to retry on connection failures would be awesome.

LucasXu0 commented 1 year ago

What we ended up doing is using a retry mechanism like https://github.com/Wandalen/wretry.action to retry the upload. Setting fail_ci_if_error to false is not really an option if you care about the coverage reports.

Using this retry action in my project significantly reduced the failure count. It serves as a workaround for the time being.

imnasnainaec commented 1 year ago

@GCHQDeveloper314 Would you mind sharing the workflow with your solution? I've been able to use https://github.com/actions/upload-artifact in one job that generates a coverage .xml file and https://github.com/actions/download-artifact in another job to retrieve the file and upload it to Codecov, but Codecov says it's an "unusable report".

imnasnainaec commented 1 year ago

I figured it out. I needed to check out the repository before uploading the coverage report:

jobs:
  test_coverage:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run test-frontend:coverage
        env:
          CI: true
      - name: Upload coverage artifact
        uses: actions/upload-artifact@v3
        with:
          if-no-files-found: error
          name: coverage
          path: coverage/clover.xml
          retention-days: 7

  upload_coverage:
    needs: test_coverage
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Download coverage artifact
        uses: actions/download-artifact@v3
        with:
          name: coverage
      - name: Upload coverage report
        uses: codecov/codecov-action@v3
        with:
          fail_ci_if_error: true
          files: clover.xml
          flags: frontend
          name: Frontend
GCHQDeveloper314 commented 1 year ago

@imnasnainaec At the time I suggested that workaround here, I hadn't implemented it yet. When I did, I also found that a repo checkout is required by Codecov; I believe this is because it needs the git history. My solution can be seen here - it is very similar to what you've posted above. With this approach, if the Codecov upload fails, only a single step (which takes under 1 minute) needs to be rerun to retry the upload - no need to rerun any tests, which saves us many minutes.

ReenigneArcher commented 11 months ago

This issue occurs because the GitHub API takes a little while to update with new events. I have an action that relies on the GitHub events API where I experienced a similar situation; adding a simple 10-second wait with a retry loop solved it. In most cases it works on the first attempt. Just for reference: https://github.com/LizardByte/setup-release-action/pull/40

Please add retry logic to this action.

[2023-12-11T15:16:32.395Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}

Tokens/secrets are not an option for open source projects that accept PRs from forks.

Re-running the action also does not solve it. I have one workflow that I have run 7 times, and it has failed to upload every time... likely because my tests complete very quickly, so there's almost no chance the run is visible in the GitHub Actions API at that point.

ssbarnea commented 8 months ago

It seems that the need for an automatic retry with exponential backoff is more urgent these days. I have seen:

The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.

This is as clear as it sounds, and implementing a retry in Python is not really hard.
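
In the meantime, a rough workflow-level workaround is to drive the standalone uploader binary from a shell loop with exponential backoff. This is only a sketch: it assumes the Linux uploader from uploader.codecov.io, its documented -Z / -f / -t flags, a report at coverage.xml, and a CODECOV_TOKEN secret; verify the flags against the uploader docs for the version you download:

      - name: Upload coverage with exponential backoff
        run: |
          curl -Os https://uploader.codecov.io/latest/linux/codecov
          chmod +x codecov
          delay=30
          for attempt in 1 2 3 4; do
            # -Z makes the uploader exit non-zero on failure so the loop can detect it
            if ./codecov -Z -f coverage.xml -t "${{ secrets.CODECOV_TOKEN }}"; then
              exit 0
            fi
            if [ "${attempt}" -lt 4 ]; then
              echo "Upload attempt ${attempt} failed; retrying in ${delay}s"
              sleep "${delay}"
              delay=$((delay * 2))  # 30s, 60s, 120s between attempts
            fi
          done
          exit 1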

tomage commented 7 months ago

+1 - it wastes a lot of time when the upload fails and the entire job must be re-run (which usually involves running all the unit tests).

eivindjahren commented 7 months ago

I have attempted to add a 30 second sleep and retry and it simply isn't enough. If a retry is to be added, it needs to be more than that to work consistently.

ReenigneArcher commented 7 months ago

I have attempted to add a 30 second sleep and retry and it simply isn't enough. If a retry is to be added, it needs to be more than that to work consistently.

In v4 you get a more detailed error message. But basically tokenless uploads are failing more often due to GitHub API rate limits.

error - 2024-04-16 13:51:14,366 -- Commit creating failed: {"detail":"Tokenless has reached GitHub rate limit. Please upload using a token: https://docs.codecov.com/docs/adding-the-codecov-token. Expected available in 459 seconds."}

It seems like the action uses a central, Codecov-owned GitHub API token. That is likely because the built-in GITHUB_TOKEN doesn't have access to the events scope (https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token), and using a GitHub App (https://docs.github.com/en/actions/security-guides/automatic-token-authentication#granting-additional-permissions) still wouldn't work for fork PRs, as far as I understand.

In any event, the information required for a reliable retry is already available in the logs. In my example, waiting ~8 minutes is better than rebuilding the project from scratch, which can sometimes take ~30 minutes. Even just avoiding the manual "re-run" click would be worth it.

Dreamsorcerer commented 7 months ago

But basically tokenless uploads are failing more often due to GitHub API rate limits.

Given that the latest version requires a token, this is not the issue that most people are reporting here, and it is possibly not worth the extra work to extract the retry time from the message. The primary issue is with Codecov's servers themselves, which occasionally fail to accept an upload. As shown above (https://github.com/codecov/codecov-action/issues/926#issuecomment-1964679977), the server usually suggests retrying after around 30 seconds. This issue is just asking Codecov to follow the advice from their own server.

eivindjahren commented 5 months ago

So I believe this behavior recently changed a bit. You now get the following if you use forks:

info - 2024-05-27 07:27:20,004 -- ci service found: github-actions
info - 2024-05-27 07:27:20,294 -- The PR is happening in a forked repo. Using tokenless upload.
info - 2024-05-27 07:27:20,478 -- Process Commit creating complete
error - 2024-05-27 07:27:20,479 -- Commit creating failed: {"detail":"Tokenless has reached GitHub rate limit. Please upload using a token: https://docs.codecov.com/docs/adding-the-codecov-token. Expected available in 393 seconds."}

Note that the link is broken and should really point here: https://docs.codecov.com/docs/codecov-uploader#supporting-token-less-uploads-for-forks-of-open-source-repos-using-codecov. It turns out that Codecov has a shared pool of GitHub resources that gets rate limited, so if you have retry logic implemented, please be considerate about using those shared resources. Also, if Codecov could give some guidance on how to avoid tokenless uploads in the forked-repo workflow, that would be great.

ReenigneArcher commented 5 months ago

Since they now provide the expected time ("Expected available in 393 seconds."), maybe they could handle the retry logic using the time provided in their failure response.

Maybe they could allow us to use a public upload token in PRs, one that only has permission to add coverage information for repos/branches that aren't the origin one.

thomasrockhu-codecov commented 5 months ago

@ReenigneArcher thanks for your message. Initially, we tried to do retries after the expected time. However, since this is a blocking call, CI runs could potentially run for hours if they missed the window to upload.

That said, we are making changes to our system to decrease the number of GitHub API calls which will hopefully alleviate some of this pain.

Also, I am looking into adding retries as a feature to the Action. However, this may be slated for later next quarter.

Dreamsorcerer commented 5 months ago

However, since this is a blocking call, CI runs could potentially run for hours if they missed the window to upload.

That's a fair concern, but in most cases (all that I've seen?) the retry can happen after 30 seconds or so, while restarting the CI process (for many of us) takes more like 15+ minutes and requires manually rerunning it (versus it happening automatically without supervision).

The retry logic could be opt-in for those concerned that it might use too many minutes (though it should obviously also be capped at a sensible or configurable time limit).

eivindjahren commented 5 months ago

@Dreamsorcerer Yeah, come to think of it, something extremely wasteful seems to be happening. I just realized that triggering an upload of coverage data shouldn't consume anything from the GitHub API! Is it fetching the commit every time a coverage upload happens? We upload 4 reports for each PR, so 3 of those uploads should not need to interact with GitHub.

It seems like you could have the GitHub Action upload whatever information you need to track, and then fetch what you need from GitHub only when it is requested, since interacting with coverage data happens far less frequently than coverage report uploads.

ReenigneArcher commented 5 months ago

Also, if codecov is trying to use the events API, commits may not even appear there for up to 6 hours. I discovered that in another project of mine where I was using the events API.

https://docs.github.com/en/rest/activity/events?apiVersion=2022-11-28#list-repository-events

Dreamsorcerer commented 1 month ago

It seems there is a retry in there now, but it happens too fast. I'm getting 503 errors today, and it makes all 3 attempts in about 4 seconds:

warning - 2024-09-20 17:30:38,120 -- Response status code was 503. --- {"retry": 0}
warning - 2024-09-20 17:30:38,121 -- Request failed. Retrying --- {"retry": 0}
warning - 2024-09-20 17:30:41,107 -- Response status code was 503. --- {"retry": 1}
warning - 2024-09-20 17:30:41,107 -- Request failed. Retrying --- {"retry": 1}
warning - 2024-09-20 17:30:42,337 -- Response status code was 503. --- {"retry": 2}
warning - 2024-09-20 17:30:42,337 -- Request failed. Retrying --- {"retry": 2}

It seems like it should spread the retries out over a minute or two.

tagatac commented 1 month ago

Ah, cool. This is progress. It looks like the short backoff is an intentional decision:

Being an interactive tool I don't think we should use too big of a backoff period

_Originally posted by @giovanni-guidini in https://github.com/codecov/codecov-cli/pull/210#discussion_r1270728462_

I'll just link some of the related PRs in case any of the authors or reviewers have any opinions about increasing the backoff:

  1. https://github.com/codecov/codecov-cli/pull/210
  2. https://github.com/codecov/codecov-cli/pull/382
  3. https://github.com/codecov/codecov-cli/pull/452

@giovanni-guidini @scott-codecov @joseph-sentry @adrian-codecov

Dreamsorcerer commented 1 month ago

Well, as mentioned above, the correct time to retry at is included in the server response, so ideally it should just retry at that time (usually about 30 seconds).

thomasrockhu-codecov commented 1 month ago

@Dreamsorcerer To note, this is not always 30 seconds; in fact, it is often closer to 1 hour. This caused issues before, where retries would block CI for over 6 hours. We are making other improvements so that we won't have to block for an hour.

Dreamsorcerer commented 1 month ago

If it's more than a couple of minutes, obviously don't retry. However, 100% of the time I've seen CI fail due to this error, it has worked after rerunning the CI, which takes a few minutes, so I can't say I've ever seen such a long delay.