actions / upload-artifact


HTTP 400 error on uploading artifacts. #233

Closed jimvandervoort closed 11 months ago

jimvandervoort commented 3 years ago

Describe the bug

Our CI sometimes returns HTTP 400 when using upload-artifact.


Run/Repo Url Tried to reproduce our setup in jimvandervoort/gh-actions-error. No luck so far.

How to reproduce Not sure yet, our setup looks like this:

We build multiple themes from our frontend. These themes may or may not produce some of the same files across builds (some themes only add files like new image resources, while others modify JS, causing a different webpack bundle to be created).

We build multiple themes in a matrix job.

Assets are all built under public/theme-name/dist/js/app/.[hash].js.

The CI uploads all JS and CSS files into one folder: dist/js/app.[hash].js, removing the theme name from the path. The idea is that if two themes build the same file, only one copy is included (since the file's hash is in the filename).
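That merge step can be sketched in shell (paths, theme names, and file contents below are illustrative, not our actual build):

```shell
#!/bin/sh
# Illustrative reproduction of the merge step: two themes emit the same
# hashed bundle, and a third file is unique to one theme. Copying with
# -n (no-clobber) keeps a single copy of each hashed filename.
set -e
mkdir -p public/theme-a/dist/js public/theme-b/dist/js
echo 'shared bundle' > public/theme-a/dist/js/app.abc123.js
echo 'shared bundle' > public/theme-b/dist/js/app.abc123.js  # same hash, same content
echo 'theme-b only'  > public/theme-b/dist/js/app.def456.js

mkdir -p dist
for theme in public/*/; do
  # || true: newer cp versions exit nonzero when -n skips an existing file
  cp -Rn "${theme}dist/." dist/ || true
done
ls dist/js   # app.abc123.js and app.def456.js, each exactly once
```

Because the hash is derived from the content, a skipped duplicate is guaranteed to be byte-identical to the copy already in place.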

Additional context

Error reported in run (I replaced the name of a specific directory with some-theme since the original name included the name of a private customer):

Run actions/upload-artifact@v2.2.4
  with:
    name: dist
    path: public/some-theme/
    if-no-files-found: warn
With the provided path, there will be 158 files uploaded
An error has been caught http-client index 1, retrying the upload
Error: read ECONNRESET
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:201:27) {
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'
}
Exponential backoff for retry #1. Waiting for 6321.394493615066 milliseconds before continuing the upload at offset 0
An error has been caught http-client index 0, retrying the upload
Error: Client has already been disposed.
    at HttpClient.request (/home/runner/work/_actions/actions/upload-artifact/v2.2.4/dist/index.js:5694:19)
    at HttpClient.sendStream (/home/runner/work/_actions/actions/upload-artifact/v2.2.4/dist/index.js:5655:21)
    at UploadHttpClient.<anonymous> (/home/runner/work/_actions/actions/upload-artifact/v2.2.4/dist/index.js:7104:37)
    at Generator.next (<anonymous>)
    at /home/runner/work/_actions/actions/upload-artifact/v2.2.4/dist/index.js:6834:71
    at new Promise (<anonymous>)
    at module.exports.608.__awaiter (/home/runner/work/_actions/actions/upload-artifact/v2.2.4/dist/index.js:6830:12)
    at uploadChunkRequest (/home/runner/work/_actions/actions/upload-artifact/v2.2.4/dist/index.js:7102:46)
    at UploadHttpClient.<anonymous> (/home/runner/work/_actions/actions/upload-artifact/v2.2.4/dist/index.js:7139:38)
    at Generator.next (<anonymous>)
Exponential backoff for retry #1. Waiting for 6729.072137535379 milliseconds before continuing the upload at offset 0
Total file count: 158 ---- Processed file #56 (35.4%)
Finished backoff for retry #1, continuing with upload
Finished backoff for retry #1, continuing with upload
Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/on6vdJxD0Ms5Gsh156e7SduyglfNEuXgSLAZqo92LFT9Lo3GST/_apis/resources/Containers/12076920?itemPath=dist%2Fdist%2Fimg%2Fmain-square.e97b152dc5de42102f76.png
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "3863658e-c592-4c54-82b9-c37859782ce9",
  "activityid": "bd22899a-1007-404e-afc7-055c26a0dabc",
  "x-tfs-session": "bd22899a-1007-404e-afc7-055c26a0dabc",
  "x-vss-e2eid": "bd22899a-1007-404e-afc7-055c26a0dabc",
  "x-vss-senderdeploymentid": "a07ab14e-025a-39c3-8d53-788cd7ce207f",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 4D3FB11DE1B14886A37DA1CA2BE0853F Ref B: BLUEDGE1821 Ref C: 2021-06-28T20:45:08Z",
  "date": "Mon, 28 Jun 2021 20:45:08 GMT"
}
###### End Diagnostic HTTP information ######
Warning: Aborting upload for /home/runner/work/squares/squares/public/some-theme/dist/img/main-square.e97b152dc5de42102f76.png due to failure
Error: aborting artifact upload
Total size of all the files uploaded is 12198799 bytes
Finished uploading artifact dist. Reported size is 12198799 bytes. There were 86 items that failed to upload
Error: An error was encountered when uploading dist. There were 86 items that failed to upload.
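The retry behavior visible in the log (a roughly 6-second wait before retry #1, doubling on later retries) looks like exponential backoff with jitter. A rough sketch of that scheme (the 4000 ms base and the jitter range are assumed values for illustration, not the action's actual constants):

```shell
#!/bin/sh
# Sketch of exponential backoff with jitter, the pattern the log suggests:
# delay = base * 2^(retry-1) + random jitter.
backoff_ms() {
  retry=$1
  base_ms=4000
  jitter=$(awk 'BEGIN { srand(); printf "%d", rand() * 4000 }')
  echo $(( base_ms * (1 << (retry - 1)) + jitter ))
}

backoff_ms 1   # 4000-7999 ms; the log's ~6321 ms for retry #1 fits this shape
backoff_ms 2   # 8000-11999 ms
```

The jitter spreads retries from parallel matrix jobs apart so they do not hammer the upload endpoint in lockstep.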
99 commented 3 years ago

Seeing the same issue; it happens on both Mac and self-hosted runners.

jimvandervoort commented 3 years ago

@99 I have been unable to solve this issue; I ended up uploading my artifacts to an S3 bucket using the AWS CLI.

DmitrySharov commented 2 years ago

The error is still current:

Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/TpHcyMMFQmnpS2gAIwipfQlCeRFQYvYjaFihb10o3AoBbT48sZ/_apis/resources/Containers/17554032?itemPath=.net-app%5Cwwwroot%5Cassets%5Cjs%5Cdemo.js
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "a1264ee8-9811-47fd-b621-14c5eb188cab",
  "activityid": "d82c5424-94c2-4f7f-8462-467a55705438",
  "x-tfs-session": "d82c5424-94c2-4f7f-8462-467a55705438",
  "x-vss-e2eid": "d82c5424-94c2-4f7f-8462-467a55705438",
  "x-vss-senderdeploymentid": "a07ab14e-025a-39c3-8d53-788cd7ce207f",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 24740862B63345E297D117EC69354365 Ref B: BL2EDGE1817 Ref C: 2021-12-08T11:22:03Z",
  "date": "Wed, 08 Dec 2021 11:22:02 GMT"
}
###### End Diagnostic HTTP information ######
sgvictorino commented 2 years ago

Having the same problem on windows-latest.

olebhartvigsen commented 2 years ago

I have the same error also, windows-latest:

Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/GzWTsAc8Tfkk7NAzxC0CrvDFhIPAaUpAFRbZzar8Syjs8JLKSw/_apis/resources/Containers/3142291?itemPath=ASP-app%5CApp_Plugins%5CExercise.css
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "dbcd0f50-a18e-4b5a-9184-06101c692ab9",
  "activityid": "32445705-393f-4869-91c5-b1a66e43f437",
  "x-tfs-session": "32445705-393f-4869-91c5-b1a66e43f437",
  "x-vss-e2eid": "32445705-393f-4869-91c5-b1a66e43f437",
  "x-vss-senderdeploymentid": "6be79bb9-f6f7-24a7-0b27-e718f2ab4200",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 7B60BF0CB4314C6497D704CAE0154C8F Ref B: PAOEDGE0616 Ref C: 2021-12-10T09:11:05Z",
  "date": "Fri, 10 Dec 2021 09:11:05 GMT"
}
###### End Diagnostic HTTP information ######
sgvictorino commented 2 years ago

Following the README's workaround for throttled requests, I compressed the folder before uploading, and that made it work for me, although the zip ended up nested (https://github.com/actions/upload-artifact/issues/39).
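The workaround amounts to archiving the folder first, so the action uploads one large file instead of issuing one HTTP request per file. A sketch with illustrative paths:

```shell
#!/bin/sh
# Sketch of the "compress before upload" workaround (paths are illustrative):
# one archive means one upload request instead of one request per file.
set -e
mkdir -p public/some-theme/dist
echo 'asset' > public/some-theme/dist/app.abc123.js
tar -czf dist.tar.gz -C public/some-theme dist
tar -tzf dist.tar.gz
```

The artifact step then uploads `dist.tar.gz` as a single file, and a later job downloads and extracts it. The action may additionally wrap the archive in its own zip, which is the nesting described in issue #39.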

Perhaps the server sends us 400 errors in place of 429s for whatever reason. Maybe this info about my folder upload attempts can help someone who knows about the rate limits:

nic11 commented 2 years ago

It appears that this may be caused by empty files. See this workflow run:

https://github.com/skyrim-multiplayer/skymp/runs/4490126560?check_suite_focus=true

Commit: https://github.com/skyrim-multiplayer/skymp/pull/629/commits/5ae9d058a6bcac6c729875be5d3c450f97dacc7d

As the job log shows, the action was failing, but after removing the empty file the artifact upload succeeded.
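If empty files are indeed the trigger, one way to check (or work around) this is to look for zero-byte files under the upload path before the upload step. A sketch with an illustrative directory:

```shell
#!/bin/sh
# List zero-byte files under the path to be uploaded; per this comment, these
# were the suspected trigger on the affected action version.
set -e
mkdir -p build
echo 'data' > build/app.js
: > build/empty.txt          # create a zero-byte file
find build -type f -empty    # prints build/empty.txt
# To work around rather than just detect:
# find build -type f -empty -delete
```

Run as a `run:` step just before `actions/upload-artifact`, this makes the failing file obvious in the job log.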

Another example:

The log diff shows that both the runner image and this action were updated:

click to expand

```diff
--- 1	2021-12-11 03:54:59.296217660 +0300
+++ 2	2021-12-11 03:55:15.176151694 +0300
@@ -1,8 +1,8 @@
 Found online and idle hosted runner in the current repository's organization account that matches the required labels: 'windows-2019'
 Waiting for a Hosted runner in the 'organization' to pick this job...
 Job is waiting for a hosted runner to come online.
-Job is about to start running on the hosted runner: GitHub Actions 5 (hosted)
-Current runner version: '2.285.0'
+Job is about to start running on the hosted runner: GitHub Actions 2 (hosted)
+Current runner version: '2.285.1'
 ##[group]Operating System
 Microsoft Windows Server 2019
 10.0.17763
@@ -10,9 +10,9 @@
 ##[endgroup]
 ##[group]Virtual Environment
 Environment: windows-2019
-Version: 20211229.2
-Included Software: https://github.com/actions/virtual-environments/blob/win19/20211229.2/images/win/Windows2019-Readme.md
-Image Release: https://github.com/actions/virtual-environments/releases/tag/win19%2F20211229.2
+Version: 20211207.2
+Included Software: https://github.com/actions/virtual-environments/blob/win19/20211207.2/images/win/Windows2019-Readme.md
+Image Release: https://github.com/actions/virtual-environments/releases/tag/win19%2F20211207.2
 ##[endgroup]
 ##[group]Virtual Environment Provisioner
 1.0.0.0-master-20211123-1
@@ -38,4 +38,4 @@
 Getting action download info
 Download action repository 'actions/checkout@v2' (SHA:ec3a7ce113134d7a93b817d10a8272cb61118579)
 Download action repository 'suisei-cn/actions-download-file@v1' (SHA:40e036cbce6bfe6f2500eebca35614bb89308bd3)
-Download action repository 'actions/upload-artifact@v2' (SHA:27121b0bdffd731efa15d66772be8dc71245d074)
+Download action repository 'actions/upload-artifact@v2' (SHA:da838ae9595ac94171fa2d4de5a2f117b3e7ac32)
```
nic11 commented 2 years ago

UPD

Looks like this is caused by the recent release. Reverting the action to 27121b0 (v2.2.4) stopped the failures.

Run: https://github.com/skyrim-multiplayer/skymp/runs/4490672804?check_suite_focus=true

See PR: https://github.com/skyrim-multiplayer/skymp/pull/630

sgvictorino commented 2 years ago

Thanks @nic11! Reverting to v2.2.4 worked for me too.

DmitrySharov commented 2 years ago

Thanks @nic11! Great job. It helped me too

nic11 commented 2 years ago

Turns out there was a separate issue for this, though I googled this one :)

Fixed in #281

NAJ8ry commented 1 year ago

I had this problem, and the solution for me was to introduce a waiting period.

```yaml
      - name: Setup the database within Docker
        run: |
          echo 'Starting the db'
          docker-compose -f ./docker_files/docker-compose_dbOnly.yml -p mynameapp up -d
      - name: Sleep for 15 seconds
        run: sleep 15s
        shell: bash
      - name: Unit tests - pytest
        env:
          TEST_POSTGRES: "postgresql://username:password@localhost:5432/test_temp"
          AUTO_TEST: true
        run: pytest
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: python-app
          path: .
```

The workflow was correct and worked prior to the introduction of Docker. My guess is that a file was locked during the compose step and not released quickly. This seems really odd to me, but it now works.
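A less timing-sensitive alternative to the fixed 15-second sleep is to poll for readiness. A sketch (the `retry` helper is our illustration, not part of the workflow above; the probe would be whatever checks your service, e.g. `pg_isready -h localhost -p 5432` for the Postgres container):

```shell
#!/bin/sh
# Retry helper (sketch): run a readiness probe until it succeeds or we give
# up, instead of sleeping a fixed 15 s and hoping the service is up.
retry() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 1
  done
}

retry 5 true                                   # probe succeeds immediately
retry 2 false || echo "service never came up"  # probe that never succeeds
```

This waits only as long as the service actually needs, and fails the job early with a clear message if it never comes up.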

konradpabjan commented 11 months ago

upload-artifact v4 was released today! I recommend switching over.

https://github.blog/changelog/2023-12-14-github-actions-artifacts-v4-is-now-generally-available/

v4 is a complete rewrite of the artifact actions with a new backend. With v1-v3, uploads would sometimes reach 100% (or close to it) and then stall and fail for mysterious reasons. There was also a small number of transient errors, like the 500s and 400s you see. v4 is more reliable and simpler all around, and the host of issues described in this thread should no longer happen.
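Switching is typically just a version bump on the existing step; for example, the workflow step quoted earlier in this thread would become:

```yaml
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v4
        with:
          name: python-app
          path: .
```

Note that v4 artifacts are not interoperable with the v1-v3 actions, so any paired `actions/download-artifact` steps need the same version bump.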

If you hit any similar issues with v4, please open a new issue.