cwolf24 closed this issue 1 year ago
Hi @cwolf24 - we already do retries, so these would be failures after retries. Is it always occurring on the same file?
@sethvargo Unfortunately it's hard to say whether it's related to a specific file, because the files (the Allure test report) are generated dynamically beforehand...
As far as I can see, when it does happen, it happens with one of the *.json files from a subfolder (allure-report/test-integration/data/test-cases/939ed1e9943e7b51.json),
always after 2:11 minutes...
I just generated an Allure report locally, and there don't seem to be any large or broken files.
Can I enable debug output, or is there something else that would help analyze this issue?
@sethvargo I just re-ran a failed upload-reports job, and this time it failed after a few seconds with:
Error: google-github-actions/upload-cloud-storage failed with: retry limit exceeded -
node:internal/process/promises:279
triggerUncaughtException(err, true /* fromPromise */);
^
Error: Retry limit exceeded -
at Upload.attemptDelayedRetry (/home/runner/work/_actions/google-github-actions/upload-cloud-storage/v1/dist/index.js:139:10919)
at Upload.onResponse (/home/runner/work/_actions/google-github-actions/upload-cloud-storage/v1/dist/index.js:139:10345)
at Upload.makeRequestStream (/home/runner/work/_actions/google-github-actions/upload-cloud-storage/v1/dist/index.js:139:10142)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Upload.startUploading (/home/runner/work/_actions/google-github-actions/upload-cloud-storage/v1/dist/index.js:139:7703)
Is this maybe related?
Hi there - here are instructions for enabling the debug output for the complete GitHub Actions workflow run.
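For reference, the relevant switches are the documented GitHub Actions debug secrets, which apply to the whole workflow run and are not specific to this action. A minimal sketch using the GitHub CLI (assuming `gh` is installed and authenticated against the repository):

```shell
# Enable verbose debug logging for subsequent workflow runs.
# ACTIONS_STEP_DEBUG adds per-step debug lines to the job log;
# ACTIONS_RUNNER_DEBUG additionally produces runner diagnostic logs.
gh secret set ACTIONS_STEP_DEBUG --body "true"
gh secret set ACTIONS_RUNNER_DEBUG --body "true"
```

The same can be done in the repository UI under Settings → Secrets; remove the secrets again afterwards, since debug logs are very noisy.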
Is this using GitHub Actions managed runners or self-hosted?
What is the total size of the upload? I think GitHub imposes bandwidth limits (although I cannot find any supporting documentation).
Hi @sethvargo,
thanks for the info. I was able to optimize our upload by splitting it into single test runs, so that not all files are uploaded at the same time. This issue can be closed. Thanks again for your support!
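The split described above can be sketched as a loop that uploads one per-test-run subfolder at a time instead of the whole report in one step. The directory layout and the `upload_dir` placeholder below are assumptions for illustration, not the actual workflow; in practice `upload_dir` would be one invocation of the upload action (or `gcloud storage cp`) per folder:

```shell
#!/usr/bin/env bash
set -eu

# Build a small fake report tree with one subfolder per test run
# (assumption: the real layout is allure-report/<testrun>/...).
report=$(mktemp -d)
for run in test-integration test-unit; do
  mkdir -p "$report/$run/data/test-cases"
  echo '{}' > "$report/$run/data/test-cases/case.json"
done

# Placeholder for the real upload step; here it only records which
# folder would be uploaded.
upload_dir() {
  echo "uploading $1"
}

# Upload each test run separately, so a transient failure only affects
# one small batch instead of the whole ~500-file report.
for dir in "$report"/*/; do
  upload_dir "$dir"
done
```

Smaller batches also make a re-run of a single failed job much cheaper than repeating the entire upload.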
TL;DR
Hello,
unfortunately I sometimes get an uncaught exception while uploading a test report (~500 files); see the log output below for the exception details. The log shows that several files are uploaded before the exception appears. It happens intermittently, maybe 1 in 10 tries.
Do you know if it is possible to enable a retry or handle the exception within your package? It would be awesome to have this job always pass.
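The kind of retry asked about here is essentially an exponential-backoff loop around the upload. As a rough sketch of that pattern (the `flaky_upload` function below is a stand-in that simulates two transient failures; it is not part of the action):

```shell
#!/usr/bin/env bash
set -u

# Stand-in for the real upload command: fails twice, then succeeds.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky_upload() {
  local n
  n=$(( $(cat "$attempts_file") + 1 ))
  echo "$n" > "$attempts_file"
  if [ "$n" -lt 3 ]; then
    echo "upload attempt $n failed" >&2
    return 1
  fi
  echo "upload attempt $n succeeded"
}

# Retry with exponential backoff: wait 1s, 2s, 4s, ... between tries,
# giving up after max_retries attempts.
retry() {
  local max_retries=5 delay=1 i
  for i in $(seq 1 "$max_retries"); do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$max_retries" ]; then
      sleep "$delay"
      delay=$(( delay * 2 ))
    fi
  done
  echo "Retry limit exceeded" >&2
  return 1
}

retry flaky_upload
```

Note that the "Retry limit exceeded" error in the log above means the action's own internal retries were already exhausted, so an outer loop like this only helps with longer-lived transient problems.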
Expected behavior
Uploading files always succeeds.
Observed behavior
The upload fails due to the uncaught exception.
Action YAML
Log output