actions / upload-artifact


Error: Create Artifact Container failed: Artifact storage quota has been hit. Unable to upload any new artifacts #307

Open RachelXGanon opened 2 years ago

RachelXGanon commented 2 years ago

What happened?

Hey, I'm getting this error with this action version: uses: actions/upload-artifact@v2.2.3. More details:

Run actions/upload-artifact@v2.2.3
With the provided path, there will be 1 file uploaded
Create Artifact Container - Error is not retryable
##### Begin Diagnostic HTTP information #####
Status Code: 403
Status Message: Forbidden
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "content-length": "333",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "838e5bfa-4062-4abc-9b4a-eb40822f8af9",
  "activityid":"..."
  "x-tfs-session": "6df9e104-1539-43ae-813d-3458802b68c8",
  "x-vss-e2eid": "6df9e104-1539-43ae-813d-3458802b68c8",
  "x-vss-senderdeploymentid": "...",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": ...",
  "date": "Sun, 13 Mar 2022 05:42:46 GMT"
}
###### End Diagnostic HTTP information ######
##### Begin Diagnostic HTTP information #####
Status Code: 403
Status Message: Forbidden
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "content-length": "333",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "838e5bfa-4062-4abc-9b4a-eb40822f8af9",
  "activityid": "6df9e104-1539-43ae-813d-3458802b68c8",
  "x-tfs-session": "6df9e104-1539-43ae-813d-3458802b68c8",
  "x-vss-e2eid": "6df9e104-1539-43ae-813d-3458802b68c8",
  "x-vss-senderdeploymentid": "1139089e-89fd-ce7a-6851-1ac6328a300e",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 867[14](...)Z",
  "date": "Sun, 13 Mar 2022 05:42:46 GMT"
}
###### End Diagnostic HTTP information ######
Error: Create Artifact Container failed: Artifact storage quota has been hit. Unable to upload any new artifacts

What did you expect to happen?

It worked fine up until now; I'm not sure why it stopped working.

How can we reproduce it?

Can't reproduce; it's a private repository.

Anything else we need to know?

No response

What version of the action are you using?

v2.2.3

What are your runner environments?

Windows

Are you on GitHub Enterprise Server? If so, what version?

No response

RobertDutkiewicz commented 2 years ago

Yesterday I noticed this error in all of our scheduled nightly workflows. Today it doesn't occur anymore; I can't reproduce it.

charusat09 commented 2 years ago

Yesterday I noticed this error in all of our scheduled nightly workflows. Today it doesn't occur anymore; I can't reproduce it.

@RobertDutkiewicz I can tell from experience that this happens when you run a lot of GitHub Actions jobs. It seems like there is a limit on GitHub Actions job executions. I am also facing a similar issue and trying to fix it, but so far I haven't been able to. Thanks.

arati-mohanty commented 2 years ago

I have also been facing this issue since yesterday. Any update on this?

RachelXGanon commented 2 years ago

IMO it's what @charusat09 said: there's a limit, and once you hit it you can't upload more artifacts. I had to delete some very old run artifacts, and about 3~ later the issue resolved itself. But maybe there is a way to increase this limit?

dmitry-midokura commented 1 year ago

Could anyone please share the current limit so we can try to stay within it for our jobs?

alterem commented 1 year ago

How can I get this to work properly? I run into this problem often.

crawler-dev commented 1 year ago

Any new updates? I have the same issue today.

OMally commented 1 year ago

Please guys, is there any resolution to this issue? I've also hit this problem and need to deploy modifications to production for a client.

danymendez commented 1 year ago

I have the same issue. I deleted all old run artifacts, but it seems like it didn't work

OMally commented 1 year ago

My issue cleared up after removing items from other repositories on my account. I deleted as much as I could and after no positive outcome, I left it for the night. When I came back to it the next day, it worked when I attempted a deployment.

It must be noted that the issue took a while to clear up, and I can only assume that GitHub does recalculations on a scheduled basis... and only then allows further additions to your repositories. It isn't comforting when you have timeline pressure, but hopefully this can be improved in the future.

kairui1108 commented 1 year ago

I have the same issue today. How to delete old artifacts?

Jedore commented 1 year ago

I have the same issue today. How to delete old artifacts?

Removing workflow artifacts
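
For a one-off cleanup, here is a minimal sketch using the REST API (OWNER/REPO, the token, and ARTIFACT_ID are placeholders; the artifact ID comes from the list call):

# List artifacts to find their IDs
curl -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <YOUR-TOKEN>" \
  https://api.github.com/repos/OWNER/REPO/actions/artifacts

# Delete a single artifact by ID
curl -L -X DELETE \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <YOUR-TOKEN>" \
  https://api.github.com/repos/OWNER/REPO/actions/artifacts/ARTIFACT_ID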

ADTC commented 1 year ago

Started having this since last week on an organization's private repository. We sparingly use artifact storage because we only run ad-hoc jobs (maybe once or twice a week).

I already deleted all the old artifacts the first time we got this error. I'm in touch with GitHub Support about this, but I'm not sure where the problem is.

I have checked by creating a repo scope token and running this API request (replacing <YOUR-TOKEN> and OWNER/REPO):

curl -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <YOUR-TOKEN>"\
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/OWNER/REPO/actions/artifacts

which shows the following:

{
  "total_count": 0,
  "artifacts": []
}

But still we're having the error upon re-runs of jobs and new jobs, even after re-running a day later. We also see the error 403 Forbidden in the diagnostic information:

##### Begin Diagnostic HTTP information #####
Status Code: 403
Status Message: Forbidden
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "content-length": "333",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "..",
  "activityid": "..",
  "x-tfs-session": "..",
  "x-vss-e2eid": "..",
  "x-vss-senderdeploymentid": "..",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: .. Ref B: .. Ref C: 2023-05-31T03:55:42Z",
  "date": "Wed, 31 May 2023 03:55:41 GMT",
  "connection": "close"
}
###### End Diagnostic HTTP information ######
Error: Create Artifact Container failed: Artifact storage quota has been hit. Unable to upload any new artifacts
##[debug]Node Action run completed with exit code 1

PS: For mission-critical artifacts, we use external storage like an AWS S3 bucket; I suggest you consider it. The transparency of the storage allocation and the control you have over managing the storage are far better than with GitHub's artifact storage, which we only use for non-critical test artifacts.
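
As an illustration only (the bucket name is hypothetical and assumes AWS credentials are already configured in the job), a step's run script can push the build output to S3 with the AWS CLI:

# Upload the build output to S3, namespaced by repo and run ID
aws s3 cp ./dist "s3://my-artifact-bucket/${GITHUB_REPOSITORY}/${GITHUB_RUN_ID}/" --recursive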

Also, if you use GitHub's storage, you can add the retention-days option to the upload-artifact action to override the repository setting with a lower (not higher) value.

OMally commented 1 year ago

@ADTC I wasn't being prescriptive and only communicated my steps to resolve the issue I experienced. Thanks for your explanations though.

nonesky666 commented 1 year ago

Haha

On Wed, 31 May 2023, 17:05 OMally wrote:

@ADTC No further comment needed about this, as I had deleted my post already, but you obviously did not see this.


ADTC commented 1 year ago

FWIW, the solution in my case was for our organization owner to fix the billing settings. Apparently the payment attempt on the card saved on the account was declined, and the "Estimated Storage for the Month" was exceeded. Once the owner updated the card on the account, we ran the job about two hours later and the artifacts were uploaded correctly.

Although, I have a suspicion that the monthly limit could also have simply reset, because it is June 1 today.

I think the real problem here might be that a response as generic as 403 Forbidden is translated by the action or by GitHub into the more specific error Artifact storage quota has been hit, when in fact it could have been something else, like a billing failure due to a declined card, or the monthly quota (which may not be the same as the storage quota) being hit.
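
If you suspect billing rather than leftover artifacts, a quick sanity check is to query the shared-storage billing endpoint; this is only a sketch, assuming a token with permission to read the org's billing data (ORG is a placeholder):

# Show estimated shared storage usage for the current billing cycle
curl -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <YOUR-TOKEN>" \
  https://api.github.com/orgs/ORG/settings/billing/shared-storage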

sgatade commented 1 year ago

a. There is STILL no easy way to see all your artifacts in a single view unless you use the REST API.

b. There is STILL no easy process to delete existing artifacts, unless you go to each and every run and delete them manually OR add a delete-artifacts job to each workflow (a scripted workaround is sketched below).

c. It's frustrating that even after deleting the artifacts (manually or via a delete job), we still have to run workflows after some interval and wait to see whether they succeed or fail.

We will probably end up using Azure Blob Storage actions instead of GitHub's upload/download-artifact actions, i.e. move all the storage to the cloud.
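
For points a and b, here is a rough cleanup sketch with the gh CLI and jq (assumes gh is authenticated, jq is installed, and GNU date is available; OWNER/REPO and the 7-day cutoff are placeholders):

REPO=OWNER/REPO
CUTOFF=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)

# Page through all artifacts, pick the ones created before the cutoff, delete them by ID
gh api "repos/$REPO/actions/artifacts?per_page=100" --paginate \
  | jq -r --arg cutoff "$CUTOFF" '.artifacts[] | select(.created_at < $cutoff) | .id' \
  | while read -r id; do
      gh api -X DELETE "repos/$REPO/actions/artifacts/$id"
    done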

wouterdoublebyte commented 1 year ago

We have done the following:

  1. Artifact & Log retention in project set to 1 day.
  2. retention-days: 1 set in the action
  3. Cleanup for every workflow with geekyeggo/delete-artifact@v2.

This results in 0 artifacts after 1 day. It could be possible that the workflow fails and the cleanup action is not triggered, but then we still have artifacts for a maximum of 1 day.

Here's the kicker:

We have not triggered a workflow for two weeks, and now we suddenly hit the quota on the first try. We do not have any old artifacts (we deleted all of them manually), and obviously no new ones.

Why are we hitting this quota?

SamHerts commented 9 months ago

I am still seeing this issue as well, with similar settings as @wouterdoublebyte.

  1. Artifact & Log retention in project set to 1 day.
  2. retention-days: 1 set in the action
  3. Cleanup for every workflow with artifact deletion script.

Jobs were running successfully for a week, then this error occurred again out of the blue.