microsoft / containerregistry

Microsoft Artifact Registry description and related FAQ

`429 Too Many Requests` from GitHub Actions, can't build anything #140

Open clemlesne opened 1 year ago

clemlesne commented 1 year ago

Receiving error 429 Too Many Requests for the last two hours while pulling mcr.microsoft.com/dotnet/aspnet:6.0-jammy.

Date: June 7, 2023, 6:00 PM

Short error:

Error: buildx failed with: ERROR: failed to solve: failed to compute cache key: failed to copy: httpReadSeeker: failed open: unexpected status code https://mcr.microsoft.com/v2/dotnet/aspnet/blobs/sha256:d3b756117bfc66a1c14dcc282f851891ef89aae8f856f4104f5eead552a718f1: 429 Too Many Requests

Long error:

ERROR: failed to solve: failed to compute cache key: failed to copy: httpReadSeeker: failed open: unexpected status code https://mcr.microsoft.com/v2/dotnet/aspnet/blobs/sha256:d3b756117bfc66a1c14dcc282f851891ef89aae8f856f4104f5eead552a718f1: 429 Too Many Requests
Error: buildx failed with: ERROR: failed to solve: failed to compute cache key: failed to copy: httpReadSeeker: failed open: unexpected status code https://mcr.microsoft.com/v2/dotnet/aspnet/blobs/sha256:d3b756117bfc66a1c14dcc282f851891ef89aae8f856f4104f5eead552a718f1: 429 Too Many Requests
ptr727 commented 1 year ago

Similar issue here: running a matrix build, and builds fail with error 429:

ERROR: failed to solve: mcr.microsoft.com/dotnet/sdk:7.0-alpine: failed to copy: httpReadSeeker: failed open: unexpected status code https://mcr.microsoft.com/v2/dotnet/sdk/manifests/sha256:9efa4cb38fb3b957595b4dd60a028044a1f7750d058405ab428153c3aa30ec01: 429 Too Many Requests
Error: buildx failed with: ERROR: failed to solve: mcr.microsoft.com/dotnet/sdk:7.0-alpine: failed to copy: httpReadSeeker: failed open: unexpected status code https://mcr.microsoft.com/v2/dotnet/sdk/manifests/sha256:9efa4cb38fb3b957595b4dd60a028044a1f7750d058405ab428153c3aa30ec01: 429 Too Many Requests
AndreHamilton-MSFT commented 1 year ago

@clemlesne we are actively working to reduce some of the throttling you have been noticing and will update once we have more to share.

benmccallum commented 11 months ago

We're getting this too.

Unfortunately, Docker layer caching with the gha backend (GitHub Actions cache) just isn't feasible given GHA's 10 GB cache limit and Docker images with layers that run to several GB each.

Patiently waiting for the S3-backed caching and hoping we can make do for now. 🙏
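
For anyone whose images do fit under that limit, a minimal sketch of the gha cache backend with docker/build-push-action (the tag and context here are illustrative, not from this thread):

    - uses: docker/setup-buildx-action@v3
    - uses: docker/build-push-action@v5
      with:
        context: .
        push: false
        tags: myapp:ci                        # illustrative tag
        cache-from: type=gha
        cache-to: type=gha,mode=max           # mode=max also exports intermediate layers

On a cache hit this avoids re-downloading the base layers, although buildx still contacts mcr.microsoft.com to resolve the FROM manifest, so it reduces rather than eliminates the 429s.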

clemlesne commented 10 months ago

Builds are still failing regularly from my managed GitHub Actions runners.

telia-ankita commented 8 months ago

Hi team, we are also facing the same issue. Has anyone found a solution?

AndreHamilton-MSFT commented 5 months ago

We are working on ways to reduce the likelihood of this occurring.

perSjov commented 5 months ago

Still happening today

bnneupart commented 5 months ago

We are also getting build failures because of this...

alensindicic commented 5 months ago

This is still an issue.

ptr727 commented 5 months ago

Also getting buildx failed with: ERROR: failed to solve: error writing layer blob: maximum timeout reached

I resorted to using max-parallel: 4 in the matrix strategy; builds now take forever, but they seem to be more reliable.
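
For reference, that workaround is just the strategy-level max-parallel setting; a minimal sketch (job name and matrix values are illustrative):

    jobs:
      build:
        runs-on: ubuntu-latest
        strategy:
          max-parallel: 4                          # limit concurrent matrix jobs so fewer simultaneous pulls hit mcr.microsoft.com
          matrix:
            image: [aspnet-6.0, sdk-7.0, sdk-8.0]  # illustrative matrix values
        steps:
          - run: echo "building ${{ matrix.image }}"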

boukeversteegh commented 5 months ago

Hello, our build pipelines are all blocked due to 429s on the container registry.

az webapp deployment slot swap --slot staging --name **** --resource-group ****

Warning: Unable to fetch all az cli versions, please report it as an issue on https://github.com/Azure/CLI/issues. Output: Ref A: 4237CA5349C14533AAC450D8F8CB4763 Ref B: DM2EDGE0507 Ref C: 2024-05-08T08:31:59Z , Error: SyntaxError: Unexpected token 'R', "Ref A: 423"... is not valid JSON
Starting script execution via docker image mcr.microsoft.com/azure-cli:2.59.0
Error: Error: Unable to find image 'mcr.microsoft.com/azure-cli:2.59.0' locally
docker: Error response from daemon: error parsing HTTP 429 response body: invalid character 'R' looking for beginning of value: "Ref A: 34A2CDDED8114A60AB26872384E74885 Ref B: DM2EDGE0912 Ref C: 2024-05-08T08:31:59Z".

ameya-karbonhq commented 5 months ago

Hello, we are also experiencing the same issue and it is blocking our deployments.

Starting script execution via docker image mcr.microsoft.com/azure-cli:2.55.0
Error: Error: Unable to find image 'mcr.microsoft.com/azure-cli:2.55.0' locally
2.55.0: Pulling from azure-cli
96526aa774ef: Pulling fs layer
430548f4d4bf: Pulling fs layer
9ae8a48eae03: Pulling fs layer
2d30bba99930: Pulling fs layer
3d288dfecc47: Pulling fs layer
2a58a5c1116a: Pulling fs layer
4f4fb700ef54: Pulling fs layer
2d30bba99930: Waiting
3d288dfecc47: Waiting
2a58a5c1116a: Waiting
4f4fb700ef54: Waiting
docker: error pulling image configuration: download failed after attempts=1: error parsing HTTP 429 response body: invalid character 'R' looking for beginning of value: "Ref A: 657DA19E112344D5A432995949A2CA40 Ref B: DM2EDGE1016 Ref C: 2024-05-08T08:36:29Z".
See 'docker run --help'.

cleaning up container...
grzesuav commented 5 months ago

Observed the same issue from an AKS cluster: https://github.com/Azure/AKS/issues/4279

lfraile commented 5 months ago

Hello, one more here with 429 errors :(

grzesuav commented 5 months ago

In the AKS issue I created, they acknowledge there is a throttling issue in the centralus region.

osbash commented 5 months ago

I am experiencing the same thing. This needs to be resolved immediately as it has completely put a stop to our deployments.

jay-pawar-mastery commented 5 months ago

I am seeing ERROR: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests as well.

dejancg commented 5 months ago

Another one here, since today.

lol768 commented 5 months ago

Also experiencing this :(

vitalykarasik commented 5 months ago

It seems that caching should be the answer to this issue, but I'm not sure which method I should use in my GitHub workflow. I'm using mcr.microsoft.com both for "dotnet test" and as the base image for my .NET build, i.e.:

docker run --rm -v $(pwd):/app -w /app mcr.microsoft.com/dotnet/sdk:6.0 dotnet test testdir
docker build .
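
Not an official answer, but one minimal sketch is to cache the SDK image itself with actions/cache plus docker save/load, so the runner only pulls from mcr.microsoft.com on a cache miss (paths and cache keys are illustrative):

    - uses: actions/cache@v4
      with:
        path: /tmp/dotnet-sdk-6.0.tar
        key: dotnet-sdk-6.0
    - name: Restore or pull the SDK image
      run: |
        if [ -f /tmp/dotnet-sdk-6.0.tar ]; then
          docker load -i /tmp/dotnet-sdk-6.0.tar          # reuse the cached image tarball
        else
          docker pull mcr.microsoft.com/dotnet/sdk:6.0    # only hits MCR on a cache miss
          docker save mcr.microsoft.com/dotnet/sdk:6.0 -o /tmp/dotnet-sdk-6.0.tar
        fi
    - run: docker run --rm -v $(pwd):/app -w /app mcr.microsoft.com/dotnet/sdk:6.0 dotnet test testdir

This covers the docker run test step; the docker build step still resolves its base image from MCR unless it is cached the same way or via buildx layer caching, and the tarball has to fit inside GHA's 10 GB cache limit.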

da-zu commented 5 months ago

Me too, our CI/CD pipelines are failing randomly due to 429s. Please fix ASAP! :)

codehunter13 commented 5 months ago

Same problem over here. Pipelines are often failing with this error.

npiskarev commented 5 months ago

ERROR: failed to solve: mcr.microsoft.com/dotnet/aspnet:8.0: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests

The same issue is happening right now.

mentallabyrinth commented 5 months ago

Using Azure Pipelines, experiencing the same issue:

#3 [internal] load metadata for mcr.microsoft.com/dotnet/aspnet:8.0
#3 ERROR: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests
enf0rc3 commented 5 months ago

Using GitHub Actions and also having this issue intermittently. We have about 5 Docker containers being built in one GitHub action (all from the same base image); what is the best approach to caching?

   2 | >>> FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
   3 |     WORKDIR /app
   4 |     
--------------------
ERROR: failed to solve: mcr.microsoft.com/dotnet/sdk:8.0: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests
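
Not from the thread, but since all of the containers share the same base image, one sketch (assuming the runner's default docker builder rather than a docker-container buildx builder) is to pull that base image once per job and let the individual builds reuse the local copy, so mcr.microsoft.com is only hit once instead of once per image; service names and paths are illustrative:

    - name: Pull the shared base image once
      run: docker pull mcr.microsoft.com/dotnet/sdk:8.0
    - name: Build the services against the local copy
      run: |
        for svc in service-a service-b service-c; do   # illustrative service list
          docker build -t "$svc" "./$svc"
        done

If the workflow uses docker/build-push-action with the docker-container driver, that driver keeps its own image store and will still resolve the base image remotely; the gha cache backend or the max-parallel limit mentioned earlier in the thread are closer fits there.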
AndreHamilton-MSFT commented 5 months ago

We are in the process of rolling out new hardware that will resolve this issue. We are already seeing decreased throttling in the last few hours. It will take a while to roll this out globally, but you should already be seeing improvements if you are located in the central US regions.

lol768 commented 5 months ago

Many thanks for acknowledging this, and for the detail on the plans to resolve it, @AndreHamilton-MSFT

lfraile commented 5 months ago

Thank you @AndreHamilton-MSFT !!

stevef51 commented 5 months ago

So we use Azure DevOps for our CI builds. We have seen this issue over the last several weeks, and I am still getting build issues today. At the moment I am having to manually re-run builds and cross my fingers that I don't get the Too Many Requests error; it almost seems like a 50:50 chance on any given build.


@AndreHamilton-MSFT when you say "should be seeing improvements if you are located in the central us regions" - I am guessing that is referring to the location of our build machines; we are using Azure VMs in Central US.

AndreHamilton-MSFT commented 5 months ago

@stevef51 correct. We are still in the process of rolling out the new hardware, and I suspect your traffic landed on older hardware that is more prone to throttling. Going to see if I can make some further tweaks to reduce the overall likelihood of throttling. Until we roll out globally you may still see some throttling, but we are working to complete the rollout as quickly and safely as possible.

bsripuram commented 4 months ago

@AndreHamilton-MSFT, we are still facing this issue. May I know the ETA for completing the rollout globally?

AndreHamilton-MSFT commented 3 months ago

@bsripuram we are mostly rolled out. How have things been looking over the last week?