clemlesne opened this issue 1 year ago
Similar issue here: running a matrix build, the build fails with a 429 error:
ERROR: failed to solve: mcr.microsoft.com/dotnet/sdk:7.0-alpine: failed to copy: httpReadSeeker: failed open: unexpected status code https://mcr.microsoft.com/v2/dotnet/sdk/manifests/sha256:9efa4cb38fb3b957595b4dd60a028044a1f7750d058405ab428153c3aa30ec01: 429 Too Many Requests
Error: buildx failed with: ERROR: failed to solve: mcr.microsoft.com/dotnet/sdk:7.0-alpine: failed to copy: httpReadSeeker: failed open: unexpected status code https://mcr.microsoft.com/v2/dotnet/sdk/manifests/sha256:9efa4cb38fb3b957595b4dd60a028044a1f7750d058405ab428153c3aa30ec01: 429 Too Many Requests
@clemlesne we are actively working to reduce some of the throttling you have been noticing and will update once we have more information.
We're getting this too.
Unfortunately, Docker layer caching with the gha backend (GitHub Actions cache) is just not feasible given GHA's 10 GB cache limit and Docker images whose layers run to several GB each.
Patiently waiting for the S3-backed caching and hoping we can make do for now. 🙏
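In the meantime, a registry-backed build cache can sidestep the 10 GB GHA cache limit, since buildx can export cache with type=registry. A minimal sketch, assuming a docker/build-push-action step; ghcr.io/your-org/your-app is a placeholder repository:

```yaml
# Hedged sketch: store the buildx cache in a registry instead of the GHA cache.
# ghcr.io/your-org/your-app is a placeholder; substitute your own registry/repo.
- name: Build with registry-backed cache
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/your-org/your-app:latest
    cache-from: type=registry,ref=ghcr.io/your-org/your-app:buildcache
    cache-to: type=registry,ref=ghcr.io/your-org/your-app:buildcache,mode=max
```

Note this trades the GHA size limit for registry storage, and it does not eliminate the initial base-image pull from mcr.microsoft.com, so it reduces rather than removes the 429 exposure.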
Builds are still failing regularly from my managed GitHub Actions runners.
Hi team, we are also facing the same issue. Did anyone find a solution?
We are working on ways to reduce the likelihood of this occurring.
Still happening today
We also got build fails because of this...
This is still an issue.
Also getting buildx failed with: ERROR: failed to solve: error writing layer blob: maximum timeout reached
I resorted to using max-parallel: 4 in the matrix strategy; builds now take forever, but they seem to be more reliable.
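For reference, a minimal sketch of that throttle, with hypothetical job and matrix values:

```yaml
# Hedged sketch: cap concurrent matrix jobs so fewer simultaneous pulls hit MCR.
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4                              # at most 4 matrix jobs run at once
      matrix:
        image: [api, worker, scheduler, frontend]  # placeholder names
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t ${{ matrix.image }} ./${{ matrix.image }}
```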
Hello, our build pipelines are all blocked due to 429s on the container registry.
az webapp deployment slot swap --slot staging --name **** --resource-group ****
Warning: Unable to fetch all az cli versions, please report it as an issue on https://github.com/Azure/CLI/issues. Output: Ref A: 4237CA5349C14533AAC450D8F8CB4763 Ref B: DM2EDGE0507 Ref C: 2024-05-08T08:31:59Z , Error: SyntaxError: Unexpected token 'R', "Ref A: 423"... is not valid JSON Starting script execution via docker image mcr.microsoft.com/azure-cli:2.59.0 Error: Error: Unable to find image 'mcr.microsoft.com/azure-cli:2.59.0' locally docker: Error response from daemon: error parsing HTTP 429 response body: invalid character 'R' looking for beginning of value: "Ref A: 34A2CDDED8114A60AB26872384E74885 Ref B: DM2EDGE0912 Ref C: 2024-05-08T08:31:59Z".
Hello, we are also experiencing the same issue, and it is blocking our deployments.
Starting script execution via docker image mcr.microsoft.com/azure-cli:2.55.0
Error: Error: Unable to find image 'mcr.microsoft.com/azure-cli:2.55.0' locally
2.55.0: Pulling from azure-cli
96526aa774ef: Pulling fs layer
430548f4d4bf: Pulling fs layer
9ae8a48eae03: Pulling fs layer
2d30bba99930: Pulling fs layer
3d288dfecc47: Pulling fs layer
2a58a5c1116a: Pulling fs layer
4f4fb700ef54: Pulling fs layer
2d30bba99930: Waiting
3d288dfecc47: Waiting
2a58a5c1116a: Waiting
4f4fb700ef54: Waiting
docker: error pulling image configuration: download failed after attempts=1: error parsing HTTP 429 response body: invalid character 'R' looking for beginning of value: "Ref A: 657DA19E112344D5A432995949A2CA40 Ref B: DM2EDGE1016 Ref C: 2024-05-08T08:36:29Z".
See 'docker run --help'.
cleaning up container...
Observed the same issue from an AKS cluster: https://github.com/Azure/AKS/issues/4279
Hello, one more here with 429 errors :(
In the AKS issue I created, they acknowledged there is a throttling issue in the centralus region.
I am experiencing the same thing. This needs to be resolved immediately, as it's completely put a stop to our deployments.
I am seeing ERROR: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests as well.
Another one here, since today.
Also experiencing this :(
It seems that caching should be an answer to this issue, but I'm not sure which method I should use in my GitHub workflow. I'm using mcr.microsoft.com both for dotnet test and as the base image for my .NET build, i.e.
docker run --rm -v $(pwd):/app -w /app mcr.microsoft.com/dotnet/sdk:6.0 dotnet test testdir
and
docker build .
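One hedged option for the setup above: fold the test run into a Dockerfile stage so both commands share a single pull of the SDK image per runner. A minimal sketch; the stage names are placeholders:

```dockerfile
# Hedged sketch: run tests inside a build stage so the SDK base image is
# pulled once and its layers are reused by the rest of the build.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app
COPY . .
RUN dotnet test testdir              # same tests as the standalone docker run

FROM build AS publish
RUN dotnet publish -c Release -o /out
```

With this layout, a single docker build . runs the tests and the publish against one local copy of the SDK image, instead of pulling it separately for the docker run step.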
Me too, our CI/CD pipelines are failing randomly due to 429. Please fix ASAP! :)
Same problem over here. Pipelines are frequently failing with this error:
ERROR: failed to solve: mcr.microsoft.com/dotnet/aspnet:8.0: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests
Same issue happening right now.
Using Azure Pipelines, experiencing the same issue:
#3 [internal] load metadata for mcr.microsoft.com/dotnet/aspnet:8.0
#3 ERROR: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests
Using GitHub Actions and also having this issue intermittently. We have about 5 Docker containers being built in one GitHub Action (same image); what is the best approach to caching? (See the sketch after the error output below.)
2 | >>> FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
3 | WORKDIR /app
4 |
--------------------
ERROR: failed to solve: mcr.microsoft.com/dotnet/sdk:8.0: pulling from host mcr.microsoft.com failed with status code [manifests 8.0]: 429 Too Many Requests
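One hedged approach to the caching question above: give each image its own gha cache scope so the five builds don't evict one another's layers. A minimal sketch; the matrix.service variable and paths are placeholders:

```yaml
# Hedged sketch: per-image cache scopes with the GHA cache backend.
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
  with:
    context: ./services/${{ matrix.service }}        # placeholder path
    cache-from: type=gha,scope=${{ matrix.service }}
    cache-to: type=gha,scope=${{ matrix.service }},mode=max
```

Keep the 10 GB total cache limit in mind; if the five images share a large base, a registry-backed cache (type=registry) may hold up better.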
We are in the process of rolling out new hardware that will resolve this issue. We are already seeing decreased throttling in the last few hours. It will take a while to roll this out globally, but you should already be seeing improvements if you are located in the Central US region.
Many thanks for acknowledging, and for the detail on plans to resolve @AndreHamilton-MSFT
Thank you @AndreHamilton-MSFT !!
So we use Azure DevOps for our CI builds. We have seen this issue over the last several weeks, and I am still getting build issues today. At the moment I am having to manually re-run builds and cross my fingers that I don't get the Too Many Requests error; it almost seems like a 50:50 on any given build.
@AndreHamilton-MSFT when you say "should be seeing improvements if you are located in the central us regions", I am guessing that is referring to the location of our build machines; we are using Azure VMs in Central US.
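Until the rollout lands everywhere, an automated retry can stand in for the manual re-runs described above. A minimal sketch in bash; the image tag is just an example:

```bash
# Hedged sketch: retry a throttled pull with a growing backoff instead of
# re-running the entire pipeline by hand.
for attempt in 1 2 3 4 5; do
  docker pull mcr.microsoft.com/dotnet/sdk:8.0 && break
  echo "Pull attempt ${attempt} was throttled; backing off..."
  sleep $((attempt * 30))
done
```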
@stevef51 correct. We are still in the process of rolling out the new hardware, and I suspect your traffic landed on older hardware more prone to throttling. I'm going to see if I can make some further tweaks to reduce the overall likelihood of throttling. Until we roll out globally you may still see some throttling, but we are working to complete the rollout as quickly and safely as possible.
@AndreHamilton-MSFT, we are still facing this issue. May I know the ETA for when the rollout will be completed globally?
@bsripuram we are mostly rolled out. How have things been looking over the last week?
Receiving error 429 Too Many Requests for two hours while pulling mcr.microsoft.com/dotnet/aspnet:6.0-jammy. Date: June 7, 2023, 6:00 PM.
Short error:
Long error: