Closed: emakarov closed this issue 3 years ago
I'm observing this issue as well with Bitbucket Pipelines:
Error response from daemon: login attempt to https://registry-1.docker.io/v2/ failed with status: 429 Too Many Requests
error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too Many Requests (HAP429).\n"
Error response from daemon: login attempt to https://registry-1.docker.io/v2/ failed with status: 401 Unauthorized
After the pipeline is rerun, it mostly works.
I observed the same issue in a VM; the solution was to run docker login, and the problem was solved.
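For anyone hitting this in CI, a minimal non-interactive login sketch (the DOCKER_USER / DOCKER_PASS variable names are placeholders for however your CI injects secrets):
# authenticate before pulling/pushing so requests count against your account instead of the shared runner IP
echo "$DOCKER_PASS" | docker login --username "$DOCKER_USER" --password-stdin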
I am having the same problem with Docker Hub/Bitbucket. I already use docker login, and I still get 429s and 401s occasionally. Any ideas?
I get this as well. CI/CD had been working fine until today, when the automated Docker builds suddenly started failing at different steps. It fails in different ways as well:
1)
docker login --username ##### --password-stdin
aws ssm get-parameter --name ##### --region eu-west-1 --with-decryption --query Parameter.Value --output text
Error response from daemon: login attempt to https://registry-1.docker.io/v2/ failed with status: 401 Unauthorized
2)
...
b974a41748f2: Preparing
77cae8ab23bf: Preparing
934e146dd947: Waiting
91f3f39f53e6: Waiting
db99feaf0656: Waiting
2c174a73edf0: Waiting
b974a41748f2: Waiting
eb95ad9291f6: Waiting
77cae8ab23bf: Waiting
f98028956066: Waiting
error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too Many Requests (HAP429).\n"
3)
...
77cae8ab23bf: Waiting
db99feaf0656: Waiting
2c174a73edf0: Waiting
b974a41748f2: Waiting
eb95ad9291f6: Waiting
f98028956066: Waiting
unauthorized: authentication required
4)
934e146dd947: Pushed
2c174a73edf0: Pushed
77cae8ab23bf: Pushed
db99feaf0656: Pushed
error parsing HTTP 429 response body: unexpected end of JSON input: ""
We do use docker login before build/push and it succeeds (except in case 1, where the error message actually comes from docker login itself). The exact same script runs each time, and it sometimes succeeds as well.
Our scripts are invoked from Bitbucket Pipelines.
The more people use Bitbucket Pipelines, the more Docker Hub throttles the requests. Everything from docker login to docker push is failing more frequently now. About 4 out of 10 pipelines are affected, and the failure rate feels like it's increasing.
Same issue here - causing us some pain for sure.
+1, we are seeing this too, but not from Bitbucket Pipelines; it's from our own CI (Concourse in this case). It seems like everything is being rate limited?
We are in touch with Bitbucket to help them fix the issue.
@cowsrule thanks for the update. Is there any ETA for a resolution of this issue?
We have the same issue with GitLab
We are having the same issue with GitLab Ultimate (on-prem).
@paulcwarren @lordgnu Could you post your IPs? I doubt we would be rate limiting custom CI environments unless they are making too many requests at the same time. If the IP is sensitive, please email it to support@docker.com and refer to this thread.
It would really help if someone who is constantly getting rate limited could run the following script and post the output:
#!/bin/sh
# Call it as: ./script.sh library/golang blobs/sha256:f332e1ccc3d895f13e9660c053e8bda16c4894c5179526cf7702ad014cd5fa88
repo=$1
url=$2
shift 2   # any remaining arguments are passed straight through to curl
# Fetch an anonymous pull token for the repository, then request the manifest/blob with it
token=$(curl -s "https://auth.docker.io/token?service=registry-1-stage.docker.io&scope=repository:$repo:pull" | jq -r .token)
curl "https://registry-1.docker.io/v2/$repo/$url" -v -H "Authorization: Bearer $token" -L -H "Accept: application/vnd.docker.distribution.manifest.v2+json" "$@" > /dev/null
The issue has stopped on our servers; pulling from Docker Hub looks back to normal. No configuration change on our end.
We have temporarily increased the rate limit to unblock Bitbucket customers while they sort the issue out on their end.
@manishtomar I don't have access to the CI VMs (it's hosted), but I could run it on one of the failed build containers, if that would be helpful? That said, after the rate limit increase, all is green, so I guess there is no point running it right at this second.
Thanks @paulcwarren. I was concerned that non-Bitbucket users were getting incorrectly rate limited, but since you are able to use the service after we increased our rate limit, it looks like that is not the case.
The issue with Bitbucket is still there :( @manishtomar, any chance you can increase the rate limits for Bitbucket users?
Yeah, we're still experiencing rate limiting issues in our bitbucket pipelines
We are in touch with bitbucket to work this through. Will give an update once things are sorted out.
@manishtomar - any update of note for the last few days?
EDIT: Hopefully someone who can address the problem has realized that this is just resulting in builds getting retried, increasing the amount of traffic they were wanting to limit.
EDIT 2: This isn't just CI builds; I had to hit the retry step button for something like 4 hours to get a release build to upload.
We are still facing the rate limit issue on Bitbucket Pipelines; we can't push images to Docker Hub.
Also seeing this issue with Bitbucket Pipelines.
This is really frustrating. Random CI failures... due to Docker Hub.
Hey @manishtomar, is there a plan to address folks whose requests don't originate from a hosted CI like Bitbucket? Lots of Concourse users are experiencing this problem as well, and they typically host their own environments.
Docker is continuing to work in partnership with Bitbucket to assist Bitbucket in fixing the issue. In the interim we have provided a workaround to Bitbucket to ensure that your CI continues to run as expected. In general if you see rate limiting issues, please reach out to your service provider's support. We are evaluating longer term solutions to this problem.
If you are an individual user that is seeing rate limits applied to you, please consider reducing your usage by running a local pull-through cache. If you are already doing so, please open a new issue in the repo that includes details of your scenario and your IP addresses. Our rate limit is quite high and we do not expect that normal usage of Docker Hub would come close to hitting it.
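For anyone wanting to try the pull-through cache route, a minimal sketch using the official registry image (host and port are placeholders; see the registry's pull-through cache documentation for TLS and authentication):
# run a local registry in proxy (pull-through cache) mode in front of Docker Hub
docker run -d --restart=always --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
# then point each Docker daemon at it via /etc/docker/daemon.json and restart dockerd:
# { "registry-mirrors": ["http://localhost:5000"] }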
Hey @cowsrule, would you be able to share details of the workaround for those of us who are operating our own CI environments?
We had to move the majority of our pipelines to another container registry. The issue was mission-critical, and we are still using Bitbucket.
@xtreme-sameer-vohra can you share your IP address? You may be receiving 429s due to overall request rate limiting (HAP429) or due to failed login attempts. Please check the APIs and responses that you are being rate limited on. The workaround is not general enough to be shared, however we would like to understand and unblock your scenario. My email is in my profile if you cannot share your IP publicly.
@cowsrule Hi Grant, I'm on the same team as @xtreme-sameer-vohra. I just reached out to you over email with our IP. Thanks for reaching out to help.
I am getting the same issue in the Play with Docker environment: error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too Many Requests (HAP429).\n". What rate limit are we talking about?
Could you please be transparent about how many requests from the same IP it takes before the IP is limited or blocked?
I am in a big company, and more and more people use Docker through one IP. Our daily job that mirrors required images/tags with lstags (to save bandwidth) has also been failing constantly since the beginning of December. We reduced the request parallelism in lstags from 16 to 1 and increased the delay between requests from 0 to 4 seconds, but the job has never finished since then.
Concur with @nudgegoonies. @pkennedyr It would be quite helpful to know the rate limiting threshold.
I am in a big company too, and more people use Docker through one IP. Since 15.05.20, every second build has failed due to this issue.
@manishtomar thanks for sharing the info.
Hi everyone - let me add a little more background to help make sense of this.
If you receive the response "Too Many Requests (HAP429)" (and just that), you are hitting our "overall" rate limits that are set very, very high. These limits exist to keep our infrastructure from getting DoS'd and things like that. We don't publish the exact limit because it varies over time depending on things like load, but it's in the neighborhood of thousands of requests per minute. This limit applies to all requests of any type coming into our infrastructure (pull, push, web page, API, etc).
If you're hitting this, we highly recommend looking at all of your requests and figuring out how to reduce them. For example, maybe you have a service scraping a Hub API faster than you intended. If you're behind a NAT and hitting this with normal usage (should be rare), we suggest using multiple NAT gateways/IPs or taking advantage of the built-in proxy tools in Docker: https://docs.docker.com/engine/reference/commandline/pull/#proxy-configuration
If you receive the response "Too Many Requests. Please see https://docs.docker.com/docker-hub/download-rate-limit/", this is a rate limit specifically for pull request downloads. You can see the full details at that link, but the simplest fix is to authenticate your pulls.
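If you are not sure which of the two limits you are hitting, the download-rate-limit page above also describes a way to check your current pull allowance by reading the rate-limit headers on a manifest request; roughly (add your Hub credentials to the token request to see your authenticated limit):
# request an anonymous token for the rate-limit test image, then read the headers from a HEAD request
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
# the ratelimit-limit and ratelimit-remaining headers show the pull limit that applies to you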
@binman-docker I understand your point. In the big company I work for, we are behind a proxy, so all requests to docker.io come from the same IP address. Since we have a large number of build servers, it's not manageable to authenticate all of them against docker.io. What will happen if only some of our build servers are authenticated? I'm afraid that, because of the shared IP address, we would need to authenticate every build server against docker.io to get around the request restriction. Am I right? Is there a possibility of temporarily whitelisting the IP addresses of big companies? FYI: we will implement the long-term solution ourselves, e.g. a Docker registry on Azure that mirrors docker.io. What do you think about this proposal?
@georg-koch I strongly recommend using an internal mirror. We have a big curated list of remote repositories and their images that we sync to our internal mirror. Everyone can make MRs.
We use lstags for mirroring them. The job runs when an image is added and then only mirrors that image, to reduce requests. But the job also has to run daily to mirror all moving tags. Too bad there is no automatic criterion to distinguish stable tags from moving tags; for that reason we have to check all images once a day, otherwise we could only sync moving tags after a stable one had been mirrored. Although lstags only downloads layers that are not yet available in our local registry, we had to add long delays in lstags and remove parallelism completely between all of its requests to Docker Hub, so our daily job runs for a very long time. A lot of these metadata requests are useless for stable tags that really are stable.
But as in your company, there is a lot of traffic from our company IP, and I cannot speak for other divisions/teams, whether they use our conservative lstags approach too or whether their build systems hit Docker Hub on every job. In big companies it is typically hard to find out who is to blame for all the broken builds. At least for our division, we have shifted all of these "429 Too Many Requests" errors to our mirror/sync job, which we sometimes have to retry.
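For context, the net effect of our job is roughly the following, sketched here with plain docker commands instead of lstags (the image list file and the internal registry name are placeholders):
#!/bin/sh
# images.txt holds one repo:tag per line, e.g. library/golang:1.15, maintained via merge requests
while read -r image; do
  docker pull "docker.io/$image"
  docker tag "docker.io/$image" "registry.example.internal/$image"
  docker push "registry.example.internal/$image"
  sleep 4   # space out the requests to stay under the Docker Hub rate limit
done < images.txt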
We've been facing this error since last Friday. We have a private Docker registry on Docker Hub. We run our own CI/CD environments, and our application also pulls Docker images, but our pull requests are nowhere near thousands of requests per minute.
Any help? @cowsrule @binman-docker
@cowsrule @binman-docker - We are also facing a similar issue. We hold a private account on Docker Hub, and recently we have been hitting 429 Too Many Requests errors during image transactions (pull/push) with Docker Hub. It would be great if you could provide a workaround to get rid of this error. It has affected our production and, eventually, our clients. It's quite critical and urgent. Please help us.
I'm getting the same error in AWS CodeBuild now.
@binman-docker
Nothing has (yet) changed since my response a few comments up; please refer to that.
Some changes are coming November 1st; please see these links for additional details:
https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/ https://www.docker.com/pricing/resource-consumption-updates
There's also an email address listed there for those with specific questions or unique situations.
@binman-docker - I tried your suggestion about authenticating first, but I'm still getting the same error. What more can I do to determine whether my requests are being throttled by IP or by my account? I also work for a large software development firm, but I'm on the technical sales side of our business, not one of our actual developers. I just need enough information to go back to my leadership and request a paid account or some other recourse, but the rate-limit TOS doc doesn't provide enough detail for this. Any help is greatly appreciated. Thanks!
Hey @cragginstylie - what's the full error code? That will help us determine which limit you are hitting.
To be able to better handle this error on the client side, it would help if your rate limiter returned a valid HTTP response. Currently you use \n as the line terminator, but according to the HTTP specification, \r\n must be used.
Your response is:
HTTP/1.1 429 Too Many Requests\nCache-Control: no-cache\nConnection: close\nContent-Type: text/plain\nRetry-After: 60\n\nToo Many Requests (HAP429).\n
Your response should be:
HTTP/1.1 429 Too Many Requests\r\nCache-Control: no-cache\r\nConnection: close\r\nContent-Type: text/plain\r\nRetry-After: 60\r\n\r\nToo Many Requests (HAP429).\n
Hey @cragginstylie - what's the full error code? That will help us determine which limit you are hitting.
This is from a cli push to my repo:
error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too Many Requests (HAP429).\n"
Which happens even after I authenticate successfully.
error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too Many Requests (HAP429).\n"
Yeah that's the overall rate limit: https://docs.docker.com/docker-hub/download-rate-limit/#other-limits
The short version is that this limit is on the order of thousands of requests per minute. Generally we find this happens with automation gone awry (suggested solution: find and fix it) or with a lot of users accessing Hub behind a single IP (suggested solution: set up a mirror, use more IPs, etc).
If you find all of this usage is legitimate, we are considering adjusting the overall limit and would love feedback as to where it should be. What percent of your requests fail and how frequently?
@binman-docker the usage is legitimate. I work for a large software manufacturer with both in-house development and sales organizations. We have lab environments for both dev and sales, all of which are behind NAT'd and firewalled network segments. There are several hundred of us sales engineers who perform demos and workshops for our customers. The dev teams also run their side of the business the same way.
I really do appreciate your thoughtfulness around asking my opinion, but I don't think I can adequately advise on what the rate limits should be, as I have no clue as to the "usage" coming from our various gateway IP addresses across my entire corporation.
As far as what percent of requests fail: right now, 100% are failing. Before I experienced the first such failure, it was 100% success. It appears that once rate limiting was introduced, I've not had any successful requests.
The recourse seems to be standing up my own image registry and pushing image builds there instead of to the public repo.
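In case it helps anyone going the same route, a minimal sketch of standing up a self-hosted registry and pushing there instead (host, port, and image name are placeholders; add TLS and authentication before real use):
# run a private registry locally
docker run -d --restart=always --name registry -p 5000:5000 registry:2
# retag a locally built image and push it to the private registry instead of Docker Hub
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest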
When launching a Bitbucket pipeline, we currently receive this error:
error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too Many Requests (HAP429).\n"
and we definitely don't make too many requests. How can we resolve the issue?