Closed fclante closed 2 months ago
Does it affect GitHub, though? I'm pretty sure their network is excluded from that.
Anyway, #230 is going to address this.
We're using self-hosted runners at my company, with 350+ developers and approximately 1000+ repositories. Anyway, I've forked the repo as a short-term fix on our side, but the problem remains, and seeing that #230 has been open since April 19, I don't expect it to be implemented anytime soon ;-). I just wanted to let you know.
BTW, Amazon ECR also has rate limits, so we ultimately decided to cache our most-used images in acr.io.
Closing: this is being addressed in #230, as per @webknjaz's comment.
Thanks for the info. So it sounds like you're hitting the limits because you're not on GitHub's network. The action optimizes for the community; the fact that enterprises can use it too is rather a side effect. I'm not sure how to improve this in a sustainable way. Perhaps, in the future, when that PR goes in, we could add the ability to point to a custom registry as a cache, which might even be useful for folks behind firewalls. In general, though, I think enterprises might be better off maintaining forks that fit their limitations.
If Docker has applied rate limiting to your pulls, your builds will fail with errors like: `"message": "You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"`. A quick fix is to use Amazon ECR Public (browse it at https://gallery.ecr.aws/); in most cases you can simply prefix your image refs with `public.ecr.aws/docker/library`.
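For illustration, a minimal sketch of the prefixing described above, assuming short "library" refs like `python:3.12-slim` (the function name and the heuristic of treating any ref without a `/` as a Docker Hub library image are my own assumptions, not part of this repo):

```python
def to_ecr_public(image_ref: str) -> str:
    """Rewrite a Docker Hub official-library image ref to its
    Amazon ECR Public mirror to avoid Docker Hub's pull rate limit."""
    # Short refs like "python:3.12-slim" have no registry or namespace
    # component, so they resolve to Docker Hub's "library" namespace.
    if "/" not in image_ref:
        return f"public.ecr.aws/docker/library/{image_ref}"
    # Refs that already name a registry or namespace are left untouched.
    return image_ref

print(to_ecr_public("python:3.12-slim"))
# public.ecr.aws/docker/library/python:3.12-slim
print(to_ecr_public("ghcr.io/owner/image:tag"))
# ghcr.io/owner/image:tag
```

Note this only covers official library images; images from other Docker Hub namespaces would need a different mirror path or authentication instead.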