nitrocode opened this issue 3 years ago
That's a really good call, we'll try and prioritise this as something to add. We've been thinking about ways to help customers with the new docker limits.
@nitrocode I'm curious what values would be a good fit for this new parameter?
I'm not aware of an AWS-hosted mirror of Docker Hub, but I guess you could still use the GCP one (mirror.gcr.io) from AWS? Or maybe you're keen on self-hosting a pull-through cache of Docker Hub?
These values could be used:

```json
{
  "registry-mirrors": [
    "https://mirror.gcr.io",
    "https://gallery.ecr.aws/"
  ]
}
```
We were weighing the benefits of using GCP or AWS mirrors versus self-hosting.
I can certainly see situations where using mirror.gcr.io or a self-hosted pull-through cache would be useful. I'm not sure that gallery.ecr.aws (or public.ecr.aws) would make sense in this option though; they don't claim to be a mirror, and I think the images on public.ecr.aws could in theory differ from the ones on Docker Hub.
So far it seems like very few of our users are having issues with the new Docker Hub rate limits, and the docker registry ecosystem is in such a state of flux that I think at this stage we're keen to wait a bit and see how the dust settles.
My bet is that with the launch of public.ecr.aws, it'll become much more common for everyone to use fully qualified images (public.ecr.aws/datadog/agent:7 or gcr.io/heptio-images/contour:v0.12.1 instead of datadog/agent:7), and Docker Hub will become much less important.

If you need to use a registry mirror in the meantime, we'd recommend using a bootstrap script to modify daemon.json.
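The bootstrap-script approach could be sketched roughly like this. It's a minimal sketch only: it assumes `jq` is available on the instance, `add_registry_mirror` is a hypothetical helper name (not part of the stack), and the mirror URL is just an example.

```shell
#!/usr/bin/env bash
set -euo pipefail

# add_registry_mirror FILE MIRROR_URL
# Merges MIRROR_URL into the "registry-mirrors" array of FILE,
# creating the file with an empty object first if it doesn't exist.
# Requires jq (assumed to be installed on the instance).
add_registry_mirror() {
  local file="$1" mirror="$2" tmp
  [ -s "$file" ] || echo '{}' > "$file"
  tmp=$(mktemp)
  jq --arg m "$mirror" \
    '."registry-mirrors" = ((."registry-mirrors" // []) + [$m] | unique)' \
    "$file" > "$tmp"
  mv "$tmp" "$file"
}

# On the instance you would then run something like:
#   add_registry_mirror /etc/docker/daemon.json "https://mirror.gcr.io"
#   systemctl restart docker
```

Merging with `jq` rather than overwriting the file keeps any other daemon.json settings the stack has already written.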
> So far it seems like very few of our users are having issues with the new Docker Hub rate limits, and the docker registry ecosystem is in such a state of flux that I think at this stage we're keen to wait a bit and see how the dust settles.
I'd like to add a vote for this feature. We pull a huge number of images from Docker Hub daily, and as a mitigation for the new caps we recently started running an in-house Docker pull-through cache. It works great, especially since our private images are all stored on Docker Hub too. (Although ECR is something we're considering for the medium term.)
We're currently using our own customised AMIs for Buildkite builders, but I'm keen to investigate switching over to the vanilla Elastic AWS stack you provide – so it'd be nice to have a first-class way to specify options in /etc/docker/daemon.json as @nitrocode is suggesting 👍
Hi @toothbrush, do you think this is something you'd be able to set using the BootstrapScriptUrl to customise the docker config?
Yep! That would solve this particular case. And to be fair, we'd likely still build our own AMI, but based on Buildkite's instead of from scratch. I just wanted to mention there's more than zero of us out here wanting the possibility to mess with /etc/docker/daemon.json specifically :).
It would be nice to set the registry-mirrors key in /etc/docker/daemon.json using a CloudFormation parameter, so that a custom pull-through cache mirror can be used to avoid the new anonymous Docker Hub pull rate limits.

https://cloud.google.com/container-registry/docs/pulling-cached-images