everpeace opened this issue 3 years ago
We're currently rolling out our own Dragonfly deployment. Our setup is slightly different, as we're using an internal private registry rather than quay.io or docker.io, so YMMV on how helpful this is. We've been meaning to update some of the docs to make this clearer. Let me know if this is helpful — hopefully it is!
One important thing to note about your setup: Docker will not send credential headers to a registry that it treats as insecure [0], so if you eventually need to pull private images you will either need to set up internal proxies that inject the proper auth headers, or you will need to MITM yourself. I realize it's not great to recommend that someone MITM themselves, but here we are.
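For reference, marking a registry as insecure is done in Docker's daemon config at `/etc/docker/daemon.json` — this is exactly the setting that makes Docker stop sending credentials to that host. The hostname below is a placeholder:

```json
{
  "insecure-registries": ["registry.internal.example.com:5000"]
}
```

The Docker daemon has to be restarted for this to take effect, which is part of why changing it fleet-wide is painful.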
Overall our Dragonfly deployment is pretty straightforward. We have a central Docker registry, multiple caching nginx proxies that sit in front of the registry, and in each datacenter 2 Dragonfly supernodes and n Dragonfly clients.
The supernode configuration is very straightforward, so it's not really worth linking to a config example. The only thing to note is that we bumped up the GC time for cached objects so the supernodes hold on to them longer.
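As a sketch only — the exact property names vary between Dragonfly versions, so treat the keys below as assumptions and check the supernode config reference for your release — the change amounts to raising the task expiry window in the supernode config:

```yaml
# supernode config sketch — key names are hypothetical, verify against your version
base:
  # hold cached objects longer before GC reclaims them (default is much shorter)
  taskExpireTime: 30m
```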
The client configuration is a bit less straightforward [1], and you can see that we chose to MITM ourselves. We chose this because the alternative would have required a massive rolling restart of our entire Docker-based infrastructure — that is, restarting every service running in Docker. In an ideal world we would just pave over the world again, but we're not there... yet.
Looking at my notes, the decided path of least resistance for having clients reach docker.io or quay.io through Dragonfly would be the proxy route: an internal nginx deployment that accepts requests for quay-io-internal.foo.example.com and forwards all requests on to quay.io. You may want to look into this; you could even have your supernodes perform this function. Again, this was our path of least resistance.
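A sketch of what that proxy could look like — using the example hostname above, not the author's actual config, and omitting certificate paths:

```nginx
server {
    listen 443 ssl;
    server_name quay-io-internal.foo.example.com;

    # ssl_certificate / ssl_certificate_key for the internal name go here

    location / {
        # forward everything to the real registry
        proxy_pass https://quay.io;
        proxy_set_header Host quay.io;
        # send the correct SNI when talking to the upstream over TLS
        proxy_ssl_server_name on;
    }
}
```

Clients then pull from the internal name, and only this proxy needs to know about the external registry.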
We also bump up the GC time on the client via the `expiretime` flag to keep objects in the p2p network a bit longer.
Again, I hope this helps.
[0] https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry
[1] https://gist.github.com/poblahblahblah/cd26e7410d7ed4ed6ed989257142fbe7
Hi, thanks for sharing this great project.
I would like to run a Dragonfly cluster as a pull-through cache proxy for private registries, but I'm confused about how to configure it properly.
Question
I set up my experimental environment with docker-compose (https://github.com/everpeace/test-dragonfly-proxy) by reading the documentation, especially the two documents below:
My environment consists of:
- 1 supernode
- 2 dfdaemons (with docker daemons)
- quay.io: image blobs are served from quay.io, and I set quay.io as an insecure registry for dfdaemon's hijacking.

`docker pull` in the dfdaemon nodes looks like it pulls images through Dragonfly successfully. However, it does NOT seem to cache images in Dragonfly: the second `docker pull` doesn't get any faster (see the next section). I'm wondering how to configure this properly? I would be very happy to get the community's help.

How to reproduce
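For context, the environment might be wired together roughly as follows. This is a hand-written sketch, not the actual compose file from the linked repo; the image names and port are assumptions based on the Dragonfly v1 project layout:

```yaml
# Sketch only: 1 supernode + 2 dfdaemon nodes, each dfdaemon paired with a
# docker daemon whose registry traffic it hijacks. See the linked repo for
# the real compose file.
version: "3"
services:
  supernode:
    image: dragonflyoss/supernode
    ports:
      - "8002:8002"   # assumed supernode port
  dfdaemon1:
    image: dragonflyoss/dfclient
    depends_on:
      - supernode
  dfdaemon2:
    image: dragonflyoss/dfclient
    depends_on:
      - supernode
```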
In another terminal, try to pull a sample image in dfdaemon1: you'll see dfdaemon logs like the ones below (you can see that dfdaemon invokes dfget).

It is expected that `docker pull` in dfdaemon2 is faster because Dragonfly has already cached the image. However, it takes almost the same time, and none of the `downloading piece` dfdaemon logs described in https://github.com/dragonflyoss/Dragonfly/tree/master/docs/quick_start#step-5-validate-dragonfly appear.