coderlol opened this issue (closed 10 years ago):
Any plan to use Nexus or Artifactory (Java Maven repository systems) as the registry? Why invent yet another registry model?
The Maven repo model and architecture are quite flexible and powerful.
I'll put my ten cents in here: Maven is powerful, but it's a) not for everyone, b) not universal, and c) in some corners even hated. I think there's a role for integration with Nexus/Artifactory, but I don't see it as a replacement for the registry. YMMV :)
/cc @shykes
Yeah, definitely not a replacement. But a potential integration would make sense I think.
Integration would make much sense -- less infrastructure to deploy. The current Registry model is simply too immature: there is no federation, and tagging the actual Registry location into the repository name is just nasty. It would be better to have a config file with a set of locations where a repository may be found.
But then again, why re-invent the wheel? Artifactory and Nexus are dead easy to deploy. There is nothing to "hate" about them. There is no need to adhere to the "Maven" model as Artifactory and Nexus can host arbitrary blobs.
Allow me to elaborate my answer.
Integrating with Artifactory/Nexus makes it possible for you to use docker with your favorite tool, and not reinvent the wheel. That's a good thing.
Standardizing on it, on the other hand, makes it mandatory for every docker user. Many of them have another favorite tool which they would have to replace with Artifactory; in other words, they would have to reinvent the wheel. That is not a good thing.
For us to bring software into the core we need to be comfortable telling every developer and sysadmin that they should use it. There are few programs in the world that qualify for that, and nexus/artifactory is not one of them.
The same is true for almost all software tools: we live in a fragmented world and not everyone has the same preferences. What you consider "agnostic" your neighbour probably calls "opinionated", which is a code word for "we are not from the same tribe, therefore your code is irrelevant to me". It's silly but it's true. There are very few things that can reasonably be called standard: posix, tcp/ip, tar, ssl. We're still debating whether git and special filesystem attributes are standard enough to be part of the core!
The docker core should build and be useful on everything from a developer laptop to a farm of octo-core servers to a Raspberry Pi. I don't see how any Java artifact repository fits in that picture, no matter how generic it is.
Artifactory (I don't know about Nexus) already implements RubyGems and Yum endpoints, and I don't think it would be too difficult for JFrog to implement the docker registry API as well; then it should work seamlessly with the docker client.
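For a sense of what that would involve, here is a minimal sketch, in Go, of the docker registry v1 "ping" endpoint -- not JFrog's code, just the kind of surface a third-party server would have to expose for the docker client to recognize it as a registry. The version header value below is a placeholder.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The docker client probes this path to detect a standalone registry.
	http.HandleFunc("/v1/_ping", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Docker-Registry-Version", "0.6.0") // placeholder version
		fmt.Fprint(w, "true")                                // client only checks the ping succeeds
	})
	http.ListenAndServe(":5000", nil)
}
```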
I was going to fiddle with Apache to map the Registry's RESTful API, but I can't get past the fact that the host name, including the protocol port (arrgh), is hard-coded in the repository name/tag -- that's just nastier than nasty... ;)
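To make the complaint concrete, here is a rough sketch (hypothetical, not docker's actual parser) of the splitting rule in question: a first path component containing a "." or ":" is treated as the registry host, so the host and port travel around inside the image name itself.

```go
package main

import (
	"fmt"
	"strings"
)

// splitImageName mimics the heuristic: strip a trailing tag, then treat a
// first path component containing "." or ":" as the registry host.
func splitImageName(name string) (registry, repo, tag string) {
	repo = name
	// Split off the tag, if any (the last ":" after the final "/").
	if i := strings.LastIndex(name, ":"); i > strings.LastIndex(name, "/") {
		repo, tag = name[:i], name[i+1:]
	}
	parts := strings.SplitN(repo, "/", 2)
	if len(parts) == 2 && strings.ContainsAny(parts[0], ".:") {
		registry, repo = parts[0], parts[1]
	}
	return registry, repo, tag
}

func main() {
	fmt.Println(splitImageName("myregistry.example.com:5000/ubuntu:12.04"))
	// prints: myregistry.example.com:5000 ubuntu 12.04
}
```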
And it seems like Docker is trying to implement its own authentication/authorization scheme? Why waste the energy on re-inventing standard stuff? Use standard stuff so we can easily enjoy HTTP, HTTPS, client certs, basic auth, SSO, etc...
On Sun, Dec 29, 2013 at 10:41 PM, coderlol notifications@github.com wrote:
I was going to fiddle with Apache to map the Registry's RESTful API, but I can't get past the fact that the host name, including the protocol port (arrgh), is hard-coded in the repository name/tag -- that's just nastier than nasty... ;)
Are you able to articulate why you find it "nasty"? Because I don't agree with you at all.
Here's why we do it: it allows mapping images to URLs. The point of a URL is that it is uniform: the same URL should resolve to the same underlying resource, instead of resolving to different resources depending on where on the network you happen to be. This is a very useful property, and mapping image names to URLs allows us to benefit from it.
A second, less important reason is that it mirrors how Go packages are organized, which in practice has proven to be quite usable and not at all "nasty".
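A small sketch of the property described above: a name that embeds its registry host maps mechanically to a URL, so the same name resolves to the same resource from anywhere. The host below is made up; /v1/repositories/<name>/images is the v1 registry API path.

```go
package main

import "fmt"

// imagesURL builds the v1 endpoint that a pull of a fully-qualified
// image name such as "myregistry.example.com:5000/ubuntu" would hit.
func imagesURL(registry, repo string) string {
	return fmt.Sprintf("https://%s/v1/repositories/%s/images", registry, repo)
}

func main() {
	fmt.Println(imagesURL("myregistry.example.com:5000", "ubuntu"))
	// https://myregistry.example.com:5000/v1/repositories/ubuntu/images
}
```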
Do you have any tangible arguments against this approach?
And it seems like Docker is trying to implement its own authentication/authorization scheme? Why waste the energy on re-inventing standard stuff? Use standard stuff so we can easily enjoy HTTP, HTTPS, client certs, basic auth, SSO, etc...
I'm really not sure what you're referring to. Currently the docker registry uses vanilla HTTP auth, and allows you to drop arbitrary HTTP middleware in front of your private registry. There are discussions about adding support for SSL client certificates and an AWS-compatible HMAC request signature. The most prominent argument in those discussions is how standard these options are and how much code we would be able to re-use.
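As an example of the middleware approach, a minimal sketch of a Go reverse proxy enforcing basic auth in front of a private registry; the addresses and credentials are placeholders, and a real deployment would terminate TLS as well.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The private registry being protected (placeholder address).
	registry, _ := url.Parse("http://localhost:5000")
	proxy := httputil.NewSingleHostReverseProxy(registry)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		if !ok || user != "alice" || pass != "s3cret" { // placeholder credentials
			w.Header().Set("WWW-Authenticate", `Basic realm="registry"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r) // authenticated: forward to the registry
	})
	http.ListenAndServe(":8080", handler) // put TLS in front in production
}
```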
Why should a user have to be bothered with where desired repositories/resources are located? Users just want a resource X. Does it matter if it is on my computer, on a remote system, or on the moon? What if resources need to be relocated? Do users or providers have to go about re-tagging everything?
And hard-coding the port number in the repository name/URL/whatever that is called, well, that's a winner of nasty of nasties ;) Why do we use DNS instead of IPs?
In many ways, the practice of hard-coding URLs and ports provides too low an abstraction level -- the model needs to bring it up a notch.
On Mon, Dec 30, 2013 at 12:49 AM, coderlol notifications@github.com wrote:
Why should a user have to be bothered with where desired repositories/resources are located? Users just want a resource X. Does it matter if it is on my computer, on a remote system, or on the moon? What if resources need to be relocated? Do users or providers have to go about re-tagging everything?
And hard-coding the port number in the repository name/URL/whatever that is called, well, that's a winner of nasty of nasties ;) Why do we use DNS instead of IPs?
This criticism is not specific to docker. You are basically criticizing URLs as a means of identifying resources.
"why should a user have to type www.google.com? Users just want google. Does it matter if hosted is hosted on my computer, a remote system, or the moon? What if the website is relocated? Do users and providers have to about changing all their bookmarks?"
"why do web browsers support specifying optional port numbers in a url? why do we use DNS instead of IP?"
Hopefully you can figure out the answer to these questions for yourself.
In many ways, the practice of hard-coding URLs and ports provides too low an abstraction level -- the model needs to bring it up a notch.
Sure. URLs are not perfect. I will be happy to implement a higher-level naming convention, if you have one to suggest.
Well, how about we put the addresses/URLs of the docker registries in dockercfg and let docker try each registry in turn to pull a given repository? That way, you decouple the name of the repository (data id) from its location (URL/servers/ports) and from access control (login, client certs, etc). (A sketch follows below.)
I think the design of docker's artifact distribution model has simply misunderstood/misapplied the use of "URL".
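A sketch of what the proposal could look like -- this is not an existing docker feature, and the config format, file location, and probe URL are all assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// config models a hypothetical dockercfg carrying an ordered list of
// trusted registries, e.g. ["https://registry.myco.com", "https://index.docker.io"].
type config struct {
	Registries []string `json:"registries"`
}

// pull tries each configured registry in turn until one claims to host
// the repository; real code would then download the layers from it.
func pull(cfg config, repo string) error {
	for _, reg := range cfg.Registries {
		resp, err := http.Get(fmt.Sprintf("%s/v1/repositories/%s/images", reg, repo))
		if err != nil {
			continue // registry unreachable; try the next one
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			continue // repository not hosted here
		}
		fmt.Printf("pulling %s from %s\n", repo, reg)
		return nil
	}
	return fmt.Errorf("%s not found in any configured registry", repo)
}

func main() {
	f, err := os.Open(os.ExpandEnv("$HOME/.dockercfg")) // hypothetical location
	if err != nil {
		panic(err)
	}
	defer f.Close()
	var cfg config
	if err := json.NewDecoder(f).Decode(&cfg); err != nil {
		panic(err)
	}
	fmt.Println(pull(cfg, "ubuntu"))
}
```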
On Mon, Dec 30, 2013 at 12:28, coderlol notifications@github.com wrote:
Well, how about we put the addresses/URLs of the docker registries in dockercfg and let docker try each registry in turn to pull a given repository? That way, you decouple the name of the repository (data id) from its location (URL/servers/ports) and from access control (login, client certs, etc).
That would mean that when I build a container from source, the result of my build would be completely unpredictable because, for example, "FROM ubuntu:12.04" would yield an entirely different image depending on how my docker installation was configured. Since it might have been configured by site administrators and I might not have permission to change or even view that configuration, I may not even be able to verify what exactly is being built. All I know is that someone, somewhere, decided to call it "ubuntu:12.04".
How do you propose we solve that problem?
I think the design of docker's artifact distribution model has simply misunderstood the use of "URL".
It wouldn't be the first time I do something stupid. But so far you're not giving me much substance to find out.
Well, perhaps the following can satisfy the "trust" requirements and still stay flexible without too much implementation complexity.
As a user, I set up a series of trusted registries in, say, dockercfg. In other words, I trust what the registries contain and what the registries say they contain; hence, I add those registries to my dockercfg. Additionally, perhaps checksums or signatures (MD5/SHA, signing) could be added to ensure the integrity of a repository (see the sketch after this proposal).
The above ought to cover the FROM requirements. Then,
When I do a docker pull ubuntu:12.04, Docker checks my dockercfg for candidate registries that contain ubuntu:12.04.
When I want to pull from a specific registry, I could do "docker pull -registry https://docker.io:1234 ubuntu:12.04"; now docker will pull specifically from docker.io:1234. Perhaps a few additional command-line params for credentials could be added.
When it comes to push, docker could push to all registries in dockercfg, and/or just to a registry specified on the command line?
Btw, ubuntu:12.04 may be too broad a name, so a global namespace such as ubuntu:12.04 (what docker does now) would be reserved specifically for repositories maintained by docker.io.
All other parties would need something like myco.com/ubuntu:12.04 or some similar scheme. That way a repository name uniquely identifies a repository globally, while the registries hosting it could be "any" registry.
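The integrity piece of the proposal could be as simple as pinning a digest next to the repository name, so trust comes from the checksum rather than from whichever registry happened to answer. A sketch; the layer bytes and digest here are placeholders:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verifyLayer fails unless the downloaded bytes hash to the digest the
// user pinned for this image, regardless of which registry served it.
func verifyLayer(data []byte, expectedHex string) error {
	sum := sha256.Sum256(data)
	if hex.EncodeToString(sum[:]) != expectedHex {
		return fmt.Errorf("digest mismatch: got %s", hex.EncodeToString(sum[:]))
	}
	return nil
}

func main() {
	layer := []byte("layer bytes from whichever registry answered first")
	pinned := sha256.Sum256(layer) // stand-in for a digest recorded in dockercfg
	fmt.Println(verifyLayer(layer, hex.EncodeToString(pinned[:]))) // <nil>
}
```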
Artifactory JIRA (the Pro version now supports this): https://www.jfrog.com/jira/browse/RTFACT-6494
Nexus JIRA: https://issues.sonatype.org/browse/NEXUS-8242