mrunalp opened 8 years ago
@rhatdan @runcom @nalind PTAL.
4.1 merge config from image
The rough flow seems fine to me. What's really missing, though it's still part of this flow, is the CAS storage where images are pulled, indexed, cached, etc., and where libcow kicks in. @nalind does it make sense?
@runcom Updated to add your suggestion of merging config. We do need to figure out how much of the image logic will be in the daemon and how much in the library.
re: flag for image type - we would leverage the abstraction made in containers/image, where an image reference has a prefix defining the technology/transport (for example, to run a container based on the Docker busybox image, containers/image first downloads it from the Docker registry and then stores it into the image storage)
Yes, the question of storage is key here. In storage we want to be able to support the "networked storage" case. If I go to run the "foobar" container and I have the "foobar" rootfs available via NFS, I want to use that rather than pull the image to the host. So we need the storage layer to be smart enough to understand the configuration of the image store(s).
I believe we have four different components interacting to make this happen.
In the quick design you defined above, I think it would be helpful if we broke down, which component was responsible for each action.
As part of downloading docker images to be run as OCI runc containers, I've come across registry credential handling and opened https://github.com/containers/image/pull/41.
@mrunalp I'm not familiar with k8s at all, but I have a question.
In a normal Pod creation workflow, how does one pass credentials to authenticate against a registry when pulling an image (cli or yaml)?
I guess the question is: in our new docker-less scenario, how do we ask the user for (or retrieve) credentials to authenticate against registries, given we don't have access to the ~/.docker/config.json file? Should ocid provide a way of handling credentials like the docker daemon does, and with which the kubelet can interact?
@runcom Yes, I think we should define our own config for accessing docker and other registries. Also, I think this code in kubernetes may be relevant https://github.com/kubernetes/kubernetes/blob/f2ddd60eb9e7e9e29f7a105a9a8fa020042e8e52/pkg/credentialprovider
Right, that code's relevant, but it assumes .docker/config.json (or the old one) to be present on the host (skopeo does the same in a similar manner).
BTW, I believe this is already possible when creating the yaml pod specification (based on this reply http://stackoverflow.com/a/36280670). This way the CRI can receive a populated AuthConfig (https://github.com/kubernetes/kubernetes/pull/25899/files#diff-b99b84f6471ccf2077dedc93530a51a2R401), and containers/image can use that struct in OCID to retrieve the username/password to authenticate.
@runcom Yep, we should be able to use that.
@rhatdan I would imagine that we need some config per node as well as per-pod decorations to configure storage. Flags like `preferNFS` could be in the config, and `shareReadOnly` could be passed down through the API. WDYT?
SGTM
- If not, then pull using containers/image library
@mrunalp does containers/image kick in as part of the RuntimeService or the ImageService?
What I mean is, doesn't point 1) already know that, and isn't the answer YES, because the kubelet already pulled the image as part of the ImageService pull operation, so we already have that image available on the node? At least based on https://github.com/kubernetes/kubernetes/pull/17048/files#diff-822f0e081c10d8b83d7c2ad1391d55f7R85
@runcom Yes, we could write a test wrapper that does the image pull before creating the sandbox or starting a container. This could work to simulate kubelet integration until the kubelet client changes are done.
I'd love to split the first two points, 1) and 2), into the ImageService, which as I understand it is a totally different service that the kubelet queries to work on images. The other points belong to the RuntimeService instead. I'll generate a stub for ImageService to begin with.
This is a rough flow that I have in mind.
This would require ocid to take the name of the sandbox image type as a flag. A flag will also be needed to pick the default storage, plus a way to override it to share a read-only rootfs as described above.