mrunalp / ocid
Apache License 2.0

Create Pod Sandbox #6

Open mrunalp opened 8 years ago

mrunalp commented 8 years ago

This is a rough flow that I have in mind.

  1. Check if a configured image for the sandbox exists in the local repo, e.g. /var/lib/oci/images/sandboximage
  2. If not, then pull using containers/image library
  3. Use containers/storage to create the rootfs /var/lib/oci/containers/container-id/storage-type/sandboximage. For cases such as the sandbox image, we need not even use the container id, as it could be shared by all pods. The storage API should take parameters to allow such use cases.
  4. Use ocitools generate library to create a template from the parameters specified in Request object and merge config from the image.
  5. Launch runc using the rootfs and config.json
  6. Monitor the sandbox container (there are various sub tasks here that we can drill into later like managing logs and handling cgroups, etc).

This would require ocid to take the name of the sandbox image type as a flag. Another flag will be needed to pick the default storage, along with a way to override it to share a read-only rootfs as described above.
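The six steps above could be sketched roughly as below. All names here (`imageRoot`, `sandboxImagePath`, `ensureSandboxImage`) are hypothetical, not actual ocid APIs, and the pull/launch/monitor stages are stubbed out:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// imageRoot is the local repo from step 1 (path taken from the flow above).
const imageRoot = "/var/lib/oci/images"

// sandboxImagePath returns where step 1 would look for the configured
// sandbox image in the local repo.
func sandboxImagePath(image string) string {
	return filepath.Join(imageRoot, image)
}

// ensureSandboxImage sketches steps 1-2: use the local copy if it exists,
// otherwise a pull via the containers/image library would happen here.
func ensureSandboxImage(image string) (string, error) {
	p := sandboxImagePath(image)
	if _, err := os.Stat(p); err == nil {
		return p, nil // step 1: already in the local repo
	}
	// step 2: pull with containers/image (stubbed out in this sketch)
	return "", fmt.Errorf("image %q not present; would pull via containers/image", image)
}

func main() {
	// Steps 3-6 (create rootfs, generate config, launch runc, monitor)
	// would follow once the image path is resolved.
	if p, err := ensureSandboxImage("sandboximage"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("using", p)
	}
}
```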

mrunalp commented 8 years ago

@rhatdan @runcom @nalind PTAL.

runcom commented 8 years ago

4.1 merge config from image

The rough flow seems fine to me. What is really missing, yet is still somehow part of this flow, is the CAS storage where images are pulled, indexed, cached, etc., and from where libcow kicks in. @nalind does it make sense?

mrunalp commented 8 years ago

@runcom Updated to add your suggestion of merging config. We do need to figure out how much of the image logic will be in the daemon and how much in the library.

runcom commented 8 years ago

re: flag for image type - we would leverage the abstraction made in containers/image, where an image type has a prefix defining the technology/transport (e.g. I want to run a container based on the docker busybox image, so containers/image first downloads it from the Docker registry and then stores it into the image storage).

rhatdan commented 8 years ago

Yes, the question of storage is key here. In storage we want to be able to support the "networked storage case". If I go to run the "foobar" container and I have the "foobar" rootfs available via NFS, I want to use that rather than pull the image to the host. So we need the storage layer to be smart enough to understand the configuration of the image store(s).
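That preference could be sketched as a simple lookup: try the networked rootfs first, fall back to a host-local pull. The mount point `/mnt/nfs/rootfs` and the function names are purely hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// nfsRoot is a hypothetical mount point where pre-provisioned
// read-only rootfs trees would appear over NFS.
const nfsRoot = "/mnt/nfs/rootfs"

// resolveRootfs prefers a rootfs already reachable over NFS; otherwise
// it falls back to pulling the image into host-local storage (stubbed).
func resolveRootfs(name string) (path string, pulled bool) {
	nfsPath := filepath.Join(nfsRoot, name)
	if fi, err := os.Stat(nfsPath); err == nil && fi.IsDir() {
		return nfsPath, false // use the shared rootfs, no pull needed
	}
	return filepath.Join("/var/lib/oci/containers", name), true
}

func main() {
	p, pulled := resolveRootfs("foobar")
	fmt.Println(p, pulled)
}
```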

I believe we have four different components interacting to make this happen.

In the quick design you defined above, I think it would be helpful if we broke down, which component was responsible for each action.

  1. Check if a configured image for the sandbox exists in local repo. For e.g. /var/lib/oci/images/sandboximage (storage)
  2. If not, then pull using containers/image library (image)
  3. Use containers/storage to create rootfs /var/lib/oci/containers/container-id/storage-type/sandboximage. For cases such as sandboximage, we need not even use container id as this could be shared by all pods. The storage API should take parameters to allow such use cases. (storage, management/API)
  4. Use ocitools generate library to create a template from the parameters specified in Request object and merge config from the image. (management/API, ocitools)
  5. Launch runc using the rootfs and config.json (runc)
  6. Monitor the sandbox container (there are various sub tasks here that we can drill into later like managing logs and handling cgroups, etc). (runc)

runcom commented 8 years ago

As part of downloading docker images to be run as OCI runc containers, I came across registry credential handling and opened https://github.com/containers/image/pull/41.

@mrunalp I'm not familiar with k8s at all, I've a question though.

In a normal Pod creation workflow, how does one pass credentials to authenticate against a registry when pulling an image (cli or yaml)? I guess the question is: in our new docker-less scenario, how do we ask the user for, or retrieve, credentials to authenticate against registries, given we don't have access to the ~/.docker/config.json file? Should ocid provide a way of handling credentials like the docker daemon does, one that the kubelet can interact with?

mrunalp commented 8 years ago

@runcom Yes, I think we should define our own config for accessing docker and other registries. Also, I think this code in kubernetes may be relevant https://github.com/kubernetes/kubernetes/blob/f2ddd60eb9e7e9e29f7a105a9a8fa020042e8e52/pkg/credentialprovider

runcom commented 8 years ago

Right, that code's relevant, but it assumes .docker/config.json (or the older format) is present on the host (skopeo does that as well, in a similar manner).

BTW, I believe this is already possible when creating the yaml pod specification (based on this reply http://stackoverflow.com/a/36280670). This way the CRI can receive the AuthConfig (https://github.com/kubernetes/kubernetes/pull/25899/files#diff-b99b84f6471ccf2077dedc93530a51a2R401) populated and containers/image can use that struct in OCID to retrieve username/password to authenticate.

mrunalp commented 8 years ago

@runcom Yep, we should be able to use that.

mrunalp commented 8 years ago

@rhatdan I would imagine that we need some config per node as well as per pod decorations to configure storage. Flags like preferNFS could be in the config and shareReadOnly could be passed down through the API. WDYT?
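The split between per-node config and per-pod API decorations might look like this; the struct and flag names (`PreferNFS`, `ShareReadOnly`) are the hypothetical ones from the comment above, not settled API:

```go
package main

import "fmt"

// NodeStorageConfig holds per-node defaults an admin would set in the
// ocid config file; names are hypothetical.
type NodeStorageConfig struct {
	PreferNFS      bool   // try a networked rootfs before pulling
	DefaultStorage string // e.g. "overlay"
}

// SandboxRequest carries per-pod decorations passed down through the API.
type SandboxRequest struct {
	ShareReadOnly bool // share one read-only rootfs across pods
}

// effectiveStorage combines the node config with a per-pod request:
// request-level flags win, then node preferences, then the default.
func effectiveStorage(n NodeStorageConfig, r SandboxRequest) string {
	if r.ShareReadOnly {
		return "shared-readonly"
	}
	if n.PreferNFS {
		return "nfs"
	}
	return n.DefaultStorage
}

func main() {
	node := NodeStorageConfig{DefaultStorage: "overlay"}
	fmt.Println(effectiveStorage(node, SandboxRequest{ShareReadOnly: true}))
}
```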

rhatdan commented 8 years ago

SGTM

runcom commented 8 years ago
  1. If not, then pull using containers/image library

@mrunalp does containers/image kick in as part of the RuntimeService or the ImageService? What I mean is: doesn't point number 1) already know that? I believe the answer is YES, because the kubelet already pulled the image as part of the ImageService pull operation, so we already have that image available on the node.

at least based on https://github.com/kubernetes/kubernetes/pull/17048/files#diff-822f0e081c10d8b83d7c2ad1391d55f7R85

mrunalp commented 8 years ago

@runcom Yes, we could write a test wrapper that does the image pull before creating the sandbox or starting a container. This could work to simulate kubelet integration till kubelet client changes are done.

runcom commented 8 years ago

I'd love to split points 1) and 2) off into the ImageService, which, as I understand it, is a totally different service that the kubelet queries to work on images. The other points belong to the RuntimeService instead. I'll generate a stub for ImageService to begin with.
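A stub of that split might start from two interfaces like these; the method sets are trimmed, illustrative sketches rather than the real CRI signatures:

```go
package main

import "fmt"

// ImageService owns steps 1-2 of the flow: checking the local repo
// and pulling via containers/image when the image is missing.
type ImageService interface {
	PullImage(ref string) error
	ImageStatus(ref string) (present bool, err error)
}

// RuntimeService owns steps 3-6: rootfs creation, config generation,
// launching runc, and monitoring the sandbox.
type RuntimeService interface {
	CreatePodSandbox(config string) (id string, err error)
}

// imageSvc is a toy in-memory stub so the sketch compiles and runs.
type imageSvc struct{ local map[string]bool }

func (s *imageSvc) PullImage(ref string) error { s.local[ref] = true; return nil }
func (s *imageSvc) ImageStatus(ref string) (bool, error) { return s.local[ref], nil }

func main() {
	var svc ImageService = &imageSvc{local: map[string]bool{}}
	_ = svc.PullImage("docker://busybox")
	ok, _ := svc.ImageStatus("docker://busybox")
	fmt.Println(ok)
}
```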