ecordell closed this issue 3 years ago.
Hello! I am interested in learning and contributing here, but I am new to this platform. Please guide me on where I should start.
@Ananya-1106 Hi, thank you for the comment :tada: Please see here for getting started with contributing.
Hey @ecordell, one doubt here. What does this line mean: "The current driver is built around buildah"?
As per my understanding, is buildah providing storage here, or is it acting as a CSI plugin?
@ecordell, in the pod example above, I understand that it's fine to provide a volume to the container using the CSI driver, but why mount it using the kfox1111/misc:test image?
@ecordell, what are the benefits of mounting the contents of a container image as a volume in a container?
This is really cool! I am interested in this and will be applying.
Interesting. Thanks for working on this issue.
There's another driver being worked on that uses CRI, located here: https://github.com/warm-metal/csi-driver-image
Some discussion around the image-populator driver and the image driver here: https://github.com/warm-metal/csi-driver-image/issues/12
I think we all need to put our heads together for a bit and weigh the options.
I used buildah for the prototype implementation because it was portable, could share the image cache across multiple instances without consuming extra storage, and was really simple to implement.
warm-metal/csi-driver-image used CRI so that the image cache could also be used, but shared with the runtime.
The `cp` variant described here would be portable too, but would not share any data with the image cache or between multiple instances.
I think the ideal solution would be portable, share the image cache with the runtime, and avoid consuming extra storage across instances.
@viveksahu26 the reason I think the image driver is very useful is twofold.
For example, instead of building a container that starts with nginx and adds your static website content to it, requiring a new container build whenever nginx needs updating, you can deploy the nginx container directly, then mount your content at /var/www/nginx/html. You can then update either container without updating the other. This is especially useful when you have something like rpm mirrors, where you may want the host nginx container to support different architectures than the rpms inside.
Another example: nginx serving out rpm repos for different architectures. Image volumes would save needing to build many permutations like:

| host arch | rpm arch |
| --------- | -------- |
| arm64     | arm64    |
| arm64     | x86_64   |
| x86_64    | arm64    |
| x86_64    | x86_64   |

While if you had image volumes, you'd have: nginx for arm64, nginx for x86_64, and an rpm repo image for x86_64 and one for arm64, saving quite a bit of space.
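As a minimal sketch of the first example, assuming a hypothetical CSI image driver registered as `image.csi.k8s.io` and a hypothetical content-only image (neither name is confirmed by this thread):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-with-content
spec:
  containers:
    - name: nginx
      image: nginx:stable                        # runtime image, updated independently
      volumeMounts:
        - name: website-content
          mountPath: /var/www/nginx/html         # content served from a separate image
          readOnly: true
  volumes:
    - name: website-content
      csi:
        driver: image.csi.k8s.io                 # assumed driver name, illustration only
        volumeAttributes:
          image: example.com/site-content:latest # hypothetical content-only image
```

Either image can then be rebuilt and rolled out without touching the other.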
Totally agree with @kfox1111

I operate an internal `cp` solution with a large number of pods and it is not working well:

- `cp` in the image means the image must be executable or you need to inject a statically linked binary. Both have annoying costs.
- `cp` consumes a lot of IOPS, so much so that our nodes became degraded during node rotation as many new pods were executing `cp` simultaneously.
- `cp` adds significant startup time (10-20 seconds for me) to pods compared to establishing a layer/snapshot on an image.
- `cp` wastes the full size of the image for each volume when we either need no changes (read-only) or very few (read-write snapshot).

We are testing https://github.com/warm-metal/csi-driver-image and so far using it for Pod Ephemeral RW Volumes is working well.
I also agree with @kfox1111 and @glennpratt. Avoiding extra data duplication and runtime overhead is a requirement.
> There is no standard location or directory structure for an image on-disk across CRI implementations. This means that once an image is on a node, there's no standard way to get its contents.
Though there is no unified location, we can still find it through the CRI API `ImageService.ImageFsInfo`, like my project bind-host does.
The directory structure varies because of the different container runtimes and their storage drivers. If we want to use those images, we need to know which container runtime is running and how it saves images. The good thing is that there are, and will be, not that many runtimes. I think this requirement is not so common, especially on less popular runtimes, so we need not make working with every runtime a goal. Currently, we already know how to implement such a plugin on both containerd and cri-o.
The opposite, harder way, which may be what @kfox1111 mentioned in warm-metal/csi-driver-image#12, is to define new APIs and help runtimes implement them.
And, csi-driver-image is going to support cri-o.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
The csi-driver-image-populator is a CSI plugin that allows you to mount the contents of a container image as a volume in a container.
Example:
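A minimal sketch of such a pod spec, using the `kfox1111/misc:test` image referenced earlier in this thread (the driver name and `volumeAttributes` key are assumptions for illustration, not confirmed by this issue):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: image-volume-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "1000000"]
      volumeMounts:
        - name: image-volume
          mountPath: /data               # the image's contents appear here
  volumes:
    - name: image-volume
      csi:
        driver: image.csi.k8s.io         # assumed driver name, illustration only
        volumeAttributes:
          image: kfox1111/misc:test      # image whose contents are mounted
```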
The current driver is built around buildah, which uses the `registries.conf` configuration to set up a connection to registries.

This creates a dichotomy between connections from the cluster to external registries when pulling images for pods vs. pulling images for volumes. Pod image pulls are configured via the cluster CRI implementation, node, and Pod config, while volume image pulls are only configured via the configuration that buildah understands. This means that proxy configuration, cert configuration, auth information, and mirror information is not shared unless the CRI implementation also understands `registries.conf` (currently this is only `cri-o`).

The goals of this project in decreasing order of importance:
CRI Endpoint
There is no standard location for the CRI Endpoint - making a CRI-agnostic CSI driver requires providing the endpoint up front:
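As an illustration, a driver DaemonSet has to be told where the socket lives, along these lines (the `--cri-endpoint` flag name and the driver image are hypothetical; the socket paths shown are the conventional ones for containerd and CRI-O):

```yaml
# Fragment of a hypothetical CSI driver DaemonSet pod spec
containers:
  - name: image-csi-driver
    image: example.com/image-csi-driver:latest                 # hypothetical image
    args:
      - --cri-endpoint=unix:///run/containerd/containerd.sock # hypothetical flag
    volumeMounts:
      - name: cri-socket
        mountPath: /run/containerd/containerd.sock
volumes:
  - name: cri-socket
    hostPath:
      path: /run/containerd/containerd.sock  # CRI-O instead: /var/run/crio/crio.sock
      type: Socket
```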
or a change to CRI to allow a well-known location (e.g. `/var/run/cri.sock`).

CRI Storage Location
There is no standard location or directory structure for an image on-disk across CRI implementations. This means that once an image is on a node, there's no standard way to get its contents.
A solution that requires no modifications to CRI (mocked up here) is to pull the image through CRI and then run a container from it that uses `cp` to copy its contents into the target volume.
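A rough sketch of that `cp` variant with no CSI driver at all, just to make the trade-offs concrete (the content image is hypothetical, and it must contain a usable `cp`, which is exactly the first problem @glennpratt describes above):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: cp-variant-example
spec:
  initContainers:
    - name: copy-content
      image: example.com/content-image:latest        # hypothetical data image
      command: ["cp", "-a", "/content/.", "/data/"]  # full copy: costs IOPS, time, space
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: nginx:stable
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      emptyDir: {}   # each pod keeps its own full copy of the image contents
```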
It would be nice to find an alternative that does not require a `cp` of the image contents.
CRI
The CRI API is defined here.
It is worth exploring what changes, if any, would make some of the above goals possible.
OCI Artifacts
Most CRI implementations do not support pulling non-runnable images.
Others
The metadata (image manifest, manifest list, labels, etc.) is all useful information for a consumer of an image, especially for OCI artifacts.
Some ideas: