In the automotive world we often think of containers as two possible things. Either they come with the system, and are updated atomically with it, or they are separately installed. The way we expect this to work is for the system ones to be installed in a separate image store that is part of the ostree image. And then the "regular" containers will just be stored in /var/lib/containers.
The automotive SIG manifests ship a storage.conf that has:

```toml
[storage.options]
additionalimagestores = [ "/usr/share/containers/storage" ]
```
Then we install containers in the image with osbuild like:

```yaml
- type: org.osbuild.skopeo
  inputs:
    images:
      type: org.osbuild.containers
      origin: org.osbuild.source
      mpp-resolve-images:
        images:
          - source: registry.gitlab.com/centos/automotive/sample-images/demo/auto-apps
            tag: latest
            name: localhost/auto-apps
  options:
    destination:
      type: containers-storage
      storage-path: /usr/share/containers/storage
```
This was part of the driver for the need for composefs to be able to contain overlayfs base dirs (overlay nesting), although that is less important if containers/storage also uses composefs.

I love the idea of additional stores for this.
Quadlet supports `.image` files now, which can be directly referenced in `.container` files. Maybe that's a way to achieve a similar effect.

The `.image` files don't yet (easily) allow for pulling into an additional store, but this could be a useful feature.
Cc: @ygalblum
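To illustrate the wiring being suggested, a minimal sketch (the image reference and file names are placeholders, and this assumes a Podman/Quadlet version that supports `.image` units):

```ini
# my-app.image -- a Quadlet image unit that pulls the image
# before any unit that depends on it starts
[Image]
Image=quay.io/my/app:latest
```

```ini
# my-app.container -- referencing the .image file by name makes the
# container unit depend on (and wait for) the pull
[Container]
Image=my-app.image
```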
> Then we install containers in the image with osbuild like:

So IMO this issue is exactly about having `bootc install` and `bootc update` handle these images. Because as is today, needing to duplicate the app images in an osbuild manifest is...unfortunate. With this proposal, when osbuild is making a disk image, it'd use `bootc install` internally in the pipeline, and we wouldn't need to re-specify the child container images out of band of the "source of truth" of the parent image.
I understand that, and I merely pointed out how we currently do it in automotive, not how it would be done with bootc.
Instead, what I propose is essentially:
Dockerfile:

```dockerfile
FROM bootc-base
RUN podman --root /usr/lib/containers/my-app pull quay.io/my/app
ADD my-app.container /etc/containers/systemd
```

my-app.container:

```ini
[Container]
Image=quay.io/my/app
PodmanArgs=--storage-opt=overlay.additionalimagestore=/usr/lib/containers/my-app
```
And then you have an osbuild manifest that just deploys the above image like any normal image.
Of course, instead of open-coding the commands like this, a tool could do the right thing automatically.
You might also want the tool to tweak the image name in the quadlet to contain the actual digest so we know that the exact right image version is used every time.
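For instance, such a tool might rewrite the quadlet to something like the following (the digest is illustrative, not a real one):

```ini
[Container]
# pinned by the hypothetical image-preparation tool
Image=quay.io/my/app@sha256:3f5a...e9d1
```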
It's also interesting to reflect on the composefs efficiency in a setup like this.

If we use composefs for the final ostree image, we will get perfect content sharing, even if each of the individual additional image stores uses its own composefs objects dir, and even if no effort is made to share object files between image store directories, because all the files will eventually be deduplicated as part of the full ostree composefs image.

In fact, we will even deduplicate files between image stores that use the traditional overlayfs or vfs container store formats.

In fact, maybe using the vfs backend is the right approach here? It is a highly stable on-disk format, and it's going to be very efficient to start such a container. And we can ignore all the storage inefficiencies, because they are taken care of by the outer composefs image.
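If one experimented with that, the pull in the Containerfile sketch above could plausibly select the driver explicitly; a sketch reusing the same hypothetical paths:

```dockerfile
# store each layer fully squashed via the vfs driver; the outer
# ostree/composefs image deduplicates the contents anyway
RUN podman --root /usr/lib/containers/my-app --storage-driver vfs \
    pull quay.io/my/app
```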
> my-app.container:
> [Container]
> Image=quay.io/my/app
> PodmanArgs=--storage-opt=overlay.additionalimagestore=/usr/lib/containers/my-app
Just wanted to note that `--storage-opt` is a global argument. So, the key to use is `GlobalArgs` instead of `PodmanArgs`.
I wonder if we should tweak the base images to have a standardized /usr location for additional image store images.
/usr/lib/containers/storage?
@rhatdan Yeah, that sounds good to me. Can we perhaps just always add it to our /usr/share/containers/storage.conf file?
You want that in the default storage.conf in containers/storage?
If you set up an empty additional store you need to precreate the directories and lock files. This is what we are doing to set up an empty additional store. We should fix this in containers/storage to create these files and directories if they do not exist.

```dockerfile
RUN mkdir -p /var/lib/shared/overlay-images \
             /var/lib/shared/overlay-layers \
             /var/lib/shared/vfs-images \
             /var/lib/shared/vfs-layers && \
    touch /var/lib/shared/overlay-images/images.lock && \
    touch /var/lib/shared/overlay-layers/layers.lock && \
    touch /var/lib/shared/vfs-images/images.lock && \
    touch /var/lib/shared/vfs-layers/layers.lock
```
@rhatdan Would it maybe be possible instead to have containers/storage fail gracefully when the directory doesn't exist?
Yes that is the way it should work. If I have time I will look at it. Basically ignore the storage if it is empty.
Actually I just tried it out: as long as the additional image store directory exists, the store seems to work. No need for those additional files and directories.
```console
$ cat /etc/containers/storage.conf
[storage]
driver = "overlay"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"

[storage.options]
pull_options = {enable_partial_images = "true", use_hard_links = "false", ostree_repos = ""}
additionalimagestores = [
  "/usr/lib/containers/storage",
]
```

The additional store directory is empty:

```console
$ ls -l /usr/lib/containers/storage/
total 0
$ podman info
...
```

So podman will write to the empty directory and create the missing content:

```console
# ls -lR /usr/lib/containers/storage/
/usr/lib/containers/storage/:
total 4
drwx------. 2 root root 4096 Nov 24 07:03 overlay-images

/usr/lib/containers/storage/overlay-images:
total 0
-rw-r--r--. 1 root root 0 Nov 24 07:03 images.lock
```

If the file system is read-only it fails:

```console
# podman info
Error: creating lock file directory: mkdir /usr/lib/containers/storage/overlay-images: read-only file system
```
So, I've been thinking about the details around this for a while, in particular about the best storage for these additional image directories. The natural approach would be to use the overlay backend, as we can then use overlay mounts for the actual container, but this has some issues.
First of all, historically, ostree doesn't support whiteout files. This has been recently fixed, although even that fix requires adding custom options to ostree. In addition, if ostree is using composefs, there are some issues with encoding both the whiteouts as well as the overlayfs xattrs in the image. These are solved by the overlay xattr escape support I have added in the most recent kernel, although we don't yet have that backported into the CS9 kernel.
However, I wonder if using overlay directories for the additional image dir is even the right approach? All the files in the additional image dir will anyway be deduplicated by ostree, so maybe it would be better if we used an approach more like the vfs backend, where each layer is completely squashed (and then we rely on the wrapping ostree to de-duplicate these). Such a layer would be faster to set up and use (since it is shallower), and would fix all the issues regarding whiteouts and overlay xattrs.
I see two approaches for this:
Opinions?
So there are two totally different approaches going on here (and the second approach has two sub-approaches):
In this model, `bootc upgrade` and `bootc rollback` will also upgrade/rollback the system images "naturally", the same way as any other files. (There's a lot of discussion above about the interactions with whiteouts/composefs/etc. though)
From the UX point of view, a really key thing is there is one container image - keeping the problem domain of "versioning/mirroring" totally simple.
However...note that this model "squashes" all the layers in the app images into one layer in the base image, so on the network, if e.g. the base image used by an app changes, it will force a re-fetch of the entire app (all its layers), even if some of the app layers didn't change.
I think there's also the converse problem - unless we very carefully ensure that the `podman pull` or equivalent that generates the layer is fully reproducible (e.g. timestamps), any update to the base image will generate a different squashed app layer, which is also quite problematic (forcing new storage in the registry).
In other words, IMO this model breaks some of the advantages of the content-addressed storage in OCI by default. We'd need deltas to mitigate.
(For people using ostree-on-the-network for the host today, this is mitigated because ostree always behaves similarly to zstd:chunked and has static deltas; but I think we want to make this work with OCI)
Longer term though, IMO this approach clashes with the direction I think we need to take for e.g. configmaps - we really will need to get into the business of managing more than just one bootable container image, which leads to:
A common advantage/disadvantage of the below is that the user must manage multiple container images for system installs - e.g. for a disconnected/offline install they must all be mirrored, not just one.
I am sure someone has already invented this, but I think we should support a "rollup" OCI artifact that (much like a manifest list) is just a pointer to a bunch of other container images. A bit like the OCP "release image" except not an executable itself.
Then tools like `skopeo copy` would know how to recurse into it and mirror all the sub-images, and `bootc install` could honor this image. bootc would learn about this too, so `bootc upgrade` would find all the things.
In this model, the app images would only be referenced from the base image as `.image` files.
We would teach `bootc install` (i.e. at disk write time) to support "pre-pulling" container images referenced by /usr/share/containers/systemd/*.image files in the tree (using the credentials embedded in the base image) - but physically the container images live in /var in the final installed filesystem.
(There's an interesting sub-question here of whether we do this by default for `.image` files we find)
Anyways though, here these images are disconnected from the base image lifecycle; `bootc upgrade/rollback` would not affect them. They can be fully uninstalled (though to do so the `.image` file would need to be masked). Updates to them work by fetching from the registry directly.
A corollary to this is that for e.g. disconnected installs, the user must mirror all the application container images too.
This for example is the model used AFAIK by Fedora Workstation when flatpaks are installed - they are embedded in the ISO, but live in /var.
A key aspect of this would be that like "loose binding", the container images would be fetched separately from a registry. For disconnected installs, the admin would need to mirror them all. But we wouldn't lose all the efficiency bits of OCI.
This is what I was getting at originally; the images would still live in /var/lib/containers (I think), but `bootc upgrade` would enforce that the referenced `.image` files in the new root are pre-fetched before the next boot.
Hmm...more generally really I think we may need to drive something into podman where instead of `.image` files effectively expanding into an imperative invocation of `podman pull`, things like `podman image prune` would at least optionally know how to not prune the images. On a bootc system, we'd make sure to wire things up so that podman would avoid pruning images referenced from `.image` files in both the booted root and the rollback.
That said, again once we switch to podman storage for bootc then it may just make more sense to physically locate the images in the bootc container storage and have bootc own all updates/GC.
> I am sure someone has already invented this, but I think we should support a "rollup" OCI artifact that (much like a manifest list) is just a pointer to a bunch of other container images. A bit like the OCP "release image" except not an executable itself.
I saw this go by: https://opencontainers.org/posts/blog/2023-07-07-summary-of-upcoming-changes-in-oci-image-and-distribution-specs-v-1-1/#2-new-manifest-field-for-establishing-relationships Although, it seems like it's almost the inverse of what we want here. I guess in the end, maybe things like "super image" are just a special case of manifest lists.
Some discussion about this on the podman side in https://github.com/containers/podman/issues/22785
One discussion that intersects with parts of this issue happened in https://github.com/containers/podman/discussions/18182#discussioncomment-5925088. In short: we discussed how we can mark images to be un-removable.
Implemented in https://github.com/containers/bootc/pull/659.
@ckyrouac and I had a discussion and came up with a new proposed design. The original comment is edited, but in a nutshell we propose to create /usr/lib/bootc/bound-images.d, which is a set of symlinks to existing `.image` files. We will error out if we detect systemd specifiers in use.
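A sketch of what that could look like in a Containerfile (paths and file names are illustrative):

```dockerfile
ADD my-app.image /usr/share/containers/systemd/
RUN mkdir -p /usr/lib/bootc/bound-images.d && \
    ln -s /usr/share/containers/systemd/my-app.image \
          /usr/lib/bootc/bound-images.d/my-app.image
```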
> which is a set of symlinks to existing `.image` files. We will error out if we detect systemd specifiers in use.
I think binding it to Quadlet will hurt long-term maintenance as it breaks separation of concerns. Having a dedicated file or set of files in a `.d` directory with a simple syntax doesn't run into the technical troubles. There are use cases outside of running containers under systemd which wouldn't be forced to fiddle with Quadlet.

I won't block but still think that a new config file is cleaner and easier to maintain long-term.
> I think binding it to Quadlet will hurt long-term maintenance as it breaks separation of concerns.
Can you edit the comment at the top that has the set of Pros/Cons and clarify your concerns there?
One thing I found compelling about the symlink approach is when I realized this downside with the separate file:
The admin will need to bump a :sha256 digest in two places to update in general (both in a .container or .image and the custom .toml here)
However overall, I think we have explored the space pretty well and just need to make a decision, and since @ckyrouac is doing the work I think it's his call, unless you have more information to add to the Pros/Cons.
> Can you edit the comment at the top that has the set of Pros/Cons and clarify your concerns there?
Thanks, done :heavy_check_mark:
> The admin will need to bump a :sha256 digest in two places to update in general (both in a .container or .image and the custom .toml here)
That's already the case. The proposal doesn't cover .container or .kube files, so admins are forced to move things into .image files.
> That's already the case. The proposal doesn't cover .container or .kube files, so admins are forced to move things into .image files.
This was only lightly touched on, but it does currently cover `.container` files - we would parse those and find their referenced `Image=`, and I expect that to be the default case. Handling `.kube` directly would be an obvious extension to that as well.
One other thing to ponder here is related to https://github.com/containers/bootc/issues/518
Basically if you look at this from a spec/status perspective, we effectively have a clear spec that is readable by external tooling: the "symlink farm". It's not reflected in `bootc status` - should it? I'm not sure.
What we don't directly have is status; while I think we'll end up doing the `podman create` in order to pin, that still allows things like `podman system reset` or just a plain `podman rm -f`. Should bootc also expose a status for this in our status fields? I think so.
Perhaps the status is just a boolean `pinnedImagesValid: true|false` (or maybe we go all the way to a condition).
I also wonder if we may need an explicit verb to re-synchronize in the case of a `podman system reset`? Or maybe just typing `bootc upgrade` again should do that.
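For reference, the `podman create` pinning idea mentioned above is roughly the following (image and container names are illustrative):

```sh
# a created-but-never-started container references the image, so
# `podman image prune -a` will not remove it...
podman create --name bound-my-app quay.io/my/app:latest
# ...but `podman rm -f bound-my-app` or `podman system reset` still would
```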
@vrothberg can we dig in a bit into the high level design here of whether this should use additionalstores or not?
In the current code, it doesn't. I see pros and cons to both approaches.
One way to think about this is I see a continuum between "floating", "logically bound", and "physically bound".

With "physically bound", the images are officially read-only, `podman rm` isn't ever going to work, etc.
However...IMO, for logically bound images I can see people also wanting to do dynamic updates to them apart from a `bootc upgrade`. Yes, you might also queue an update to them on the base bootc image side, but for a number of workloads it could make total sense to do it dynamically outside of the host.
Take e.g. an OpenShift control plane node with etcd. We want etcd always there by default - but it's also totally sane and valid to rev etcd for a hotfix apart from updating the host.
The images being in the default mutable /var/lib/containers storage makes the use case of dynamic updates work pretty seamlessly, I believe, whereas a separate additional store, I think, introduces some confusion/friction there.

The choice of an additional store for logically bound images is pretty consequential though, and so I think it makes sense to try to figure it out now.
Tangential, but if we do choose to use an additional store, I think we should put it under /sysroot/ostree or so, making it visible to the host podman as /usr/share/bootc/storage or something? It's basically where the host bootc storage is, and it's basically what https://github.com/containers/bootc/pull/215 is doing.
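In storage.conf terms that would presumably look something like this (path per the suggestion above):

```toml
[storage.options]
additionalimagestores = [
  "/usr/share/bootc/storage",
]
```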
If you do an update on an image in the primary store, it will use the tag in the primary store.
For example, if I had alpine:latest in the additional store and did a `podman pull alpine` that downloaded a different image into the primary store, then `podman images` and all tools would use the alpine:latest in the primary store.

This could be an issue if later we pulled an image into the bootc image additional store that is newer than the alpine in the primary store.

Bottom line: for now we could just indicate in the `.image` and `.container` files to use an additional store to protect the images. But this would force the quadlets to always use the images in the additional store.

The big advantage of the additional store is we have it now, and do not need to wait for some future podman release.
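The precedence described here can be seen with something like the following, assuming alpine:latest already exists in the additional store:

```sh
# a force-pull lands in the primary store (/var/lib/containers/storage)
podman pull docker.io/library/alpine:latest
# the listing now resolves alpine:latest from the primary store,
# shadowing the copy in the additional store
podman images alpine
```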
@rhatdan It's a bit unclear to me: are you arguing for or against using an additional store by default for logically bound images? (And does the answer depend on "short term" vs "medium term"?)

I am giving point/counterpoint. I don't think we necessarily want bootc to force an additional store, but we might want to take advantage of one in RHEL AI. A lot of this is talking out loud.
But I think we could just use standard stores and tell users "don't do that" if they attempt to do a `podman image prune`; bootc or starting a quadlet would re-pull the image.

As Dan mentioned, if you have image A in an additional store and force-pull a newer one, it will be pulled into the primary store. The primary store will always take precedence over additional stores when looking up local images.
> This could be an issue if later we pulled an image into the bootc image additional store that is newer than the alpine in the primary store.
I think we can always construct a situation where the user may do something they shouldn't do. We cannot protect against that.
I think additional stores are the way to go as they were designed with this use case (read-only images) in mind.
For Quadlets in general I see benefits of using additional stores as it's one more protection from the user accidentally removing an image.
OK. I'm increasingly convinced; however, the basic mechanics of wiring this up are going to be somewhat nontrivial.
:new: https://github.com/containers/bootc/pull/659 landed with very minimal MVP functionality; however I think we should have basic docs and tests next.

Beyond "absolute MVP" functionality here are things like:

- pinning the images (e.g. via `podman create`) or using an additional store per the above discussion

Can you also change the usr/lib/bootc-experimental/bound-images.d directory name to just /usr/lib/bootc/bound-images.d?
> Can you also change the usr/lib/bootc-experimental/bound-images.d directory name to just /usr/lib/bootc/bound-images.d?
IOW you want the image to be not experimental and maintained ~forever?
I want the concept to be managed forever.
If RHEL AI uses it, we need it for the next X years.

If you change the format of the files, I don't care. But starting out by saying something is experimental in the RHEL world should be a non-starter.
We're already shipping things classified as experimental (xref https://github.com/containers/bootc/issues/690 ); I think it's an essential way to get feedback without committing to an interface immediately.
As far as stability, in theory we could allow usage of an experimental interface, but we just need to keep it around as long as the known consumers use it.
That all said, OK...the feature as is today is sufficiently small that perhaps it can just be stable to start for the next release.
I don't care if you document something as experimental, but putting it into the file system makes it difficult to transition when it is no longer experimental. I just want the directory renamed.
> I don't care if you document something as experimental, but putting it into the file system makes it difficult to transition when it is no longer experimental. I just want the directory renamed.
This was changed in https://github.com/containers/bootc/pull/714
OK, the more I play with this the more I am coming to the conclusion it makes sense to put bound images in the "bootc storage". Which...doesn't yet exist, but should. I will write up a separate issue.
EDIT: done in https://github.com/containers/bootc/issues/721
So...an interesting semantic with logically bound images as they exist today (writing to the default shared /var/lib/containers) is that by default, if you're tracking a floating tag (e.g. :latest) for the image, each time `bootc upgrade` is invoked we will only re-fetch the tag when the base image changes.
But...when that does happen, the updated bound image is immediately visible.
Some people will want to invoke e.g. `bootc upgrade` without rebooting just as a way to "pre-fetch" an upgrade. In that scenario, if you have logically bound images and any container referencing them happens to restart, it will suddenly see the new image.
It would hence feel more predictable to me if we made logically bound images default to only appearing in their referenced root. It is more likely that we can implement that on top of https://github.com/containers/bootc/issues/721 but it's still quite nontrivial.
OTOH...as I said in some other place, I can actually see it being quite useful for users to pre-update logically bound images (can I acronym that as LBI? just here? ok yes, LBIs) outside of the default host update lifecycle.
But if we go that path...it seems certainly far cleaner to offer an explicit `bootc image upgrade` or so that explicitly doesn't touch the host, instead of just offering `bootc upgrade` which would do it via side effect?
Or...of course alternatively, `bootc upgrade` could check for all LBI updates (for versioned tags) even if the host image didn't change. That would feel more consistent too.
@ckyrouac opinions on :arrow_up: ?
Maybe we just for now strongly discourage floating tags for LBIs, and document the semantic that they will only update when the host changes.
Something I also am realizing related to this is that `bootc upgrade --check` won't give you any info for LBIs at all; i.e. we lose the ability to know in advance how much data we'll download. The users who choose to use this may not care...at first. But...this may argue for creating a build process that can ensure that the bootc base image's manifest can reference the LBIs at a metadata level.
There's a lot of advantages to that, but it would be Hard to do in a Containerfile flow today without going all the way to something like FROM oci-archive.
> It would hence feel more predictable to me if we made logically bound images default to only appearing in their referenced root. It is more likely that we can implement that on top of https://github.com/containers/bootc/issues/721 but it's still quite nontrivial.
I think this makes the most sense. I haven't had a chance to look closely at your draft PR to use an additional store for bound images, but this is how I expect it to work: e.g. when upgrading a bootc system that has a new bound image, we would pull the bound image into the staged root's storage. This seems to make the most sense if the additional store will be in /usr, which is not supposed to change. Since we'll no longer be using the shared storage, I think we'll need to first check the booted root for the image and copy it to the staged root, or something similar, to avoid re-downloading the image on every upgrade.
That doesn't address the issue of how to handle floating tags though. I'm not really sure how we can make binding to :latest be predictable, since it will depend on when an upgrade happens, or what was on the build system when a disk is created, etc. I think the best we could do is be clear in our docs about how floating tags will behave, make `bootc status` clearly show which version is booted/staged, and maybe add some lint checks to discourage using :latest (although there could be other floating tag names). I need to think about this more.
## Logically bound images

Current documentation: https://containers.github.io/bootc/logically-bound-images.html

Original feature proposal text:
We should support a mechanism where some container images are "lifecycle bound" to the base bootc image.
A common advantage/disadvantage of the below is that the user must manage multiple container images for system installs - e.g. for a disconnected/offline install they must all be mirrored, not just one.
In this model, the app images would only be referenced from the base image as `.image` files or an equivalent. This contrasts with physically bound images.
### bootc logically bound flow

`bootc upgrade` follows a flow like:

### Current design: symlink to `.image` or `.container` files

Introduce /usr/lib/bootc/bound-images.d that is symlinks to `.image` files or `.container` files.

Pros:

- Only need to bump the `:sha256` digest in one place to update

Cons:

- A `.image` file is intended to pull images, not to be parsed by an external tool for a separate purpose.

Note: we expect the `.image` files to reference images by digest or immutable tag. There is no mechanism to pull images out of band.

### Other alternatives considered
#### New custom config file

A new TOML file in /usr/lib/bootc/bound-images.d, of the form e.g. 01-myimages.toml:

Pros:

- A dedicated file with a simple syntax, independent of the `.image` file

Cons:

- Duplicates state that already lives in `.image` files
- Need to bump the `:sha256` digest in two places to update in general (both in a `.container` or `.image` and the custom `.toml` here)

#### Parse existing `.image` files

Pros:

Cons:

- `bootc=bound` or equivalent opt-in

What would happen under the covers here is that bootc would hook into podman and:

- `bootc upgrade`

TODO:

- `bootc install to-filesystem` - simple scenario w/out pull secret?