Open rhatdan opened 5 months ago
@cgwalters @lmilbaum PTAL
TASK [Install image with bootc-0.1.8 or later] *********************************
Saturday 20 April 2024 09:40:14 -0400 (0:00:00.978) 0:01:16.216 ********
fatal: [guest]: FAILED! => changed=true
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --pid=host
  - -v
  - /:/target
  - -v
  - /dev:/dev
  - -v
  - /var/lib/containers:/var/lib/containers
  - --security-opt
  - label=type:unconfined_t
  - quay.io/redhat_emp1/*****:5hh3
  - bootc
  - install
  - to-existing-root
  delta: '0:00:03.220843'
  end: '2024-04-20 09:40:17.977253'
  msg: non-zero return code
  rc: 1
  start: '2024-04-20 09:40:14.756410'
  stderr: "\e[31mERROR\e[0m Installing to filesystem: Creating ostree deployment: Performing deployment: Creating importer: Failed to invoke skopeo proxy method OpenImage: remote error: reference \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]quay.io/redhat_emp1/*****:5hh3@sha256:7ce2994e76d6427259096dce04981749dac9f537536f919106d55788e1974237\" does not resolve to an image ID: identifier is not an image"
  stderr_lines: <omitted>
  stdout: |-
    Installing image: docker://quay.io/redhat_emp1/*****:5hh3
    Digest: sha256:7ce2994e76d6427259096dce04981749dac9f537536f919106d55788e1974237
    Initializing ostree layout
    Initializing sysroot
    ostree/deploy/default initialized as OSTree stateroot
    Deploying container image
  stdout_lines: <omitted>
Hmmmm, this breaks bootc install. I will look at this.
This is because OSTree on OSTree
(Side note, I think you meant overlay on overlay, right?)
Anyways, what's going on here, I believe, is that the VOLUME here seems to win over the explicit -v /var/lib/containers/storage:/var/lib/containers/storage that we need in order to extract the host container content in bootc install.
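For reference, the shadowing described above would come from a volume declaration along these lines in the image (a sketch; the exact line in the bootc base image may differ):

```dockerfile
# Sketch: a declaration like this causes the runtime to provision a fresh
# anonymous volume at /var/lib/containers when the container starts, which
# is what appears to shadow the explicit -v bind mount of the host storage
# at the same path.
VOLUME /var/lib/containers
```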
IOW there are two conflicting things going on in the install phase: the image declares a VOLUME over /var/lib/containers, while bootc install needs the host's container storage bind mounted at that same path.
So...hmm. We could probably fix this by having the bind mount we generate be e.g. -v /var/lib/containers/storage:/tmp/containers-storage and explicitly referencing that in bootc. But that's going to take a while to churn through the ecosystem, and unfortunately we have a lot of copied/hardcoded references to the podman run invocation, so longer term we'd really need to get into the "stop requiring explicit mounts and fetch things from the host mountns internally".
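To make the remapping concrete, here's a sketch of what the generated invocation could look like (the /tmp/containers-storage target path and the helper name are assumptions from the comment above, not an existing bootc flag or API):

```shell
# Sketch: build the install invocation with the host storage bind mounted
# at a neutral path, so a VOLUME on /var/lib/containers in the image can
# no longer shadow it. bootc would then need to be taught to read storage
# from /tmp/containers-storage instead of the default path.
build_install_cmd() {
  image="$1"
  printf '%s ' podman run --rm --privileged --pid=host \
    -v /:/target \
    -v /dev:/dev \
    -v /var/lib/containers/storage:/tmp/containers-storage \
    --security-opt label=type:unconfined_t \
    "$image" bootc install to-existing-root
  printf '\n'
}
```

This only prints the command; actually running it requires root and podman on the host.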
But, just thinking about this...couldn't we also fix this with a systemd unit in the container image that e.g. mounts a tmpfs on /var/lib/containers, right? (The downside of that is that it will be limited to RAM capacity, as opposed to spilling to the disk directly, unless swap is in use.)
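As a sketch of that idea (nothing like this ships today; the unit below is an assumption), a var-lib-containers.mount that only activates when the image runs as a container could look like:

```ini
# var-lib-containers.mount (sketch)
[Unit]
Description=tmpfs-backed container storage when running as a container
ConditionVirtualization=container

[Mount]
What=tmpfs
Where=/var/lib/containers
Type=tmpfs
Options=mode=0755

[Install]
WantedBy=local-fs.target
```

ConditionVirtualization=container makes the unit a no-op on bare metal or in a VM, so the deployed host keeps its normal on-disk storage.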
So another fix is probably instead having VOLUME /var/lib/containers/temp-storage and having a systemd unit explicitly bind mount that as /var/lib/containers/storage if we detect we're running as a container?
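A sketch of that variant (again, the unit is an assumption): keep VOLUME /var/lib/containers/temp-storage in the image, and ship a var-lib-containers-storage.mount that bind mounts it into place only inside a container:

```ini
# var-lib-containers-storage.mount (sketch)
[Unit]
Description=Bind temp-storage over container storage when containerized
ConditionVirtualization=container

[Mount]
What=/var/lib/containers/temp-storage
Where=/var/lib/containers/storage
Type=none
Options=bind

[Install]
WantedBy=local-fs.target
```

Unlike the tmpfs approach, this keeps the scratch storage backed by the anonymous volume on disk rather than RAM.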
BTW, can you link me to your canonical test case here? We need to document this. I came across https://github.com/containers/podman/issues/5188; there's --volume /tmp/podman:/var/lib/containers/storage there, most notably.
I think it would likely be very helpful here longer term if podman detected if it was being run inside an existing container and did some automatic tweaks.
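A minimal sketch of such detection (the function name is hypothetical; podman writes /run/.containerenv inside containers it starts, and Docker writes /.dockerenv):

```shell
# Sketch: detect whether we are running inside an existing container,
# which podman could use to pick different storage defaults.
in_container() {
  # The path is parameterized here only so the check is testable;
  # the real default would be /run/.containerenv.
  [ -f "${1:-/run/.containerenv}" ] || [ -f /.dockerenv ]
}
```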
So the issue is you are running centos-bootc and doing a bootc install; do the contents of /var/lib/containers/storage not get installed? Usually this would be empty?
bootc install works by having the container fetch itself from the host container storage. (We don't just copy the / from the running container root because we want to preserve the layer structure for future incremental updates.)
When running a bootc image as an OCI container, embedded containers will fail. This is because OSTree on OSTree is not allowed. Defaulting /var/lib/containers/storage to a VOLUME fixes the problem.