openshift / appliance

OpenShift-based Appliance Builder

OCI runtime error after image update #246

Open schmti opened 1 week ago

schmti commented 1 week ago

I have updated the image quay.io/edge-infrastructure/openshift-appliance

from: openshift-appliance:8592b8568eb04c510a8cfbd23e44497abd746d6de07eb5ee0bbed8ae96c0b24c
to: openshift-appliance:b5336135f594db8b746cd0d0d97ba70c6d3466194d0a8531908c4d5fec3c55f2

and am now facing this error:

level=error msg=Failed to pull OpenShift 4.14.27 release images required for bootstrap
level=fatal msg=failed to fetch Appliance disk image: failed to fetch dependency of "Appliance disk image": failed to generate asset "Data ISO": registry start failure: Failed to execute cmd (/usr/bin/podman run --net=host --privileged -d --name registry -v /assets/temp/data/oc-mirror/bootstrap:/var/lib/registry --restart=always -e REGISTRY_HTTP_ADDR=0.0.0.0:5005 docker.io/library/registry:2): Trying to pull docker.io/library/registry:2...
level=fatal msg=Getting image source signatures
level=fatal msg=Copying blob sha256:ef4f267ce8ed8e998f5225b975fa899a685c7b4795eb989fa1b87acaaee4f179
level=fatal msg=Copying blob sha256:619be1103602d98e1963557998c954c892b3872986c27365e9f651f5bc27cab8
level=fatal msg=Copying blob sha256:74e12953df9580cfa53c844f2b8f373f61c107008d16a88f735929b51649bee5
level=fatal msg=Copying blob sha256:862815ae87dc680eb8ecc779dba6e6aad38a782132ce92b571f705abdd7cbfc6
level=fatal msg=Copying blob sha256:6f0ce73649a0f93ee8f5dc39a85a6cbb632700acf4c6e6370858c4957d33ef9d
level=fatal msg=Copying config sha256:d6b2c32a0f145ce7a5fb6040ac1328471af5a781f48b0b94e3039b29e8a07c4b
level=fatal msg=Writing manifest to image destination
level=fatal msg=time="2024-06-15T22:12:15Z" level=warning msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup path /libpod_parent/conmon: write /sys/fs/cgroup/cgroup.subtree_control: device or resource busy"
level=fatal msg=Error: OCI runtime error: crun: the requested cgroup controller pids is not available
level=fatal msg=: exit status 126
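The last two lines are the actual failure: conmon cannot write to /sys/fs/cgroup/cgroup.subtree_control, and crun reports the pids cgroup controller as unavailable. A minimal way to inspect what is actually delegated, assuming a cgroup v2 host (which the cgroup.subtree_control path in the log implies) -- run it on the host and, if possible, inside the appliance container where the nested registry is started:

    # Which cgroup setup podman sees (manager, version, controllers)
    podman info | grep -i cgroup

    # Controllers the kernel exposes at the root of the unified hierarchy
    cat /sys/fs/cgroup/cgroup.controllers

    # Controllers delegated to child cgroups; the crun error suggests
    # "pids" is missing from this list in the failing environment
    cat /sys/fs/cgroup/cgroup.subtree_control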

Which changes are causing this error?

Best regards, Tim

danielerez commented 1 week ago

Hey Tim! Can you please paste the command used for building the disk image? Have you executed it with sudo? Also, to ensure it's not an issue with podman, try to manually run a container from any image and check the result.
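For the manual check, a sketch along these lines should be enough (the hello image is just a convenient test image; the build invocation follows the project README, but the exact flags may differ by version and the assets path is illustrative):

    # If this runs cleanly, basic podman/cgroup setup on the host is fine
    sudo podman run --rm quay.io/podman/hello

    # Build invocation, roughly as documented in the project README
    # (assumes the current directory holds the appliance-config.yaml assets)
    sudo podman run --rm -it --pull newer -v "$PWD":/assets:Z \
      quay.io/edge-infrastructure/openshift-appliance build

If the hello container fails with a similar cgroup error, the problem is in the host's podman/cgroup configuration rather than in the appliance image update.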