justfly1111 opened this issue 2 years ago (Open)
Can confirm issues as well.
Oh, don't use the update script. I gotta update that documentation. Use the zip files from the actions.
How... ?
Same question: how?
I tried the zip from the actions and it does the same thing. It also isn't persistent, which is lucky, because it didn't work: I got the same error after extracting it to /.
Trying to update it via the zip from actions seriously fucked up my whole system: it accidentally deleted unifi-os in the process, and I only got things running again 15 hours later after a factory recovery reset. @boostchicken can you please give us exact instructions on how to use the zips from the actions, so nobody else hits the problem I did and so I can update correctly? I now need to reinstall all my scripts, and I had a good number going :(
I'm getting the same issue
Same issue here:

```
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus.effective: no such file or directory
Error: OCI runtime error: unable to start container "65ee4bb475fb5c0313b7c0b5b80bbe8c1055c59f5a9c9980bae932515fff8aec": container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: bpf_prog_query(BPF_CGROUP_DEVICE) failed: function not implemented
```
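As an aside, `bpf_prog_query(BPF_CGROUP_DEVICE) failed: function not implemented` suggests the runtime is trying to use the cgroup-v2 BPF device controller on a kernel that doesn't implement it. A quick way to check what the UDM kernel actually ships (a diagnostic sketch; /proc/config.gz only exists if the kernel was built with CONFIG_IKCONFIG_PROC):

```bash
uname -r
# CONFIG_CGROUP_BPF is what BPF_CGROUP_DEVICE support hangs off.
zcat /proc/config.gz 2>/dev/null | grep -E 'CONFIG_(CGROUP_BPF|BPF_SYSCALL)='
```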
How do I go back to default podman?
Just realised you actually have to remove the boot script; making it non-executable doesn't work.
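For anyone else looking for it, that removal is just a one-liner (the path below is a guess based on the boot-script setups mentioned in this thread; adjust it to wherever your on_boot.d directory actually lives):

```bash
# chmod -x alone is not enough; the script has to be deleted outright.
# Hypothetical path -- on some setups this is /mnt/data_ext/on_boot.d instead.
rm /mnt/data/on_boot.d/01-podman-update.sh
```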
I am back to original podman for now.
Yeah @jonwilliams84, did you try using the zip files from actions too? It didn't work for me; hopefully it gets fixed and updated soon.
Alright, for anyone stuck with this one who wants to revert back to something functional:

```bash
cd /usr/libexec/podman
# Delete the symbolic link
rm conmon
# Restore the backup file
mv conmon.old conmon

cd /usr/bin
# Delete the symbolic link
rm podman
# Restore the backup file
mv podman.old podman

# Delete the symbolic link
rm runc
# Restore the backup file
mv runc.old runc

# Reverse the storage driver change
sed -i 's/driver = "overlay"/driver = ""/' /etc/containers/storage.conf

# Test it out
podman ps
```
Thanks to https://github.com/boostchicken/udm-utilities/issues/197#issuecomment-870964308 for sharing how to reverse the update script!
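If you want to sanity-check that the revert took, a quick look at the restored files should confirm it (just a sketch; the .old backups only exist if the update script created them in the first place):

```bash
# After the revert these should be regular binaries again, not symlinks.
ls -l /usr/libexec/podman/conmon /usr/bin/podman /usr/bin/runc
# And the storage driver line should be back to driver = ""
grep 'driver =' /etc/containers/storage.conf
podman version
```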
same issue as: https://github.com/boostchicken-dev/udm-utilities/issues/233
That issue is closed and marked "solved", which it clearly isn't.
This probably has to do with cgroup v1 vs. cgroup v2. I guess the new podman relies on v2, while v1 is what the UDM uses.
Looks like the UDM has a hybrid hierarchy. If it were just v1, podman would default to using v1, but it seems to be mounted in a really odd way, so podman thinks it's v2 even though it isn't (even the old version of podman shows CgroupVersion: v2).
I tried modifying kexec to add systemd.unified_cgroup_hierarchy=1 to the boot options, but that doesn't seem to play ball either.
Unfortunately, it doesn't look like you can force podman to use v1...
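For anyone who wants to see the hybrid layout for themselves, a quick diagnostic sketch (read-only, nothing here changes state):

```bash
# cgroup2fs means a pure v2 mount; tmpfs full of per-controller
# directories (devices, systemd, ...) means v1 or a hybrid hierarchy.
stat -fc %T /sys/fs/cgroup/
ls /sys/fs/cgroup/
# What podman itself thinks it is running against:
podman info 2>/dev/null | grep -i cgroup
```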
Issue re-appeared after updating to UniFi OS Version 1.11.4.
When trying to access the unifi-os shell I get:
```
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus.effective: no such file or directory
Error: OCI runtime error: panic: expected "name=systemd" path to be unified path "/sys/fs/cgroup/devices/libpod_parent/libpod-3423ae04e99acf5a25ee3b3ba8d20bfdc5d852c67d6a47c638e9dc44aa8eb77b", got "/sys/fs/cgroup/systemd/libpod_parent/libpod-3423ae04e99acf5a25ee3b3ba8d20bfdc5d852c67d6a47c638e9dc44aa8eb77b"

goroutine 1 [running]:
github.com/opencontainers/runc/libcontainer.getUnifiedPath(0x40002cd050, 0x5592f35400, 0x0)
    github.com/opencontainers/runc/libcontainer/factory_linux.go:59 +0x2cc
github.com/opencontainers/runc/libcontainer.cgroupfs2.func1(0x40002617a0, 0x40002cd050, 0x4a, 0x7fd1730edf)
    github.com/opencontainers/runc/libcontainer/factory_linux.go:111 +0x30
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).Load(0x400028c000, 0x7fd1730edf, 0x40, 0x0, 0x0, 0x1, 0x8)
    github.com/opencontainers/runc/libcontainer/factory_linux.go:321 +0x14c
main.getContainer(0x4000286160, 0x5592f45e78, 0x8, 0x1, 0x40002533d4)
    github.com/opencontainers/runc/utils_linux.go:89 +0x9c
main.execProcess(0x4000286160, 0x0, 0x0, 0x0)
    github.com/opencontainers/runc/exec.go:114 +0x30
main.glob..func5(0x4000286160, 0x55933693c0, 0x40002534f8)
    github.com/opencontainers/runc/exec.go:104 +0x6c
github.com/urfave/cli.HandleAction(0x5593036d60, 0x55930e3bc8, 0x4000286160, 0x4000286160, 0x0)
    github.com/urfave/cli@v1.22.1/app.go:523 +0x124
github.com/urfave/cli.Command.Run(0x5592f41f3e, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x5592f59b5a, 0x28, 0x0, ...)
    github.com/urfave/cli@v1.22.1/command.go:174 +0x408
github.com/urfave/cli.(*App).Run(0x400023c000, 0x40000200e0, 0xe, 0xe, 0x0, 0x0)
    github.com/urfave/cli@v1.22.1/app.go:276 +0x588
main.main()
    github.com/opencontainers/runc/main.go:163 +0xa78
```
@jperquin What FW were you running before and was the podman update working correctly on that?
Had been running UniFi OS 1.11.0 for a while without issue. Then two updates followed in rapid succession (1.11.3 and 1.11.4). Not sure if the problem appeared on .3 or .4.
@jperquin I'm not at home to test the latest FW w/ podman. But I would try grabbing the latest artifact from the Podman update build and reinstalling it. I'm thinking the Ubi upgrade overwrote some files.
Thanks @gatesry. I'm not sufficiently versed to convert your advice into specific commands (once I've SSH-ed into my UDM-P); any help is welcome.
Look at this repo's Actions tab; it includes the build pipelines. See here for the UDM-Pro podman update build: https://github.com/boostchicken-dev/udm-utilities/actions/workflows/podman-udmp.yml
It looks like they are working on fixing the script (notice the red X's). But the last working run (green) has the .zip artifact that you can use to upgrade the files on your box.
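Given what extracting a zip straight to / did earlier in this thread, it's probably safest to unpack the artifact somewhere scratch first and look at what it would overwrite (a cautious sketch; the artifact filename below is a placeholder, use whatever the Actions run actually gives you):

```bash
# Do NOT extract straight to / without looking first.
mkdir -p /tmp/podman-update
unzip podman-udmp.zip -d /tmp/podman-update   # placeholder filename
# List exactly which paths the zip would overwrite.
find /tmp/podman-update -type f
```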
Any update on this? I'm still having the same issue, especially when using https://github.com/boostchicken-dev/udm-utilities/actions/workflows/podman-udmp.yml
I get the following error message:
```
# ./udm-le.sh initial
Attempting initial certificate generation
Error: OCI runtime error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: bpf_prog_query(BPF_CGROUP_DEVICE) failed: function not implemented
```
Any update?
@boostchicken Just to verify: are you saying DO NOT use podman-update/01-podman-update.sh, ever?
Maybe you are able to update the documentation?
Describe the bug: After running update-podman.sh, I can no longer run unifi-os shell or use podman to restart or start unifi-os.
To Reproduce: Run 01-podman-update.sh on the most recent beta firmware for the UDM Pro.
Expected behavior: The update should complete and still allow access to the unifi-os pod.
Instead it gives a Go error. I thought I had copied it, but I had just restarted my UDMP without the update script; I'll run it again to capture the output if you're unaware of this issue.
UDM Information
Additional context: Here's the error output:

```
[UDM] root@udmp.justfly.live:/mnt/data_ext/on_boot.d# unifi-os shell
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus.effective: no such file or directory
Error: OCI runtime error: panic: expected "name=systemd" path to be unified path "/sys/fs/cgroup/devices/libpod_parent/libpod-37b8a3809de3d2f6e3d53c5bc4474bada175ae5b35d892121f759102ec0e0ffa", got "/sys/fs/cgroup/systemd/libpod_parent/libpod-37b8a3809de3d2f6e3d53c5bc4474bada175ae5b35d892121f759102ec0e0ffa"

goroutine 1 [running]:
github.com/opencontainers/runc/libcontainer.getUnifiedPath(0x400023d260, 0x55831f6400, 0x0)
    github.com/opencontainers/runc/libcontainer/factory_linux.go:59 +0x2cc
github.com/opencontainers/runc/libcontainer.cgroupfs2.func1(0x40001f7f10, 0x400023d260, 0x4a, 0x7fdef50edf)
    github.com/opencontainers/runc/libcontainer/factory_linux.go:111 +0x30
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).Load(0x40001ca120, 0x7fdef50edf, 0x40, 0x0, 0x0, 0x1, 0x8)
    github.com/opencontainers/runc/libcontainer/factory_linux.go:321 +0x14c
main.getContainer(0x40000de6e0, 0x5583206e78, 0x8, 0x1, 0x40001e9338)
    github.com/opencontainers/runc/utils_linux.go:89 +0x9c
main.execProcess(0x40000de6e0, 0x0, 0x0, 0x0)
    github.com/opencontainers/runc/exec.go:114 +0x30
main.glob..func5(0x40000de6e0, 0x558362a3c0, 0x40001e94f8)
    github.com/opencontainers/runc/exec.go:104 +0x6c
github.com/urfave/cli.HandleAction(0x55832f7d60, 0x55833a4bc8, 0x40000de6e0, 0x40000de6e0, 0x0)
    github.com/urfave/cli@v1.22.1/app.go:523 +0x124
github.com/urfave/cli.Command.Run(0x5583202f3e, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x558321ab5a, 0x28, 0x0, ...)
    github.com/urfave/cli@v1.22.1/command.go:174 +0x408
github.com/urfave/cli.(*App).Run(0x40000fc700, 0x40000c2000, 0xe, 0xe, 0x0, 0x0)
    github.com/urfave/cli@v1.22.1/app.go:276 +0x588
main.main()
    github.com/opencontainers/runc/main.go:163 +0xa78
[UDM] root@udmp.justfly.live:/mnt/data_ext/on_boot.d#
```
Any ideas on how to fix this? I'm removing it for the time being until someone has an idea what the cause is.