Closed: X-dark closed this issue 3 years ago.
@X-dark I am using the same configuration, `cgroupManager: systemd` and `cgroupVersion: v2`, but I am unable to reproduce. Could you tell me the values of the following, or make sure they are equivalent?

- `systemd.unified_cgroup_hierarchy=1` (if you are using grub it should be defined in `/etc/default/grub`)
- `cat /sys/fs/cgroup/cgroup.subtree_control`
- What happens when you do `echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control` and `echo +cpuset > /sys/fs/cgroup/cgroup.controllers`?
- Or, if `cpuset` is already there, try `echo +cpu > /sys/fs/cgroup/cgroup.subtree_control` and `echo +cpu > /sys/fs/cgroup/cgroup.controllers`
- What happens if you run with the `root` user? Using `sudo`?
@X-dark afaik the following controllers should be enabled for rootless users to work with v2: `cpu cpuset io memory pids`. But I am not sure; please try adding these to `/sys/fs/cgroup/cgroup.subtree_control` and `/sys/fs/cgroup/cgroup.controllers` (from a privileged or root user): `echo +cpu +cpuset +io +memory +pids > /sys/fs/cgroup/cgroup.subtree_control`. Try this for `user.slice` as well, not only the parent, i.e. `/sys/fs/cgroup/user.slice/cgroup.controllers` and `/sys/fs/cgroup/user.slice/cgroup.subtree_control`.
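The steps above can be sketched as a small helper. This is my own sketch, not from the thread; note that on cgroup v2 `cgroup.controllers` is read-only, so only the writes to `cgroup.subtree_control` actually take effect:

```shell
#!/bin/sh
# enable_controllers DIR -- delegate cpu/cpuset/io/memory/pids to the
# children of the cgroup at DIR by writing to its cgroup.subtree_control.
enable_controllers() {
    echo "+cpu +cpuset +io +memory +pids" > "$1/cgroup.subtree_control"
}

# On a real system this must run as root, top-down (root cgroup first,
# then user.slice), because a controller can only be enabled in a child
# if it is already enabled in the parent:
#   enable_controllers /sys/fs/cgroup
#   enable_controllers /sys/fs/cgroup/user.slice
```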
> @X-dark I am using the same configuration `cgroupManager: systemd` and `cgroupVersion: v2`, but I am unable to reproduce. Could you tell me the values of the following, or make sure they are equivalent?
>
> `systemd.unified_cgroup_hierarchy=1` (if you are using grub it should be defined in `/etc/default/grub`)

I do not have this option in my grub config. But I understood it should be the default, and only needed with `=0` to revert to cgroup v1. Any way to check?
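For reference (not from the thread): one quick way to check which hierarchy is in use is to look at the filesystem type mounted at `/sys/fs/cgroup`:

```shell
# Prints "cgroup2fs" on a pure cgroup v2 (unified) hierarchy; on a
# hybrid/v1 setup /sys/fs/cgroup is instead a tmpfs holding the
# per-controller v1 mounts.
stat -fc %T /sys/fs/cgroup
```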
> `cat /sys/fs/cgroup/cgroup.subtree_control`

```
memory hugetlb pids rdma
```
> What happens when you do `echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control` and `echo +cpuset > /sys/fs/cgroup/cgroup.controllers`? Or, if `cpuset` is already there, try `echo +cpu > /sys/fs/cgroup/cgroup.subtree_control` and `echo +cpu > /sys/fs/cgroup/cgroup.controllers`

Seems to be accepted. After that I have:

```
cpuset cpu memory hugetlb pids rdma
```

The container is still not running, though.
> What happens if you run with the `root` user? Using `sudo`?

All of that was done as root or with sudo. However, I just tried rootless, and with that I seem to have no issue (I did not try before adding cpu/cpuset above, though).
Actually, launching a container as root makes the `cpu` and `cpuset` flags disappear after the error. Rootless works fine without the flags being there first.
@X-dark so adding `cpu cpuset` solves it for you?

No, rootless does. Adding `cpu cpuset` has no effect, and the flags get removed once I get the error. Root containers are still failing.
@X-dark Could you also please paste the output of `cat /proc/self/cgroup` and `cat /proc/cgroups`?
Downstream bug report: https://bugs.archlinux.org/task/71560
I can't reproduce either, so I believe there is a user issue somewhere.
```
$ cat /proc/self/cgroup
0::/user.slice/user-1000.slice/session-5.scope

$ cat /proc/cgroups
#subsys_name    hierarchy   num_cgroups   enabled
cpuset          0           158           1
cpu             0           158           1
cpuacct         0           158           1
blkio           0           158           1
memory          0           158           1
devices         0           158           1
freezer         0           158           1
net_cls         0           158           1
perf_event      0           158           1
net_prio        0           158           1
hugetlb         0           158           1
pids            0           158           1
rdma            0           158           1
```
@Foxboron it could be, but I do not remember any customization I may have made.
@Foxboron thanks for confirming at your end.
@X-dark Just to keep things on the page: rootless works, root fails, and `cpu cpuset` are automatically removed from `cgroup.subtree_control` as soon as you invoke crun from the root user?

@flouthoc that is a good summary, yes
@X-dark does it work fine if you use `sudo <non-root>`?

`sudo -u cedric podman run docker.io/alpine:latest ls` works fine; `sudo podman run docker.io/alpine:latest ls` fails.
@X-dark try this edit: `sudo vi /etc/default/grub`, add the line `GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"`, then run `sudo update-grub`. If your system boots via `/boot/cmdline.txt` instead, edit that file and add `cgroup_enable=cpu cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1`. Restart the system after these steps and then check. I am not sure, but the problem could also be related to an RT process, as stated here: https://github.com/lxc/lxc/issues/3545#issue-714101025
Good catch. I stopped mpd, which runs as realtime, and it started to work as root (without any other change or reboot).
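For anyone else hitting this, a quick way to spot processes running under a realtime scheduling policy (as `mpd` was here) is to list scheduling classes. This is my own sketch, not a command from the thread:

```shell
# Print the header plus any task whose scheduling class is realtime:
# FF = SCHED_FIFO, RR = SCHED_RR (normal timesharing tasks show TS).
ps -eo pid,cls,rtprio,comm | awk 'NR == 1 || $2 == "FF" || $2 == "RR"'
```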
@X-dark cheers!!! @giuseppe do you think we can document this in the crun manuals?
Yes, definitely it is something we can document
@Foxboron @X-dark Could you close this issue if it is resolved? I also think the downstream bug can be closed: https://bugs.archlinux.org/task/71560. @X-dark feel free to raise a PR against the crun manuals; otherwise I'll raise it myself.
@flouthoc Yep, thanks for the help!
PR opened. Closing this issue. Thanks @flouthoc
I just hit this on my home system after a reboot: I could not run podman (4.3.1) as root, with the above-mentioned error. It turns out I had started `mpd` before running podman. It cost me a lot of time to track this issue down (and, in fact, killing `mpd` fixed the problem; I was able to start `mpd` later, after having run podman as root).

I think a friendly error message (podman, crun, whatever) would be much, much appreciated.
I am running into a similar issue when I reboot my system. Here are some details; note that mpd is not even installed on the system.

`podman start <container>` fails after the system reboots, and I am not able to figure out why. It throws this:

```
Error: OCI runtime error: unable to start container "15b6e875dc79d0bdc6976347a2c0e20c28ef58b4e07396434502f7224875a028": writing file `/sys/fs/cgroup/cgroup.subtree_control`: Invalid argument
```
Observations: before the reboot, the contents of `cgroup.subtree_control` are:

```
$ cat /sys/fs/cgroup/cgroup.subtree_control
cpuset cpu io memory hugetlb pids rdma misc
```

After the reboot, `cpuset` is missing:

```
$ cat /sys/fs/cgroup/cgroup.subtree_control
cpu io memory hugetlb pids rdma misc
```

When I try to write it back, it fails:

```
# echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
-bash: echo: write error: Invalid argument
```
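A sanity check worth running at this point (my own suggestion, not something from the thread): confirm that `cpuset` is still offered by the kernel at that level, since a controller can only be added to `cgroup.subtree_control` if it appears in `cgroup.controllers` there:

```shell
# Compare what the kernel offers here vs. what is delegated to children.
ctrl=/sys/fs/cgroup/cgroup.controllers
if [ -r "$ctrl" ]; then
    if grep -qw cpuset "$ctrl"; then
        echo "cpuset available at the root cgroup"
    else
        echo "cpuset NOT available at the root cgroup"
    fi
    cat /sys/fs/cgroup/cgroup.subtree_control
fi
```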
Also, the cgroup mounts before and after the reboot.

Before:

```
$ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
```

After:

```
$ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
```
I even added `GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"` to the grub config and rebooted, but no luck.
I am out of ideas at this point. Any help would be greatly appreciated.
this could be the reason: https://lore.kernel.org/io-uring/CA+wXwBQwgxB3_UphSny-yAP5b26meeOu1W4TwYVcD_+5gOhvPw@mail.gmail.com/
Did you get any reply on this? What was the workaround?
I have not reported it
Arch Linux recently switched the runtime for Podman from runc to crun. With the switch to crun, I cannot create any container. I get the following error:
Podman info
/sys/fs/cgroup content
trace run of podman
``` podman --log-level trace run docker.io/alpine:latest INFO[0000] podman filtering at log level trace DEBU[0000] Called run.PersistentPreRunE(podman --log-level trace run docker.io/alpine:latest) TRAC[0000] Reading configuration file "/usr/share/containers/containers.conf" DEBU[0000] Merged system config "/usr/share/containers/containers.conf" TRAC[0000] &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.38.11 Annotations:[] CgroupNS:private Cgroups:enabled DefaultCapabilities:[CHOWN DAC_OVERRIDE FOWNER FSETID KILL NET_BIND_SERVICE SETFCAP SETGID SETPCAP SETUID SYS_CHROOT] DefaultSysctls:[net.ipv4.ping_group_range=0 0] DefaultUlimits:[nproc=4194304:4194304] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:bridge NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile: ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/libpod/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageBuildFormat:oci ImageDefaultTransport:docker:// ImageParallelCopies:0 ImageDefaultFormat: InfraCommand: InfraImage:k8s.gcr.io/pause:3.5 InitPath:/usr/libexec/podman/catatonit LockType:shm MachineEnabled:false MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun 
OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc] runsc:[/usr/bin/runsc /usr/sbin/runsc /usr/local/bin/runsc /usr/local/sbin/runsc /bin/runsc /sbin/runsc /run/current-system/sw/bin/runsc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc kata runsc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/var/lib/containers/storage/libpod StopTimeout:10 TmpDir:/run/libpod VolumePath:/var/lib/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman DefaultSubnet:10.88.0.0/16 NetworkConfigDir:/etc/cni/net.d/}} TRAC[0000] Reading configuration file "/etc/containers/containers.conf" DEBU[0000] Merged system config "/etc/containers/containers.conf" TRAC[0000] &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.38.11 Annotations:[] CgroupNS:private Cgroups:enabled DefaultCapabilities:[CHOWN DAC_OVERRIDE FOWNER FSETID KILL NET_BIND_SERVICE SETFCAP SETGID SETPCAP SETUID SYS_CHROOT] DefaultSysctls:[net.ipv4.ping_group_range=0 0] DefaultUlimits:[nproc=4194304:4194304] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true 
EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:bridge NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile: ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/libpod/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageBuildFormat:oci ImageDefaultTransport:docker:// ImageParallelCopies:0 ImageDefaultFormat: InfraCommand: InfraImage:k8s.gcr.io/pause:3.5 InitPath:/usr/libexec/podman/catatonit LockType:shm MachineEnabled:false MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc] runsc:[/usr/bin/runsc /usr/sbin/runsc /usr/local/bin/runsc /usr/local/sbin/runsc /bin/runsc /sbin/runsc /run/current-system/sw/bin/runsc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc kata runsc] 
RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/var/lib/containers/storage/libpod StopTimeout:10 TmpDir:/run/libpod VolumePath:/var/lib/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman DefaultSubnet:10.88.0.0/16 NetworkConfigDir:/etc/cni/net.d/}} DEBU[0000] Using conmon: "/usr/bin/conmon" DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db DEBU[0000] Overriding run root "/run/containers/storage" with "/var/run/containers/storage" from database DEBU[0000] Overriding tmp dir "/run/libpod" with "/var/run/libpod" from database DEBU[0000] Using graph driver overlay DEBU[0000] Using graph root /var/lib/containers/storage DEBU[0000] Using run root /var/run/containers/storage DEBU[0000] Using static dir /var/lib/containers/storage/libpod DEBU[0000] Using tmp dir /var/run/libpod DEBU[0000] Using volume path /var/lib/containers/storage/volumes DEBU[0000] Set libpod namespace to "" DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is being used DEBU[0000] cached value indicated that native-diff is not being used INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true DEBU[0000] Initializing event backend journald TRAC[0000] found runtime "" DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI 
runtime runc: invalid argument DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument DEBU[0000] Using OCI runtime "/usr/bin/crun" INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist DEBU[0000] Default CNI network name podman is unchangeable INFO[0000] Setting parallel job count to 13 DEBU[0000] Pulling image docker.io/alpine:latest (policy: missing) DEBU[0000] Looking up image "docker.io/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/alpine:latest" ... DEBU[0000] Trying "docker.io/library/alpine:latest" ... DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Looking up image "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/library/alpine:latest" ... DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/library/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Looking up image "docker.io/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/alpine:latest" ... DEBU[0000] Trying "docker.io/library/alpine:latest" ... 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Inspecting image d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Looking up image "docker.io/alpine:latest" in local containers storage DEBU[0000] Trying "docker.io/alpine:latest" ... DEBU[0000] Trying "docker.io/library/alpine:latest" ... 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Found image "docker.io/alpine:latest" as "docker.io/library/alpine:latest" in local containers storage DEBU[0000] Inspecting image d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] Inspecting image d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 DEBU[0000] using systemd mode: false DEBU[0000] No hostname set; container's hostname will default to runtime default DEBU[0000] Loading seccomp profile from "/etc/containers/seccomp.json" DEBU[0000] Allocated lock 7 for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] exporting opaque data as blob "sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83" DEBU[0000] created container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" DEBU[0000] container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" has work directory "/var/lib/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata" DEBU[0000] container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" has run directory 
"/var/run/containers/storage/overlay-containers/99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba/userdata" DEBU[0000] Not attaching to stdin DEBU[0000] Made network namespace at /run/netns/cni-0f1200c2-897c-f4cb-72f8-46aef3438827 for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is being used DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true DEBU[0000] overlay: mount_data=nodev,lowerdir=/var/lib/containers/storage/overlay/l/7UAIWVTHEN62XANJRDPLV24XWD,upperdir=/var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/diff,workdir=/var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/work DEBU[0000] mounted container "99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba" at "/var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/merged" DEBU[0000] Created root filesystem for container 99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba at /var/lib/containers/storage/overlay/3a07f591d89574dba11360c63b29afa637fceb5ffa957449c3a430b325e6c973/merged INFO[0000] Got pod network &{Name:stoic_poitras Namespace:stoic_poitras ID:99133a02abff270a8c623d02584432fcf2a79075ab6ebf0c5993644171aba4ba NetNS:/run/netns/cni-0f1200c2-897c-f4cb-72f8-46aef3438827 Networks:[{Name:podman Ifname:eth0}] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:
```