Closed: hurricanehrndz closed this issue 4 years ago
systemctl status user@1000.service
● user@1000.service - User Manager for UID 1000
Loaded: loaded (/lib/systemd/system/user@.service; static; vendor preset: enabled)
Drop-In: /lib/systemd/system/user@.service.d
└─timeout.conf
Active: active (running) since Sun 2019-10-27 19:31:51 MDT; 1h 25min ago
Main PID: 1636 (systemd)
Status: "Startup finished in 218ms."
Tasks: 71
CGroup: /user.slice/user-1000.slice/user@1000.service
├─at-spi-dbus-bus.service
│ ├─2297 /usr/lib/at-spi2-core/at-spi-bus-launcher
│ ├─2310 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
│ └─7225 /usr/lib/at-spi2-core/at-spi2-registryd --use-gnome-session
├─dbus.service
│ ├─2098 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
│ ├─2253 /usr/lib/x86_64-linux-gnu/xfce4/xfconf/xfconfd
│ ├─2357 /usr/lib/dconf/dconf-service
│ └─2368 /usr/lib/x86_64-linux-gnu/gconf/gconfd-2
├─gpg-agent.service
│ ├─1699 /usr/bin/gpg-agent --supervised
│ └─5519 scdaemon --multi-server
├─gvfs-afc-volume-monitor.service
│ └─2466 /usr/lib/gvfs/gvfs-afc-volume-monitor
├─gvfs-daemon.service
│ ├─2340 /usr/lib/gvfs/gvfsd
│ └─2345 /usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
├─gvfs-goa-volume-monitor.service
│ └─2460 /usr/lib/gvfs/gvfs-goa-volume-monitor
├─gvfs-gphoto2-volume-monitor.service
│ └─2471 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
├─gvfs-mtp-volume-monitor.service
│ └─2456 /usr/lib/gvfs/gvfs-mtp-volume-monitor
├─gvfs-udisks2-volume-monitor.service
│ └─2417 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
├─indicator-application.service
│ └─7309 /usr/lib/x86_64-linux-gnu/indicator-application/indicator-application-service
├─indicator-messages.service
│ └─7310 /usr/lib/x86_64-linux-gnu/indicator-messages/indicator-messages-service
├─init.scope
│ ├─1636 /lib/systemd/systemd --user
│ └─1641 (sd-pam)
├─obex.service
│ └─2548 /usr/lib/bluetooth/obexd
├─xfce4-notifyd.service
│ └─7254 /usr/lib/x86_64-linux-gnu/xfce4/notifyd/xfce4-notifyd
├─zeitgeist-fts.service
│ └─2435 /usr/lib/zeitgeist/zeitgeist/zeitgeist-fts
└─zeitgeist.service
├─2424 /usr/bin/zeitgeist-daemon
└─2434 zeitgeist-datahub
Oct 27 20:00:27 XPS9360 dbus-daemon[2098]: [session uid=1000 pid=2098] Successfully activated service 'org.freedesktop.Notifications'
Oct 27 20:00:27 XPS9360 systemd[1636]: Started XFCE notifications service.
Oct 27 20:00:27 XPS9360 org.freedesktop.thumbnails.Thumbnailer1[2098]: Registered thumbailer gnome-thumbnail-font --size %s %u %o
Oct 27 20:00:27 XPS9360 org.freedesktop.thumbnails.Thumbnailer1[2098]: Registered thumbailer atril-thumbnailer -s %s %u %o
Oct 27 20:00:27 XPS9360 org.freedesktop.thumbnails.Thumbnailer1[2098]: Registered thumbailer /usr/bin/gdk-pixbuf-thumbnailer -s %s %u %o
Oct 27 20:00:27 XPS9360 org.freedesktop.thumbnails.Thumbnailer1[2098]: Registered thumbailer /usr/bin/gdk-pixbuf-thumbnailer -s %s %u %o
Oct 27 20:00:28 XPS9360 dbus-daemon[2098]: [session uid=1000 pid=2098] Successfully activated service 'org.freedesktop.thumbnails.Thumbnailer1'
Oct 27 20:00:28 XPS9360 systemd[1636]: Started Indicator Application Service.
Oct 27 20:00:28 XPS9360 systemd[1636]: Started Indicator Messages Service.
Oct 27 20:03:55 XPS9360 gpg-agent[1699]: DBG: detected card with S/N D2760001240102010006041544460000
systemd-cgls output
Control group /:
-.slice
├─722 bpfilter_umh
├─user.slice
│ └─user-1000.slice
│ ├─user@1000.service
│ │ ├─gvfs-goa-volume-monitor.service
│ │ │ └─2460 /usr/lib/gvfs/gvfs-goa-volume-monitor
│ │ ├─zeitgeist.service
│ │ │ ├─2424 /usr/bin/zeitgeist-daemon
│ │ │ └─2434 zeitgeist-datahub
│ │ ├─gvfs-daemon.service
│ │ │ ├─2340 /usr/lib/gvfs/gvfsd
│ │ │ └─2345 /usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
│ │ ├─gvfs-udisks2-volume-monitor.service
│ │ │ └─2417 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
│ │ ├─xfce4-notifyd.service
│ │ │ └─7254 /usr/lib/x86_64-linux-gnu/xfce4/notifyd/xfce4-notifyd
│ │ ├─init.scope
│ │ │ ├─1636 /lib/systemd/systemd --user
│ │ │ └─1641 (sd-pam)
│ │ ├─gpg-agent.service
│ │ │ ├─1699 /usr/bin/gpg-agent --supervised
│ │ │ └─5519 scdaemon --multi-server
│ │ ├─zeitgeist-fts.service
│ │ │ └─2435 /usr/lib/zeitgeist/zeitgeist/zeitgeist-fts
│ │ ├─gvfs-gphoto2-volume-monitor.service
│ │ │ └─2471 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
│ │ ├─obex.service
│ │ │ └─2548 /usr/lib/bluetooth/obexd
│ │ ├─at-spi-dbus-bus.service
│ │ │ ├─2297 /usr/lib/at-spi2-core/at-spi-bus-launcher
│ │ │ ├─2310 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2...
│ │ │ └─7225 /usr/lib/at-spi2-core/at-spi2-registryd --use-gnome-session
│ │ ├─indicator-messages.service
│ │ │ └─7310 /usr/lib/x86_64-linux-gnu/indicator-messages/indicator-message...
│ │ ├─indicator-application.service
│ │ │ └─7309 /usr/lib/x86_64-linux-gnu/indicator-application/indicator-appl...
│ │ ├─dbus.service
│ │ │ ├─2098 /usr/bin/dbus-daemon --session --address=systemd: --nofork --n...
│ │ │ ├─2253 /usr/lib/x86_64-linux-gnu/xfce4/xfconf/xfconfd
│ │ │ ├─2357 /usr/lib/dconf/dconf-service
│ │ │ └─2368 /usr/lib/x86_64-linux-gnu/gconf/gconfd-2
│ │ ├─gvfs-mtp-volume-monitor.service
│ │ │ └─2456 /usr/lib/gvfs/gvfs-mtp-volume-monitor
│ │ └─gvfs-afc-volume-monitor.service
│ │ └─2466 /usr/lib/gvfs/gvfs-afc-volume-monitor
│ ├─session-c4.scope
│ │ ├─ 7039 lightdm --session-child 12 19
│ │ ├─ 7050 /usr/bin/gnome-keyring-daemon --daemonize --login
│ │ ├─ 7054 /bin/sh /etc/xdg/xfce4/xinitrc -- /etc/X11/xinit/xserverrc
│ │ ├─ 7178 xfce4-session
│ │ ├─ 7186 xfwm4 --replace
│ │ ├─ 7190 xfce4-panel
│ │ ├─ 7192 Thunar --daemon
│ │ ├─ 7194 xfsettingsd
│ │ ├─ 7195 xfdesktop
│ │ ├─ 7197 /usr/bin/python3 /usr/bin/ulauncher --hide-window --hide-window
│ │ ├─ 7198 /usr/bin/python2 /usr/bin/dockx
│ │ ├─ 7207 /usr/bin/xcape -e Super_L Alt_L F1 Super_R Alt_L F1
│ │ ├─ 7212 /usr/bin/python3 /usr/bin/blueman-applet
│ │ ├─ 7213 /usr/bin/python3 /usr/share/system-config-printer/applet.py
│ │ ├─ 7216 xfce4-power-manager
│ │ ├─ 7217 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1
│ │ ├─ 7221 compton -b
│ │ ├─ 7230 nm-applet
│ │ ├─ 7241 light-locker
│ │ ├─ 7265 /usr/lib/x86_64-linux-gnu/xfce4/panel/wrapper-2.0 /usr/lib/x86_...
│ │ ├─ 7268 /usr/lib/x86_64-linux-gnu/xfce4/panel/wrapper-1.0 /usr/lib/x86_...
│ │ ├─ 7270 /usr/lib/x86_64-linux-gnu/xfce4/panel/wrapper-2.0 /usr/lib/x86_...
│ │ ├─ 7273 /usr/lib/x86_64-linux-gnu/xfce4/panel/wrapper-2.0 /usr/lib/x86_...
│ │ ├─ 7279 /usr/lib/x86_64-linux-gnu/xfce4/panel/wrapper-2.0 /usr/lib/x86_...
│ │ ├─ 7285 /usr/lib/x86_64-linux-gnu/xfce4/panel/wrapper-2.0 /usr/lib/x86_...
│ │ ├─ 7423 /bin/sh -c tilix
│ │ ├─ 7424 tilix
│ │ ├─ 7590 /bin/zsh
│ │ ├─ 8792 /bin/zsh
│ │ ├─ 9203 ssh: /home/hurricanehrndz/.ssh/master-hurricanehrndz@ryzen-dev-...
│ │ ├─ 9576 /bin/sh -c firefox
│ │ ├─ 9577 /usr/lib/firefox/firefox
│ │ ├─ 9698 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser...
│ │ ├─ 9757 /home/hurricanehrndz/.mozilla/native-messaging-hosts/nplastpass...
│ │ ├─ 9769 /usr/lib/firefox/firefox -contentproc -childID 3 -isForBrowser...
│ │ ├─10028 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser...
│ │ ├─10588 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser...
│ │ ├─10670 /usr/lib/virtualbox/VirtualBox
│ │ ├─10688 /usr/lib/virtualbox/VBoxXPCOMIPCD
│ │ ├─10693 /usr/lib/virtualbox/VBoxSVC --auto-shutdown
│ │ ├─10826 /usr/lib/firefox/firefox -contentproc -childID 9 -isForBrowser...
│ │ ├─13548 /usr/lib/firefox/firefox -contentproc -childID 12 -isForBrowser...
│ │ ├─13740 /usr/lib/firefox/firefox -contentproc -childID 13 -isForBrowser...
│ │ ├─13890 ssh hurricanehrndz@ryzen-dev-vm01
│ │ ├─15625 /usr/lib/firefox/firefox -contentproc -childID 14 -isForBrowser...
│ │ ├─15706 /usr/lib/firefox/firefox -contentproc -childID 15 -isForBrowser...
│ │ ├─15758 /bin/zsh
│ │ ├─17212 systemd-cgls
│ │ └─17213 xclip -i -selection clipboard
│ └─session-c2.scope
│ ├─2287 /usr/bin/python3 /usr/share/system-config-printer/applet.py
│ ├─2376 /usr/bin/pulseaudio --start --log-target=syslog
│ └─2944 podman
├─init.scope
│ └─1 /sbin/init splash
└─system.slice
├─irqbalance.service
│ └─1497 /usr/sbin/irqbalance --foreground
├─packagekit.service
│ └─12923 /usr/lib/packagekit/packagekitd
├─systemd-udevd.service
│ └─845 /lib/systemd/systemd-udevd
├─whoopsie.service
│ └─1998 /usr/bin/whoopsie -f
├─cron.service
│ └─1498 /usr/sbin/cron -f
├─nfs-mountd.service
│ └─1568 /usr/sbin/rpc.mountd --manage-gids
├─thermald.service
│ └─1484 /usr/sbin/thermald --no-daemon --dbus-enable
├─polkit.service
│ └─1536 /usr/lib/policykit-1/polkitd --no-debug
├─networkd-dispatcher.service
│ └─1505 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
├─rtkit-daemon.service
│ └─2377 /usr/lib/rtkit/rtkit-daemon
├─bluetooth.service
│ └─1455 /usr/lib/bluetooth/bluetoothd
├─accounts-daemon.service
│ └─1460 /usr/lib/accountsservice/accounts-daemon
├─wpa_supplicant.service
│ └─1496 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
├─libvirtd.service
│ └─1598 /usr/sbin/libvirtd
├─lightdm.service
│ ├─1614 /usr/sbin/lightdm
│ └─6913 /usr/lib/xorg/Xorg -core :0 -seat seat0 -auth /var/run/lightdm/roo...
├─ModemManager.service
│ └─1499 /usr/sbin/ModemManager --filter-policy=strict
├─vmware-USBArbitrator.service
│ └─1469 /usr/bin/vmware-usbarbitrator
├─systemd-journald.service
│ └─696 /lib/systemd/systemd-journald
├─unattended-upgrades.service
│ └─1637 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade...
├─ssh.service
│ └─1569 /usr/sbin/sshd -D
├─colord.service
│ └─2542 /usr/lib/colord/colord
├─NetworkManager.service
│ ├─1493 /usr/sbin/NetworkManager --no-daemon
│ ├─7343 /sbin/dhclient -d -q -sf /usr/lib/NetworkManager/nm-dhcp-helper -p...
│ └─7428 /sbin/dhclient -d -q -6 -N -sf /usr/lib/NetworkManager/nm-dhcp-hel...
├─snapd.service
│ └─1506 /usr/lib/snapd/snapd
├─uuidd.service
│ └─5104 /usr/sbin/uuidd --socket-activation
├─nfs-blkmap.service
│ └─723 /usr/sbin/blkmapd
├─rsyslog.service
│ └─1500 /usr/sbin/rsyslogd -n
├─rpcbind.service
│ └─1313 /sbin/rpcbind -f -w
├─kerneloops.service
│ ├─2011 /usr/sbin/kerneloops --test
│ └─2016 /usr/sbin/kerneloops
├─nfs-idmapd.service
│ └─1297 /usr/sbin/rpc.idmapd
├─teamviewerd.service
│ └─2022 /opt/teamviewer/tv_bin/teamviewerd -d
├─cups-browsed.service
│ └─1537 /usr/sbin/cups-browsed
├─lvm2-lvmetad.service
│ └─729 /sbin/lvmetad -f
├─cups.service
│ ├─1501 /usr/sbin/cupsd -l
│ └─2507 /usr/lib/cups/notifier/dbus dbus://
├─upower.service
│ └─1975 /usr/lib/upower/upowerd
├─systemd-resolved.service
│ └─1308 /lib/systemd/systemd-resolved
├─udisks2.service
│ └─1458 /usr/lib/udisks2/udisksd
├─acpid.service
│ └─1504 /usr/sbin/acpid
├─dbus.service
│ └─1485 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidf...
├─systemd-timesyncd.service
│ └─1312 /lib/systemd/systemd-timesyncd
├─system-getty.slice
│ └─getty@tty1.service
│ └─1682 /sbin/agetty -o -p -- \u --noclear tty1 linux
├─avahi-daemon.service
│ ├─1451 avahi-daemon: running [XPS9360.local]
│ └─1466 avahi-daemon: chroot helper
└─systemd-logind.service
└─1456 /lib/systemd/systemd-logind
While the container is in the failed state, can you please run this command from another terminal:
podman exec -l -ti cat /proc/self/cgroup
What is its output?
When running this:
podman run -it --systemd=true --privileged --log-level=debug quay.io/samdoran/fedora30-ansible:latest
cat /proc/self/cgroup
I get the following output:
[root@a4fa7cd73865 /]# cat /proc/self/cgroup
12:pids:/user.slice/user-1000.slice/session-c2.scope
11:memory:/user/hurricanehrndz/0/a4fa7cd7386568e2e4f8c5b83e75d52ad2e2a0de7152ceb7c6c5f4e763b9a231
10:perf_event:/
9:cpuset:/
8:freezer:/user/hurricanehrndz/0/a4fa7cd7386568e2e4f8c5b83e75d52ad2e2a0de7152ceb7c6c5f4e763b9a231
7:rdma:/
6:net_cls,net_prio:/
5:devices:/user.slice
4:hugetlb:/
3:cpu,cpuacct:/user.slice
2:blkio:/user.slice
1:name=systemd:/user.slice/user-1000.slice/session-c2.scope/a4fa7cd7386568e2e4f8c5b83e75d52ad2e2a0de7152ceb7c6c5f4e763b9a231
0::/user.slice/user-1000.slice/session-c2.scope
This is the output from the working system:
12:rdma:/
11:perf_event:/
10:freezer:/user/hurricanehrndz/0/c8b3b66129f68f191353b39eb88a2a7356008fc36984308ad5c5bff35ea3d47f
9:memory:/
8:cpuset:/
7:cpu,cpuacct:/
6:devices:/user.slice
5:net_cls,net_prio:/
4:hugetlb:/
3:blkio:/
2:pids:/user.slice/user-1000.slice/session-2.scope
1:name=systemd:/user.slice/user-1000.slice/session-2.scope/c8b3b66129f68f191353b39eb88a2a7356008fc36984308ad5c5bff35ea3d47f
0::/user.slice/user-1000.slice/session-2.scope
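To make the difference between the two captures easier to see, a small parser can turn each /proc/self/cgroup dump into a controller-to-path map (a sketch; the sample lines below are copied from the outputs above, trimmed to the controllers that differ):

```python
def parse_cgroup(dump: str) -> dict:
    """Map each cgroup controller list (e.g. 'cpu,cpuacct', 'name=systemd',
    or '' for the v2 unified entry) to its cgroup path.

    Each line of /proc/self/cgroup has the form 'hierarchy-id:controllers:path'.
    """
    mapping = {}
    for line in dump.strip().splitlines():
        _, controllers, path = line.split(":", 2)
        mapping[controllers] = path
    return mapping

failing = parse_cgroup("""\
12:pids:/user.slice/user-1000.slice/session-c2.scope
3:cpu,cpuacct:/user.slice
2:blkio:/user.slice
""")
working = parse_cgroup("""\
7:cpu,cpuacct:/
3:blkio:/
2:pids:/user.slice/user-1000.slice/session-2.scope
""")

# The failing host confines cpu,cpuacct and blkio to /user.slice, while the
# working host leaves those controllers at the root of their hierarchies.
for ctrl in ("cpu,cpuacct", "blkio"):
    print(ctrl, failing[ctrl], "vs", working[ctrl])
```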
This seems like a similar issue to this one: https://github.com/opencontainers/runc/issues/892
I would like to try deleting the user.slice cgroup from the cpu,cpuacct controller. I have tried cgdelete, but that doesn't seem to work; any suggestions?
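For context on why cgdelete typically appears to do nothing here: a cgroup v1 directory can only be removed once it contains no tasks and no child cgroups, and cpu,cpuacct/user.slice holds both while any user session is active. A tiny sketch of that rule (the function and its arguments are illustrative, not a real API):

```python
def cgroup_removable(task_pids, child_cgroups):
    """A cgroup v1 directory can be rmdir'ed (which is what cgdelete
    ultimately does) only when its cgroup.procs file lists no tasks and
    it has no child cgroup directories."""
    return not task_pids and not child_cgroups

# user.slice on a live session still hosts user-1000.slice underneath it:
print(cgroup_removable(task_pids=[], child_cgroups=["user-1000.slice"]))  # False
print(cgroup_removable(task_pids=[], child_cgroups=[]))                   # True
```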
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
The issue is still present. If more information is required to assist resolution, please advise. Otherwise, please document exactly which permissions are required within the cgroup mounts.
By the way, the thread does include one solution, which was never confirmed as adopted: adding libpam-cgfs as a dependency of the deb package.
Are you using the systemd cgroup driver, or cgroupfs? It seems like it would make sense for this to be a requirement if cgroupfs was in use, but if we're using systemd I don't think it should be required.
Assuming it is cgroupfs, it seems painless enough to add a requirement to our PPA builds.
Ubuntu seems to default to cgroupfs. Both systems, working and non-working, run cgroupfs, and both report cgroup version 1.
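For anyone else trying to confirm which cgroup version a host runs: on a unified (v2) hierarchy the mount root exposes a cgroup.controllers file, while v1/hybrid setups do not. A minimal check (the function name and default path are my own; parsing /proc/self/mountinfo would be more robust):

```python
import tempfile
from pathlib import Path

def cgroup_version(root: Path = Path("/sys/fs/cgroup")) -> int:
    """Heuristic: cgroup v2 exposes cgroup.controllers at the mount root;
    a v1 (or hybrid) mount instead has per-controller subdirectories."""
    return 2 if (root / "cgroup.controllers").is_file() else 1

# Demonstrate against a throwaway directory rather than the live host:
with tempfile.TemporaryDirectory() as d:
    fake_root = Path(d)
    before = cgroup_version(fake_root)                 # no marker file -> v1
    (fake_root / "cgroup.controllers").write_text("cpu memory pids\n")
    after = cgroup_version(fake_root)                  # marker present -> v2
print(before, after)  # 1 2
```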
@lsm5 Poke - can we get libpam-cgfs added to the PPA builds as a dependency?
@mheon If you want to close the ticket after that, I'm okay with it. Most people who have encountered this issue say it is resolved once they perform a system reinstall. I hate such solutions, but it seems to be the only one in this case.
So I ran into this error, and I thought leaving this here would help.
Disclaimer: I'm new to cgroups.
You need to place your user into a cgroup before you can use your container. It isn't strictly related to this issue, but I thought those who come across this via a Google search could use this page to fix their cgroups. I used cgmanager.
Here's the hacky solution listed on the page:
$ sudo cgm create all me
$ sudo cgm chown all me $(id -u) $(id -g)
$ sudo cgm movepid all me $$
The container stopped freezing after that.
@FlashDaggerX As far as I can tell, cgm has been deprecated. My knowledge and understanding of cgroups is novice at best as well.
Interestingly enough, podman now seems to run the systemd container intermittently; see the log below:
podman run -it --log-level debug --rm quay.io/samdoran/fedora30-ansible:latest
DEBU[0000] using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/hurricanehrndz/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/hurricanehrndz/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/hurricanehrndz/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/hurricanehrndz/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
INFO[0000] running as rootless
DEBU[0000] using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/hurricanehrndz/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/hurricanehrndz/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/hurricanehrndz/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/hurricanehrndz/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]quay.io/samdoran/fedora30-ansible:latest"
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Using slirp4netns netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 0 for container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0001] created container "3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514"
DEBU[0001] container "3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514" has work directory "/home/hurricanehrndz/.local/share/containers/storage/vfs-containers/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/userdata"
DEBU[0001] container "3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514" has run directory "/run/user/1000/vfs-containers/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/userdata"
DEBU[0001] Creating new volume 283a0e8b706d9ba17199ba4d746bc5725f629110073605f6db0794b53b5dfe83 for container
DEBU[0001] Validating options for local driver
DEBU[0001] Creating new volume eb64d6a7910f4a6f7263bb4cd7bdfd465354840ae76020770de8309c806c47e2 for container
DEBU[0001] Validating options for local driver
DEBU[0001] Creating new volume c43d23645fc083792de824c6d22eff9295807f2c042e000b512f5867705ade03 for container
DEBU[0001] Validating options for local driver
DEBU[0001] New container created "3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514"
DEBU[0001] container "3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514" has CgroupParent "/libpod_parent/libpod-3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514"
DEBU[0001] Handling terminal attach
DEBU[0001] Made network namespace at /run/user/1000/netns/cni-1d7494dc-2ffe-5745-6893-4854fbdbbb13 for container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514
DEBU[0001] mounted container "3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514" at "/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/559270a429f5bf95d210cb02c62e6fefff5b8451565828b697c8f79a60c84e4a"
DEBU[0001] Copying up contents from container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 to volume 283a0e8b706d9ba17199ba4d746bc5725f629110073605f6db0794b53b5dfe83
DEBU[0001] Creating dest directory: /home/hurricanehrndz/.local/share/containers/storage/volumes/283a0e8b706d9ba17199ba4d746bc5725f629110073605f6db0794b53b5dfe83/_data
DEBU[0001] Calling TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/559270a429f5bf95d210cb02c62e6fefff5b8451565828b697c8f79a60c84e4a/run, /home/hurricanehrndz/.local/share/containers/storage/volumes/283a0e8b706d9ba17199ba4d746bc5725f629110073605f6db0794b53b5dfe83/_data)
DEBU[0001] TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/559270a429f5bf95d210cb02c62e6fefff5b8451565828b697c8f79a60c84e4a/run /home/hurricanehrndz/.local/share/containers/storage/volumes/283a0e8b706d9ba17199ba4d746bc5725f629110073605f6db0794b53b5dfe83/_data)
DEBU[0001] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-1d7494dc-2ffe-5745-6893-4854fbdbbb13 tap0
DEBU[0001] Copying up contents from container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 to volume eb64d6a7910f4a6f7263bb4cd7bdfd465354840ae76020770de8309c806c47e2
DEBU[0001] Copying up contents from container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 to volume c43d23645fc083792de824c6d22eff9295807f2c042e000b512f5867705ade03
DEBU[0001] Creating dest directory: /home/hurricanehrndz/.local/share/containers/storage/volumes/c43d23645fc083792de824c6d22eff9295807f2c042e000b512f5867705ade03/_data
DEBU[0001] Calling TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/559270a429f5bf95d210cb02c62e6fefff5b8451565828b697c8f79a60c84e4a/tmp, /home/hurricanehrndz/.local/share/containers/storage/volumes/c43d23645fc083792de824c6d22eff9295807f2c042e000b512f5867705ade03/_data)
DEBU[0001] TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/559270a429f5bf95d210cb02c62e6fefff5b8451565828b697c8f79a60c84e4a/tmp /home/hurricanehrndz/.local/share/containers/storage/volumes/c43d23645fc083792de824c6d22eff9295807f2c042e000b512f5867705ade03/_data)
DEBU[0001] Created root filesystem for container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 at /home/hurricanehrndz/.local/share/containers/storage/vfs/dir/559270a429f5bf95d210cb02c62e6fefff5b8451565828b697c8f79a60c84e4a
INFO[0001] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0001] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
DEBU[0001] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0001] Created OCI spec for container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 at /home/hurricanehrndz/.local/share/containers/storage/vfs-containers/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/userdata/config.json
DEBU[0001] /usr/bin/conmon messages will be logged to syslog
DEBU[0001] running conmon: /usr/bin/conmon args="[--api-version 1 -c 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 -u 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 -r /usr/sbin/runc -b /home/hurricanehrndz/.local/share/containers/storage/vfs-containers/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/userdata -p /run/user/1000/vfs-containers/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/userdata/pidfile -l k8s-file:/home/hurricanehrndz/.local/share/containers/storage/vfs-containers/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /run/user/1000/vfs-containers/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/hurricanehrndz/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514]"
DEBU[0001] Received: 8752
INFO[0001] Got Conmon PID as 8738
DEBU[0001] Created container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 in OCI runtime
DEBU[0001] Attaching to container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514
DEBU[0001] connecting to socket /run/user/1000/libpod/tmp/socket/3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514/attach
DEBU[0001] Starting container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 with command [/usr/sbin/init]
DEBU[0001] Received a resize event: {Width:274 Height:63}
DEBU[0001] Started container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514
DEBU[0001] Enabling signal proxying
systemd v241-8.git9ef65cb.fc30 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization docker.
Detected architecture x86-64.
Welcome to Fedora 30 (Container Image)!
Set hostname to <3181f60a0e17>.
Initializing machine ID from random generator.
Cannot determine cgroup we are running in: No medium found
Failed to allocate manager object: No medium found
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...
DEBU[0001] Cleaning up container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514
DEBU[0001] Tearing down network namespace at /run/user/1000/netns/cni-1d7494dc-2ffe-5745-6893-4854fbdbbb13 for container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514
DEBU[0001] unmounted container "3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514"
DEBU[0001] Successfully cleaned up container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514
DEBU[0001] Container 3181f60a0e1775bcde5e6df9146d7470eb613c34ea59d8ffb6b72f20f5c03514 storage is already unmounted, skipping...
DEBU[0001] [graphdriver] trying provided driver "vfs"
podman run -it --log-level debug --rm quay.io/samdoran/fedora30-ansible:latest
DEBU[0000] using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/hurricanehrndz/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/hurricanehrndz/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/hurricanehrndz/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/hurricanehrndz/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
INFO[0000] running as rootless
DEBU[0000] using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/hurricanehrndz/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/hurricanehrndz/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/hurricanehrndz/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/hurricanehrndz/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Initializing event backend journald
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] using runtime "/usr/sbin/runc"
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]quay.io/samdoran/fedora30-ansible:latest"
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Using slirp4netns netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 0 for container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f
DEBU[0000] parsed reference into "[vfs@/home/hurricanehrndz/.local/share/containers/storage+/run/user/1000]@dd7d735c48a518d3e1759691746fd074afef8cad787d9a70d5818946a46bc983"
DEBU[0001] created container "25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f"
DEBU[0001] container "25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f" has work directory "/home/hurricanehrndz/.local/share/containers/storage/vfs-containers/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/userdata"
DEBU[0001] container "25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f" has run directory "/run/user/1000/vfs-containers/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/userdata"
DEBU[0001] Creating new volume 0c2e5d077b7fe295522085ff0b95758380b8a514efd6a32420448f0d89f35d5a for container
DEBU[0001] Validating options for local driver
DEBU[0001] Creating new volume eb28b9ba5d9ca66bf0e198149f68e6b2c5282c8baaea48629ce813c39db0900f for container
DEBU[0001] Validating options for local driver
DEBU[0001] Creating new volume f86f1ad4fa8799fa80df1f46f7289ab972401d972dfbb468acbd5824774787a5 for container
DEBU[0001] Validating options for local driver
DEBU[0001] New container created "25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f"
DEBU[0001] container "25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f" has CgroupParent "/libpod_parent/libpod-25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f"
DEBU[0001] Handling terminal attach
DEBU[0001] Made network namespace at /run/user/1000/netns/cni-0b7bfd80-fe94-521d-ec48-8004e8dc9032 for container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f
DEBU[0001] mounted container "25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f" at "/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/f9804ab731f4d21344383e15b343813728722d39378fa8fcf02a375bb375fd6a"
DEBU[0001] Copying up contents from container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f to volume 0c2e5d077b7fe295522085ff0b95758380b8a514efd6a32420448f0d89f35d5a
DEBU[0001] Creating dest directory: /home/hurricanehrndz/.local/share/containers/storage/volumes/0c2e5d077b7fe295522085ff0b95758380b8a514efd6a32420448f0d89f35d5a/_data
DEBU[0001] Calling TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/f9804ab731f4d21344383e15b343813728722d39378fa8fcf02a375bb375fd6a/tmp, /home/hurricanehrndz/.local/share/containers/storage/volumes/0c2e5d077b7fe295522085ff0b95758380b8a514efd6a32420448f0d89f35d5a/_data)
DEBU[0001] TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/f9804ab731f4d21344383e15b343813728722d39378fa8fcf02a375bb375fd6a/tmp /home/hurricanehrndz/.local/share/containers/storage/volumes/0c2e5d077b7fe295522085ff0b95758380b8a514efd6a32420448f0d89f35d5a/_data)
DEBU[0001] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-0b7bfd80-fe94-521d-ec48-8004e8dc9032 tap0
DEBU[0001] Copying up contents from container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f to volume eb28b9ba5d9ca66bf0e198149f68e6b2c5282c8baaea48629ce813c39db0900f
DEBU[0001] Creating dest directory: /home/hurricanehrndz/.local/share/containers/storage/volumes/eb28b9ba5d9ca66bf0e198149f68e6b2c5282c8baaea48629ce813c39db0900f/_data
DEBU[0001] Calling TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/f9804ab731f4d21344383e15b343813728722d39378fa8fcf02a375bb375fd6a/run, /home/hurricanehrndz/.local/share/containers/storage/volumes/eb28b9ba5d9ca66bf0e198149f68e6b2c5282c8baaea48629ce813c39db0900f/_data)
DEBU[0001] TarUntar(/home/hurricanehrndz/.local/share/containers/storage/vfs/dir/f9804ab731f4d21344383e15b343813728722d39378fa8fcf02a375bb375fd6a/run /home/hurricanehrndz/.local/share/containers/storage/volumes/eb28b9ba5d9ca66bf0e198149f68e6b2c5282c8baaea48629ce813c39db0900f/_data)
DEBU[0001] Copying up contents from container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f to volume f86f1ad4fa8799fa80df1f46f7289ab972401d972dfbb468acbd5824774787a5
DEBU[0001] Created root filesystem for container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f at /home/hurricanehrndz/.local/share/containers/storage/vfs/dir/f9804ab731f4d21344383e15b343813728722d39378fa8fcf02a375bb375fd6a
INFO[0001] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0001] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
DEBU[0001] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0001] Created OCI spec for container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f at /home/hurricanehrndz/.local/share/containers/storage/vfs-containers/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/userdata/config.json
DEBU[0001] /usr/bin/conmon messages will be logged to syslog
DEBU[0001] running conmon: /usr/bin/conmon args="[--api-version 1 -c 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f -u 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f -r /usr/sbin/runc -b /home/hurricanehrndz/.local/share/containers/storage/vfs-containers/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/userdata -p /run/user/1000/vfs-containers/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/userdata/pidfile -l k8s-file:/home/hurricanehrndz/.local/share/containers/storage/vfs-containers/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /run/user/1000/vfs-containers/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/hurricanehrndz/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f]"
DEBU[0001] Received: 8886
INFO[0001] Got Conmon PID as 8872
DEBU[0001] Created container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f in OCI runtime
DEBU[0001] Attaching to container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f
DEBU[0001] connecting to socket /run/user/1000/libpod/tmp/socket/25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f/attach
DEBU[0001] Starting container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f with command [/usr/sbin/init]
DEBU[0001] Received a resize event: {Width:274 Height:63}
DEBU[0001] Started container 25af2c78f5f8c107b2bed7cd8351ee07a90bb7cec889522bd00bcd8f638aee1f
DEBU[0001] Enabling signal proxying
systemd v241-8.git9ef65cb.fc30 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization docker.
Detected architecture x86-64.
Welcome to Fedora 30 (Container Image)!
Set hostname to <25af2c78f5f8>.
Initializing machine ID from random generator.
File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ OK ] Reached target Slices.
[ OK ] Listening on Journal Socket.
[ OK ] Reached target Paths.
[ OK ] Reached target Swap.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Reached target Sockets.
[ OK ] Listening on Process Core Dump Socket.
[ OK ] Reached target Local File Systems.
Starting Create Volatile Files and Directories...
Starting Journal Service...
[ OK ] Started Create Volatile Files and Directories.
[ OK ] Started Journal Service.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Reached target Basic System.
[ OK ] Reached target Multi-User System.
I am going to close this issue, given the solutions mentioned above.
Note that the PPA has been deprecated in favor of OBS. You can follow the updated install instructions here: https://github.com/containers/libpod/blob/master/install.md#ubuntu
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Running systemd-enabled containers is inconsistent across Ubuntu 18.04 systems. First of all, podman requires libpam-cgfs, which the podman package does not list as a dependency.

Steps to reproduce the issue:

podman run -it --systemd=true --privileged --log-level=debug quay.io/samdoran/fedora30-ansible:latest
Describe the results you received:

Describe the results you expected:

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Working box is a VM on libvirt; the non-working one is a physical machine.