urbenlegend opened this issue 1 year ago
Thanks for reaching out, @urbenlegend.
@giuseppe, could this be a cleanup issue?
I can provide any additional logs as needed, @vrothberg. I never know how to debug these slow systemd shutdown issues. All I see is that libpod-conmon-<uuid>.scope is causing issues, and I have no idea how to identify which container is responsible or what it is hanging on. Happy to provide additional info if you give me some more debugging techniques!
Out of curiosity, could it be the health checks on my Jellyfin containers?
I think it is related to conmon ignoring SIGTERM.
And that is by design because it needs to wait for the container to terminate.
Is the container still running while conmon is running? Would it help to tweak the stop timeout for the containers?
The containers are still running when I shut down or reboot the computer. Shouldn't these containers be automatically turned off during shutdown? How do I check which container is hanging during shutdown?
How do I go about changing the stop timeout?
You can specify the stop timeout when you create them:
podman run --stop-timeout N
Is there a way to specify that via docker-compose, as that's what I am using to start these containers? The timeout by default is 10 seconds anyways and the delay is much longer than that, so I am not sure if that will solve anything.
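For reference, Compose files support a per-service stop_grace_period key that maps to the stop timeout; a minimal sketch (whether a given podman-compose/docker-compose setup honors it is worth verifying):

services:
  jellyfin:
    image: jellyfin/jellyfin
    # grace period between SIGTERM and SIGKILL on stop; Compose defaults to 10s
    stop_grace_period: 30s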
Is there a way to figure out which container is causing this specifically? There's a long ID in libpod-conmon-<some long id>.scope. Can I use that ID somehow to associate it with whatever container is hanging?
Also, this issue only occurs when the container has been running for a while. If I start the system and then restart after just a few minutes, it proceeds without delay.
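As it turns out just below, that long ID is simply the container ID, so it can be resolved directly; a minimal sketch, assuming rootful Podman (replace <id> with the ID from libpod-conmon-<id>.scope):

# list all containers with untruncated IDs and match the scope ID
sudo podman ps -a --no-trunc --format '{{.ID}} {{.Names}}' | grep <id>
# or resolve the container name straight from the ID
sudo podman inspect --format '{{.Name}}' <id>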
Okay, I don't know why it took me so long to figure out that the long libpod-conmon ID is just simply the container ID. I've identified that the two hanging conmon services belong to my Jellyfin containers. So perhaps the way to reproduce this is simply to run a Jellyfin container for a while and then reboot.
Here's the Jellyfin docker-compose I am using for reference:
version: "3.5"
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: 1000:1000
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /nas/Multimedia:/media:ro
    networks:
      jellyfin:
        aliases:
          - jellyfin
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    restart: always
networks:
  jellyfin:
    external: true
The container is running rootful via the podman daemon. /nas/Multimedia is a folder on my ZFS raidz pool.
Currently I am attempting to see whether disabling health checks on my Jellyfin containers will help with this issue. I've temporarily added
healthcheck:
  disable: true
to my docker-compose.yml and will report back whether this helps solve the conmon shutdown delay.
Disabling the health check made no difference. It seems like the Jellyfin containers are just preventing shutdown for some reason. This did not happen when I was using Docker.
@urbenlegend, can you share the journal logs? I wonder whether restart: always may play a role.
Sure I can definitely do that, but what journalctl command would you recommend to get you the best logs?
What I am interested in is figuring out whether the container is restarting on shutdown. So you may run sudo journalctl -r and then look for podman restart events.
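A concrete invocation along those lines, as a sketch (the grep pattern is just a starting point):

# previous boot's journal, newest entries first, hostname column dropped
sudo journalctl -r -b -1 --no-hostname | grep -E 'container restart|libpod-conmon'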
Journalctl logs from reboot initiation to system shutdown attached: shutdown_log.txt
Sep 05 00:59:47 systemd-logind[1156]: The system will reboot now!
[...]
Sep 05 00:59:50 systemd[1]: Stopped libpod-conmon-bcfae1eb7eeb0730b7d8b6e60e67fb0d88ef35dc8079f31acb928fa16556269a.scope.
The logs look good to me. Unless my eyes are deceiving me, it takes 3 seconds until the last container is stopped. That sounds like a perfectly reasonable time frame to me.
Strange, that reboot definitely hung for 90 seconds on one of the conmon scopes though.
Was it the very same reboot?
I started a reboot, it hung, then on the next boot I did a sudo journalctl -r --no-hostname -b -1 > log.txt to access the logs for the previous boot.
@urbenlegend, I am unable to reproduce. Did you try without restart: always?
I can only reproduce it after it has been running for a while. If I start up the machine and reboot immediately, this doesn't happen. You may have to let it run for a while before rebooting to get it to happen. Did you set up a Jellyfin media library as well? I am wondering if that has an impact here.
I'll do a test without restart: always and report back.
Okay, I tested without restart: always twice over the span of two days and could not reproduce the issue. Seems like restart: always does have a hand in causing the delayed shutdown.
So far we've made the following observations:
- The issue only occurs after the containers have been running for a while.
- The issue only occurs with restart: always.

It is still a puzzle to me, and I honestly do not know how to proceed. As pointed out in https://github.com/containers/podman/issues/19815#issuecomment-1706262957, the logs indicate that the containers are shut down within 3 seconds.
@giuseppe, in the logs I see a lot of "Stopping libcrun container...". Can we add the ID to the log for it to be more explicit?
Sep 05 00:59:47 systemd[1]: Stopping libcrun container...
Sep 05 00:59:47 systemd[1]: Stopping libcrun container...
Sep 05 00:59:47 systemd[1]: Stopping libcrun container...
Sep 05 00:59:47 jellyfin-jellyfin-1[14394]: [07:59:47] [INF] [2] Main: Running query planner optimizations in the database... This might take a while
Sep 05 00:59:47 jellyfin-jellyfin-1[14394]: [07:59:47] [INF] [2] Main: Received a SIGTERM signal, shutting down
Sep 05 00:59:47 nextcloud-db-1[2982]: 2023-09-05 7:59:47 0 [Note] InnoDB: FTS optimize thread exiting.
Sep 05 00:59:47 systemd[1]: Stopping libcrun container...
Sep 05 00:59:47 nextcloud-db-1[2982]: 2023-09-05 7:59:47 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
Sep 05 00:59:47 systemd[1]: Stopping libcrun container...
Sep 05 00:59:47 systemd[1]: Stopping libcrun container...
Sep 05 00:59:47 systemd[1]: Stopping libcrun container...
One thing I do begin to see is that the container does not handle SIGTERM well, leading to tmp_jellyfin_1 exited with code 143 (143 = 128 + SIGTERM). That will cause restarts until systemd nukes it with SIGKILL.
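For context, shells report death-by-signal as 128 plus the signal number, so 143 means the process was terminated by SIGTERM (15); a quick way to confirm the convention:

# the child shell is killed by SIGTERM; the parent reports 128 + 15 = 143
sh -c 'kill -TERM $$'; echo $?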
One thing I noticed while digging into the issue: the container gets restarted even after a podman kill. That is a Podman-only behavior. The container does not get restarted when using Docker.
That may explain the issue but needs more digging.
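A minimal way to poke at that observation (a sketch; the container name and image are arbitrary):

# start a throwaway container with restart=always, then kill it
sudo podman run -d --name restart-test --restart always docker.io/library/alpine sleep infinity
sudo podman kill restart-test
sleep 2
# with the behavior described above, the container shows up running again
sudo podman ps --filter name=restart-test
# cleanup
sudo podman rm -f restart-test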
Yet, what I also see in the logs:
Sep 05 00:59:51 systemd[1]: podman-restart.service: Consumed 2.577s CPU time.
Sep 05 00:59:51 systemd[1]: Stopped Podman Start All Containers With Restart Policy Set To Always.
Sep 05 00:59:51 systemd[1]: podman-restart.service: Deactivated successfully.
So the podman-restart.service does its job correctly and stops the containers with restart=always.
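To double-check what that unit does on a given system, a sketch:

# print the unit file and current state of the restart-policy service
systemctl cat podman-restart.service
systemctl status podman-restart.service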
A friendly reminder that this issue had no activity for 30 days.
@vrothberg I see that a fix for this issue has been merged. Which version of podman will this fix be in? Currently running 4.7.1.
It should be in v4.7.1
If you get a chance to test it, please report back whether the issue got fixed with Podman 4.7 @urbenlegend.
I've been testing it since it released 3-4 days ago. Restarted yesterday and I haven't run into the issue, but it was a bit sporadic before so I'll continue to do more testing for this week and report back.
Reopen if it happens again.
Thanks a ton for checking, @urbenlegend.
It's been about a week and I have not experienced the issue again. The fix worked! Thanks for the quick fix!
Excellent news. Thanks a lot for your help, @urbenlegend !
Can we reopen this issue? This is happening again in podman 4.7.2:
host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.8-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.8, commit: 00e08f4a9ca5420de733bf542b930ad58e1a7e7d'
  cpuUtilization:
    idlePercent: 99.31
    systemPercent: 0.35
    userPercent: 0.33
  cpus: 12
  databaseBackend: boltdb
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  freeLocks: 2009
  hostname: arch-nas
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.5.9-arch2-1
  linkmode: dynamic
  logDriver: journald
  memFree: 27080581120
  memTotal: 32991137792
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: /usr/lib/podman/aardvark-dns is owned by aardvark-dns 1.8.0-1
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: /usr/lib/podman/netavark is owned by netavark 1.8.0-1
    path: /usr/lib/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.11.1-1
    path: /usr/bin/crun
    version: |-
      crun version 1.11.1
      commit: 1084f9527c143699b593b44c23555fb3cc4ff2f3
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.2-1
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 17179865088
  swapTotal: 17179865088
  uptime: 0h 16m 12.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 7
    paused: 0
    running: 7
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 238342373376
  graphRootUsed: 36242833408
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 24
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.7.2
  Built: 1698787144
  BuiltTime: Tue Oct 31 14:19:04 2023
  GitCommit: 750b4c3a7c31f6573350f0b3f1b787f26e0fe1e3-dirty
  GoVersion: go1.21.3
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.2
Here are the logs on shutdown in reverse order: log_truncated.txt
I believe it got stuck on container 93e82863125a4c43314a0fc88011304eb41d58d62993d4a34dcea2e2b193c4e2, which I think is my Nextcloud app instance.
It would appear that this bug is now much harder to reproduce. I haven't been able to get it to happen again on my machine across 5 different reboots. The previous fix definitely had a major impact, but there may be some tiny corner cases still remaining, or maybe there's something wrong with the container itself?
I've just encountered the same behavior on a reboot, but with slightly different settings for the container, as it was set to restart unless-stopped instead of always. Podman v3.4.4, Ubuntu 22.04.3 LTS. The container was running the LinkAce bookmark manager: one app container, one db (MariaDB). Logs show it restarting once a second for 90 seconds before systemd finally sends SIGKILL to all processes and completes the reboot. It had just over a week of runtime before the reboot. It has happened a few times now, but I didn't previously dig into the logs or find this thread until today. I'll let it bake for a day and test again tomorrow to see how consistent the behavior is.
There is also another container on this host running PlantUML which was set to restart always and had no issues at all. Only LinkAce was in a restart loop which makes even less sense.
Let me know if there is anything I can provide to help diagnose the issue.
Dec 11 16:00:05 container-01 systemd[1]: Shutting down.
Dec 11 16:00:05 container-01 systemd[1]: Reached target System Power Off.
Dec 11 16:00:05 container-01 systemd[1]: Finished System Power Off.
Dec 11 16:00:05 container-01 systemd[1]: systemd-poweroff.service: Deactivated successfully.
Dec 11 16:00:05 container-01 systemd[1]: Reached target Late Shutdown Services.
Dec 11 16:00:05 container-01 systemd[1]: Reached target System Shutdown.
Dec 11 16:00:05 container-01 systemd[1]: machine.slice: Consumed 32min 11.341s CPU time.
Dec 11 16:00:05 container-01 systemd[1]: Removed slice Virtual Machine and Container Slice.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Consumed 1min 2.931s CPU time.
Dec 11 16:00:05 container-01 systemd[1]: Stopped libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Failed with result 'timeout'.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346954 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346952 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346951 (n/a) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346661 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346654 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346653 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346634 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346633 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346631 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346630 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 347018 (conmon) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 347017 (dnsmasq) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346628 (podman) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Killing process 346624 (conmon) with signal SIGKILL.
Dec 11 16:00:05 container-01 systemd[1]: libpod-conmon-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope: Stopping timed out. Killing.
Dec 11 16:00:05 container-01 dnsmasq[347017]: read /run/containers/cni/dnsname/linkace_default/addnhosts - 1 addresses
Dec 11 16:00:05 container-01 dnsmasq[347017]: using only locally-known addresses for dns.podman
Dec 11 16:00:05 container-01 dnsmasq[347017]: using nameserver 127.0.0.53#53
Dec 11 16:00:05 container-01 dnsmasq[347017]: reading /etc/resolv.conf
Dec 11 16:00:05 container-01 dnsmasq[347017]: using only locally-known addresses for dns.podman
Dec 11 16:00:05 container-01 dnsmasq[347017]: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 11 16:00:05 container-01 dnsmasq[347017]: started, version 2.86 cachesize 150
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(veth94bcbf20) entered forwarding state
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(veth94bcbf20) entered blocking state
Dec 11 16:00:05 container-01 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth94bcbf20: link becomes ready
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(veth94bcbf20) entered disabled state
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(veth94bcbf20) entered forwarding state
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(veth94bcbf20) entered blocking state
Dec 11 16:00:05 container-01 kernel: device veth94bcbf20 entered promiscuous mode
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(veth94bcbf20) entered disabled state
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(veth94bcbf20) entered blocking state
Dec 11 16:00:05 container-01 podman[346628]: 2023-12-11 16:00:05.631278052 +0000 UTC m=+0.390967179 container restart 0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a (image=docker.io/library/mariadb:10.7, name=linkace_db1, org.opencontainers.image.source=https://github.com/MariaDB/mariadb-docker, com.docker.compose.service=db, io.podman.compose.config-hash=bc468bb19fe2660f48063621aee9c32e4f0de4e3fc94faee86d109337d2aa802, org.opencontainers.image.documentation=https://hub.docker.com//mariadb/, org.opencontainers.image.version=10.7.8, com.docker.compose.project.working_dir=/opt/linkace, PODMAN_SYSTEMD_UNIT=podman-compose@linkace.service, com.docker.compose.project.config_files=docker-compose.yml, org.opencontainers.image.base.name=docker.io/library/ubuntu:focal, io.podman.compose.version=1.0.6, org.opencontainers.image.title=MariaDB Database, org.opencontainers.image.url=https://github.com/MariaDB/mariadb-docker, org.opencontainers.image.licenses=GPL-2.0, com.docker.compose.container-number=1, com.docker.compose.project=linkace, org.opencontainers.image.vendor=MariaDB Community, io.podman.compose.project=linkace, org.opencontainers.image.ref.name=ubuntu, org.opencontainers.image.authors=MariaDB Community, org.opencontainers.image.description=MariaDB Database for relational SQL)
Dec 11 16:00:05 container-01 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a-userdata-shm.mount: Deactivated successfully.
Dec 11 16:00:05 container-01 systemd[1]: run-netns-cni\x2d4b68e18f\x2d211e\x2ddfe6\x2d2e12\x2d429ee4f6fe59.mount: Deactivated successfully.
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(vethc22272fe) entered disabled state
Dec 11 16:00:05 container-01 kernel: device vethc22272fe left promiscuous mode
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(vethc22272fe) entered disabled state
Dec 11 16:00:05 container-01 systemd[1]: Requested transaction contradicts existing jobs: Transaction for libpod-0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a.scope/start is destructive (machine.slice has 'stop' job queued, but 'start' is included in transaction).
Dec 11 16:00:05 container-01 dnsmasq[346622]: read /run/containers/cni/dnsname/linkace_default/addnhosts - 1 addresses
Dec 11 16:00:05 container-01 dnsmasq[346622]: using only locally-known addresses for dns.podman
Dec 11 16:00:05 container-01 dnsmasq[346622]: using nameserver 127.0.0.53#53
Dec 11 16:00:05 container-01 dnsmasq[346622]: reading /etc/resolv.conf
Dec 11 16:00:05 container-01 dnsmasq[346622]: using only locally-known addresses for dns.podman
Dec 11 16:00:05 container-01 dnsmasq[346622]: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 11 16:00:05 container-01 dnsmasq[346622]: started, version 2.86 cachesize 150
Dec 11 16:00:05 container-01 systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Dec 11 16:00:05 container-01 systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Dec 11 16:00:05 container-01 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc22272fe: link becomes ready
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(vethc22272fe) entered forwarding state
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(vethc22272fe) entered blocking state
Dec 11 16:00:05 container-01 kernel: device vethc22272fe entered promiscuous mode
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(vethc22272fe) entered disabled state
Dec 11 16:00:05 container-01 kernel: cni-podman8: port 1(vethc22272fe) entered blocking state
Dec 11 16:00:05 container-01 podman[346230]: 2023-12-11 16:00:05.10506662 +0000 UTC m=+0.361617651 container restart 0e93a16d7a99c4c850f180c014fa5910eaed3c804c0d06fd9f03556efa73923a (image=docker.io/library/mariadb:10.7, name=linkace_db1, org.opencontainers.image.description=MariaDB Database for relational SQL, org.opencontainers.image.documentation=https://hub.docker.com//mariadb/, org.opencontainers.image.source=https://github.com/MariaDB/mariadb-docker, com.docker.compose.service=db, org.opencontainers.image.version=10.7.8, org.opencontainers.image.title=MariaDB Database, io.podman.compose.config-hash=bc468bb19fe2660f48063621aee9c32e4f0de4e3fc94faee86d109337d2aa802, io.podman.compose.project=linkace, com.docker.compose.project=linkace, org.opencontainers.image.base.name=docker.io/library/ubuntu:focal, com.docker.compose.project.working_dir=/opt/linkace, org.opencontainers.image.vendor=MariaDB Community, com.docker.compose.container-number=1, org.opencontainers.image.authors=MariaDB Community, org.opencontainers.image.url=https://github.com/MariaDB/mariadb-docker, com.docker.compose.project.config_files=docker-compose.yml, org.opencontainers.image.ref.name=ubuntu, org.opencontainers.image.licenses=GPL-2.0, PODMAN_SYSTEMD_UNIT=podman-compose@linkace.service, io.podman.compose.version=1.0.6)
Dec 11 16:00:04 container-01 kernel: cni-podman8: port 1(veth17a89846) entered disabled state
Issue Description
I have 2 Jellyfin containers, 1 Nextcloud container, and a SWAG container running in rootful Podman via docker-compose and podman-docker. If I leave these containers running for a while and attempt a reboot, most of the time a libpod-conmon scope will delay shutdown by 90 seconds. This does not happen when the containers have only been running for a short while.
Steps to reproduce the issue
1. Start the containers with docker-compose (using podman-docker).
2. Leave them running for a while.
3. Reboot the system.
Describe the results you received
There's a message in the shutdown logs saying
A stop job is running for libpod-conmon-<someid>.scope
and then systemd waits 90 seconds for it to time out.
Describe the results you expected
Podman containers are properly shut down so that they do not delay system shutdown or reboot.
podman info output
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes