eriksjolund opened 1 week ago
Is the pause process alive? Podman should be able to join the pause process directly, but if it doesn't exist we do a re-exec, so if anything I would expect this to be the other way around. I haven't tried reproducing this, though.
The second time, the pause process is alive. Here is a fully automatic reproducer:
```bash
#!/bin/bash
set -o nounset
username=$1
sudo useradd "$username"
for i in $(seq 1 3); do
  echo -e "\nIteration $i"
  echo "processes of $username:"
  pgrep -u "$username" -l
  echo "test podman ps:"
  sudo -iu "$username" podman ps
done
```
Run the command

bash reproduce-issue24004.bash test32

(The username test32 does not exist beforehand.) The following output is printed:
Iteration 1
processes of test32:
test podman ps:
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to log in using a user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
WARN[0000] Failed to add pause process to systemd sandbox cgroup: dbus: couldn't determine address of session bus
Iteration 2
processes of test32:
18360 catatonit
test podman ps:
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to log in using a user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to log in using a user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Iteration 3
processes of test32:
18360 catatonit
test podman ps:
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to log in using a user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to log in using a user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
I added --log-level=debug to the podman ps command and ran the reproducer script again. Here is a diff between iteration 1 and iteration 2:
--- /tmp/iter1.txt 2024-09-19 20:35:39
+++ /tmp/iter2.txt 2024-09-19 20:35:45
@@ -1,6 +1,7 @@
-Iteration 1
+Iteration 2
processes of test36:
+32055 catatonit
test podman ps:
INFO[0000] podman filtering at log level debug
DEBU[0000] Called ps.PersistentPreRunE(podman --log-level=debug ps)
@@ -16,16 +17,19 @@
DEBU[0000] Using transient store: false
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend journald
-DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
+DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
+DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
-DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
-DEBU[0000] systemd-logind: Unknown object '/'.
-DEBU[0000] Invalid systemd user session for current user
+WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
+WARN[0000] For using systemd, you may need to log in using a user session
+WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1030` (possibly as root)
+WARN[0000] Falling back to --cgroup-manager=cgroupfs
+INFO[0000] Setting parallel job count to 7
INFO[0000] podman filtering at log level debug
DEBU[0000] Called ps.PersistentPreRunE(podman --log-level=debug ps)
DEBU[0000] Using conmon: "/usr/bin/conmon"
@@ -39,21 +43,20 @@
DEBU[0000] Using volume path /var/home/test36/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
-DEBU[0000] overlay: test mount with multiple lowers succeeded
DEBU[0000] Cached value indicated that overlay is supported
-DEBU[0000] overlay: test mount indicated that metacopy is not being used
+DEBU[0000] Cached value indicated that overlay is supported
+DEBU[0000] Cached value indicated that metacopy is not being used
+DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
+DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
-DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
-DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
+DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
-DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
+DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
-DEBU[0000] Initialized SHM lock manager at path /libpod_rootless_lock_1030
-DEBU[0000] Podman detected system restart - performing state refresh
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to log in using a user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1030` (possibly as root)
@@ -63,5 +66,4 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
DEBU[0000] Called ps.PersistentPostRunE(podman --log-level=debug ps)
DEBU[0000] Shutting down engines
-INFO[0000] Received shutdown.Stop(), terminating! PID=32051
-WARN[0000] Failed to add pause process to systemd sandbox cgroup: dbus: couldn't determine address of session bus
+INFO[0000] Received shutdown.Stop(), terminating! PID=32094
This is annoying, and it happens the first time you run Podman, but you are also using an invalid configuration, since the user does not have a valid systemd session.
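As the warnings in the output suggest, the usual fix for the invalid configuration is to enable lingering for the user, so a systemd user session exists even without an interactive login (a sketch using the test user from the reproducer above):

```shell
# Enable lingering so systemd keeps a user manager running for test32,
# giving rootless podman a session D-Bus and the systemd cgroup manager.
sudo loginctl enable-linger test32

# Check the setting (prints "Linger=yes" once enabled):
loginctl show-user test32 --property=Linger
```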
It was just such an interesting problem that I could not help looking into it. I'll close the issue, as it's not important.
It is perhaps worth fixing; as I said, it is annoying that the same messages are printed twice :)
We could use an environment variable to signal to the re-execed podman that the warning was already printed.
Issue Description
This warning message is printed twice the second time running
sudo -iu test podman ps
Steps to reproduce the issue
Describe the results you received
The error message in step 2 is printed twice in step 3.
Describe the results you expected
No duplication of the error message.
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
No response