containers/podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Warning message `WARN[0000] The cgroupv2 manager is set to systemd but ...` is printed twice #24004

Open eriksjolund opened 1 week ago

eriksjolund commented 1 week ago

Issue Description

These warning messages are printed twice

WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
WARN[0000] For using systemd, you may need to log in using a user session 
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1008` (possibly as root) 
WARN[0000] Falling back to --cgroup-manager=cgroupfs   

the second time `sudo -iu test podman ps` is run.

Steps to reproduce the issue

  1. Create user
    $ sudo useradd test
  2. Run command
    $ sudo -iu test podman ps
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1008` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
  3. Run the same command again
    $ sudo -iu test podman ps
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1008` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1008` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Describe the results you received

The warning messages shown once in step 2 are printed twice in step 3.

Describe the results you expected

No duplication of the warning messages.

podman info output

host:
  arch: arm64
  buildahVersion: 1.37.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 96.52
    systemPercent: 2.44
    userPercent: 1.04
  cpus: 2
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "40"
  eventLogger: journald
  freeLocks: 2046
  hostname: fcos-next5
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.10.7-200.fc40.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 3392077824
  memTotal: 4082122752
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.12.1-1.fc40.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.1
    package: netavark-1.12.2-1.fc40.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.2
  ociRuntime:
    name: crun
    package: crun-1.15-1.fc40.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.15
      commit: e6eacaf4034e84185fd8780ac9262bbf57082278
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240821.g1d6142f-1.fc40.aarch64
    version: |
      pasta 0^20240821.g1d6142f-1.fc40.aarch64-pasta
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.aarch64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 0h 2m 46.00s
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 0
    stopped: 2
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 26238496768
  graphRootUsed: 23703236608
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 6
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 5.2.2
  Built: 1724198400
  BuiltTime: Wed Aug 21 00:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.6
  Os: linux
  OsArch: linux/arm64
  Version: 5.2.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

No response

Luap99 commented 1 week ago

Is the pause process alive? Podman should be able to directly join the pause process, but if it doesn't exist we do a reexec, so if anything I would expect this to be the other way around. I haven't tried reproducing this, though.
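
For context, here is a minimal Go sketch of the join-or-reexec decision described above. Everything in it (the state-file path, the _REEXECED marker, the helper names) is invented for illustration; it is not podman's actual code, only the general shape of the fast path (join the pause process) versus the slow path (re-exec, where the startup code runs a second time):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strconv"
        "strings"
        "syscall"
    )

    // readPausePid returns the PID stored in a state file, or 0 if there is
    // none. The path used below is made up; podman keeps its own state.
    func readPausePid(path string) int {
        data, err := os.ReadFile(path)
        if err != nil {
            return 0
        }
        pid, _ := strconv.Atoi(strings.TrimSpace(string(data)))
        return pid
    }

    // alive probes a PID with signal 0, which checks process existence
    // without delivering a signal.
    func alive(pid int) bool {
        return pid > 0 && syscall.Kill(pid, 0) == nil
    }

    func main() {
        if os.Getenv("_REEXECED") != "" {
            // Re-executed copy: the startup code, including any warnings,
            // runs again here.
            fmt.Println("second process: doing the real work")
            return
        }
        if pid := readPausePid("/tmp/pause.pid"); alive(pid) {
            // Fast path: join the namespaces kept alive by the pause
            // process (catatonit); no second podman process is needed.
            fmt.Println("joining pause process", pid)
            return
        }
        // Slow path: no pause process, so re-exec ourselves. (Podman would
        // also set up a fresh user namespace here; that part is elided.)
        cmd := exec.Command("/proc/self/exe", os.Args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        cmd.Env = append(os.Environ(), "_REEXECED=1")
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }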

eriksjolund commented 1 week ago

The second time, the pause process is alive. Here is a fully automatic reproducer:

  1. Create file reproduce-issue24004.bash
    #!/bin/bash
    # Reproduce containers/podman#24004: run "podman ps" three times as a
    # freshly created user and show that the cgroup warnings get duplicated
    # from the second run on.
    set -o nounset
    username=$1
    sudo useradd "$username"
    for i in $(seq 1 3); do
      echo -e "\nIteration $i"
      echo "processes of $username:"
      pgrep -u "$username" -l
      echo "test podman ps:"
      sudo -iu "$username" podman ps
    done
  2. Run command

    bash reproduce-issue24004.bash test32

    (The username test32 does not exist beforehand.) The following output is printed:

    
    Iteration 1
    processes of test32:
    test podman ps:
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
    WARN[0000] Failed to add pause process to systemd sandbox cgroup: dbus: couldn't determine address of session bus 
    
    Iteration 2
    processes of test32:
    18360 catatonit
    test podman ps:
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
    
    Iteration 3
    processes of test32:
    18360 catatonit
    test podman ps:
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
    WARN[0000] For using systemd, you may need to log in using a user session 
    WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1026` (possibly as root) 
    WARN[0000] Falling back to --cgroup-manager=cgroupfs    
    CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

eriksjolund commented 1 week ago

I added --log-level=debug to the podman ps command and ran the reproducer script again.

Here is a diff between iteration 1 and iteration 2:

--- /tmp/iter1.txt  2024-09-19 20:35:39
+++ /tmp/iter2.txt  2024-09-19 20:35:45
@@ -1,6 +1,7 @@

-Iteration 1
+Iteration 2
 processes of test36:
+32055 catatonit
 test podman ps:
 INFO[0000] podman filtering at log level debug          
 DEBU[0000] Called ps.PersistentPreRunE(podman --log-level=debug ps) 
@@ -16,16 +17,19 @@
 DEBU[0000] Using transient store: false                 
 DEBU[0000] Not configuring container store              
 DEBU[0000] Initializing event backend journald          
-DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument 
+DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
 DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
 DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
+DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument 
 DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
-DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
 DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
 DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
 DEBU[0000] Using OCI runtime "/usr/bin/crun"            
-DEBU[0000] systemd-logind: Unknown object '/'.          
-DEBU[0000] Invalid systemd user session for current user 
+WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
+WARN[0000] For using systemd, you may need to log in using a user session 
+WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1030` (possibly as root) 
+WARN[0000] Falling back to --cgroup-manager=cgroupfs    
+INFO[0000] Setting parallel job count to 7              
 INFO[0000] podman filtering at log level debug          
 DEBU[0000] Called ps.PersistentPreRunE(podman --log-level=debug ps) 
 DEBU[0000] Using conmon: "/usr/bin/conmon"              
@@ -39,21 +43,20 @@
 DEBU[0000] Using volume path /var/home/test36/.local/share/containers/storage/volumes 
 DEBU[0000] Using transient store: false                 
 DEBU[0000] [graphdriver] trying provided driver "overlay" 
-DEBU[0000] overlay: test mount with multiple lowers succeeded 
 DEBU[0000] Cached value indicated that overlay is supported 
-DEBU[0000] overlay: test mount indicated that metacopy is not being used 
+DEBU[0000] Cached value indicated that overlay is supported 
+DEBU[0000] Cached value indicated that metacopy is not being used 
+DEBU[0000] Cached value indicated that native-diff is usable 
 DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
 DEBU[0000] Initializing event backend journald          
+DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
 DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument 
 DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
-DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
-DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
+DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
 DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
 DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
-DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
+DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
 DEBU[0000] Using OCI runtime "/usr/bin/crun"            
-DEBU[0000] Initialized SHM lock manager at path /libpod_rootless_lock_1030 
-DEBU[0000] Podman detected system restart - performing state refresh 
 WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
 WARN[0000] For using systemd, you may need to log in using a user session 
 WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1030` (possibly as root) 
@@ -63,5 +66,4 @@
 CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
 DEBU[0000] Called ps.PersistentPostRunE(podman --log-level=debug ps) 
 DEBU[0000] Shutting down engines                        
-INFO[0000] Received shutdown.Stop(), terminating!        PID=32051
-WARN[0000] Failed to add pause process to systemd sandbox cgroup: dbus: couldn't determine address of session bus 
+INFO[0000] Received shutdown.Stop(), terminating!        PID=32094
giuseppe commented 1 week ago

This is annoying, and it happens the first time you run Podman, but you are also using an invalid configuration, since the user does not have a valid systemd session.

eriksjolund commented 1 week ago

It was just such an interesting problem that I could not help looking into it. I'll close the issue, as it's not important.

giuseppe commented 1 week ago

It is perhaps worth fixing; as I said, it is annoying that the same messages are printed twice :)

We could use an environment variable to signal the re-execed podman that the warnings were already printed.
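
A minimal sketch of that idea, assuming an invented marker name (_PODMAN_CGROUP_WARNINGS_PRINTED) and simplified messages; this is not podman's actual code, only the pattern: the first process prints the warnings and sets the marker before re-executing, and the re-executed copy sees the marker and stays quiet.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // warnedEnv is a hypothetical marker; podman would pick its own name.
    const warnedEnv = "_PODMAN_CGROUP_WARNINGS_PRINTED"

    // warnCgroupFallback prints the fallback warnings unless a previous
    // podman process in this invocation already did (marker is set).
    func warnCgroupFallback() {
        if os.Getenv(warnedEnv) != "" {
            return
        }
        fmt.Fprintln(os.Stderr, "WARN The cgroupv2 manager is set to systemd but there is no systemd user session available")
        fmt.Fprintln(os.Stderr, "WARN Falling back to --cgroup-manager=cgroupfs")
    }

    func main() {
        warnCgroupFallback()
        if os.Getenv(warnedEnv) == "" {
            // First process: re-exec with the marker set so the second
            // copy skips the warnings instead of repeating them.
            cmd := exec.Command("/proc/self/exe", os.Args[1:]...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            cmd.Env = append(os.Environ(), warnedEnv+"=1")
            _ = cmd.Run()
            return
        }
        fmt.Println("re-execed process: warnings were suppressed")
    }

Note that in this sketch the marker is set only in the child's environment for the re-exec itself, so unrelated podman invocations in the same shell would still warn as usual.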