containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Insists on crun when runc or containerd is set in /etc/containers/libpod.conf::runtime= #24040

Closed IngwiePhoenix closed 6 days ago

IngwiePhoenix commented 6 days ago

Issue Description

I have installed Podman on my VisionFive2 (RISC-V CPU, JH7110) and am trying to launch a simple container. However, it keeps wanting to use a different container runtime than the one I specified.

Steps to reproduce the issue

  1. Install podman
  2. Edit /etc/containers/libpod.conf and set runtime="runc"
  3. podman --log-level=debug run -it alpine -- /bin/sh -il
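As a quick way to see which runtime Podman actually resolved (editor's sketch, not part of the original report): the decision shows up on a single debug line, "Using OCI runtime ...", in the output below, and `podman info --format '{{.Host.OCIRuntime.Name}}'` reports the same thing. The snippet simulates the captured log line so it runs without Podman installed:

```shell
# Editor's sketch: extract the resolved runtime path from a debug log line.
# The line is hard-coded here (copied from the report) so the snippet is
# self-contained; against a live system you would pipe in
#   podman --log-level=debug run ... 2>&1
log_line='DEBU[0000] Using OCI runtime "/usr/bin/crun"'
printf '%s\n' "$log_line" | grep -o '"/usr/bin/[a-z-]*"'
# prints: "/usr/bin/crun"
```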

Describe the results you received

Apologies for the rather ugly output - it is copied as-is.

root@riscboi /e/containers [125]# podman --log-level=debug run -it alpine -- /bin/sh -il
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run -it alpine -- /bin/sh -il)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /nvme/var/lib/containers/storage
DEBU[0000] Using run root /nvme/var/run/containers/storage
DEBU[0000] Using static dir /nvme/var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /nvme/var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Pulling image alpine (policy: missing)
DEBU[0000] Looking up image "alpine" in local containers storage
DEBU[0000] Normalized platform linux/riscv64 to {riscv64 linux  [] }
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/shortnames.conf"
DEBU[0000] Trying "docker.io/riscv64/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/nvme/var/lib/containers/storage+/nvme/var/run/containers/storage]@2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Found image "alpine" as "docker.io/riscv64/alpine:latest" in local containers storage
DEBU[0000] Found image "alpine" as "docker.io/riscv64/alpine:latest" in local containers storage ([overlay@/nvme/var/lib/containers/storage+/nvme/var/run/containers/storage]@2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03)
DEBU[0000] exporting opaque data as blob "sha256:2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Looking up image "docker.io/riscv64/alpine:latest" in local containers storage
DEBU[0000] Normalized platform linux/riscv64 to {riscv64 linux  [] }
DEBU[0000] Trying "docker.io/riscv64/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/nvme/var/lib/containers/storage+/nvme/var/run/containers/storage]@2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Found image "docker.io/riscv64/alpine:latest" as "docker.io/riscv64/alpine:latest" in local containers storage
DEBU[0000] Found image "docker.io/riscv64/alpine:latest" as "docker.io/riscv64/alpine:latest" in local containers storage ([overlay@/nvme/var/lib/containers/storage+/nvme/var/run/containers/storage]@2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03)
DEBU[0000] exporting opaque data as blob "sha256:2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Looking up image "alpine" in local containers storage
DEBU[0000] Normalized platform linux/riscv64 to {riscv64 linux  [] }
DEBU[0000] Trying "docker.io/riscv64/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/nvme/var/lib/containers/storage+/nvme/var/run/containers/storage]@2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Found image "alpine" as "docker.io/riscv64/alpine:latest" in local containers storage
DEBU[0000] Found image "alpine" as "docker.io/riscv64/alpine:latest" in local containers storage ([overlay@/nvme/var/lib/containers/storage+/nvme/var/run/containers/storage]@2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03)
DEBU[0000] exporting opaque data as blob "sha256:2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Inspecting image 2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03
DEBU[0000] exporting opaque data as blob "sha256:2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] exporting opaque data as blob "sha256:2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Inspecting image 2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03
DEBU[0000] Inspecting image 2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03
DEBU[0000] Inspecting image 2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Successfully loaded 1 networks
DEBU[0000] Allocated lock 106 for container 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2
DEBU[0000] exporting opaque data as blob "sha256:2f32f6b11fa159940aadedb7a73a2834f1314ad99e989e38e12dff7ba6575d03"
DEBU[0000] Cached value indicated that idmapped mounts for overlay are supported
DEBU[0000] Created container "32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2"
DEBU[0000] Container "32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2" has work directory "/nvme/var/lib/containers/storage/overlay-containers/32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2/userdata"
DEBU[0000] Container "32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2" has run directory "/nvme/var/run/containers/storage/overlay-containers/32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2/userdata"
DEBU[0000] Handling terminal attach
INFO[0000] Received shutdown.Stop(), terminating!        PID=4786
DEBU[0000] Enabling signal proxying
DEBU[0000] overlay: mount_data=lowerdir=/nvme/var/lib/containers/storage/overlay/l/BRMHI5S3A4AZVTLOAYDOFAGL3E,upperdir=/nvme/var/lib/containers/storage/overlay/5f5fa3c1c79ca44d5d5fdbdb45bf1a8972fdc2665fb2bbf21fb93bc9bb8331d2/diff,workdir=/nvme/var/lib/containers/storage/overlay/5f5fa3c1c79ca44d5d5fdbdb45bf1a8972fdc2665fb2bbf21fb93bc9bb8331d2/work
DEBU[0000] Made network namespace at /run/netns/netns-5fd85381-5546-9437-5a63-e05e66e4c7d6 for container 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2
DEBU[0000] Mounted container "32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2" at "/nvme/var/lib/containers/storage/overlay/5f5fa3c1c79ca44d5d5fdbdb45bf1a8972fdc2665fb2bbf21fb93bc9bb8331d2/merged"
DEBU[0000] Created root filesystem for container 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2 at /nvme/var/lib/containers/storage/overlay/5f5fa3c1c79ca44d5d5fdbdb45bf1a8972fdc2665fb2bbf21fb93bc9bb8331d2/merged
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network podman
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.88.0.6/16]
[DEBUG netavark::network::bridge] Bridge name: podman0 with IP addresses [10.88.0.1/16]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/podman0/rp_filter to 2
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/rp_filter to 2
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.88.0.1, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-1D8721804F16F created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_INPUT exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_INPUT exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -j ACCEPT created on table nat and chain NETAVARK-1D8721804F16F
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-1D8721804F16F
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j NETAVARK-1D8721804F16F created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -p udp -s 10.88.0.0/16 --dport 53 -j ACCEPT created on table filter and chain NETAVARK_INPUT
[DEBUG netavark::firewall::varktables::helpers] rule -m conntrack --ctstate INVALID -j DROP exists on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 exists on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL exists on table nat and chain OUTPUT
[DEBUG netavark::commands::setup] {
    "podman": StatusBlock {
        dns_search_domains: Some(
            [],
        ),
        dns_server_ips: Some(
            [],
        ),
        interfaces: Some(
            {
                "eth0": NetInterface {
                    mac_address: "82:df:3c:c2:44:29",
                    subnets: Some(
                        [
                            NetAddress {
                                gateway: Some(
                                    10.88.0.1,
                                ),
                                ipnet: 10.88.0.6/16,
                            },
                        ],
                    ),
                },
            },
        ),
    },
}
[DEBUG netavark::commands::setup] "Setup complete"
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting Cgroups for container 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2 to machine.slice:libpod:32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/" resolved to host path "/nvme/var/lib/containers/storage/overlay/5f5fa3c1c79ca44d5d5fdbdb45bf1a8972fdc2665fb2bbf21fb93bc9bb8331d2/merged"
DEBU[0000] Created OCI spec for container 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2 at /nvme/var/lib/containers/storage/overlay-containers/32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2 -u 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2 -r /usr/bin/crun -b /nvme/var/lib/containers/storage/overlay-containers/32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2/userdata -p /nvme/var/run/containers/storage/overlay-containers/32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2/userdata/pidfile -n heuristic_mirzakhani --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2 --full-attach -s -l journald --log-level debug --syslog -t --conmon-pidfile /nvme/var/run/containers/storage/overlay-containers/32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /nvme/var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /nvme/var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /nvme/var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2.scope
DEBU[0000] Cleaning up container 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2
DEBU[0000] Tearing down network namespace at /run/netns/netns-5fd85381-5546-9437-5a63-e05e66e4c7d6 for container 32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2
[DEBUG netavark::commands::teardown] "Tearing down.."
[INFO  netavark::firewall] Using iptables firewall driver
[INFO  netavark::network::bridge] removing bridge podman0
[DEBUG netavark::commands::teardown] "Teardown complete"
DEBU[0001] Unmounted container "32e4dc00a1cd57df4480cffbbd479f9be4ed211d5284090e3adb6c93e4f0f9a2"
DEBU[0001] ExitCode msg: "container create failed (no logs from conmon): conmon bytes \"\": readobjectstart: expect { or n, but found \x00, error found in #0 byte of ...||..., bigger context ...||..."
Error: container create failed (no logs from conmon): conmon bytes "": readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...
DEBU[0001] Shutting down engines

Describe the results you expected

I expected to see the container pulled and a shell opened.

podman info output

host:
  arch: riscv64
  buildahVersion: 1.37.2
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.10+ds1-1+b1_riscv64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:
    idlePercent: 95.62
    systemPercent: 0.69
    userPercent: 3.69
  cpus: 4
  databaseBackend: sqlite
  distribution:
    codename: trixie
    distribution: debian
    version: unknown
  eventLogger: journald
  freeLocks: 1941
  hostname: riscboi
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.6.20+
  linkmode: dynamic
  logDriver: journald
  memFree: 5463130112
  memTotal: 8271396864
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.12.2-1_riscv64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.12.2
    package: netavark_1.12.1-3_riscv64
    path: /usr/lib/podman/netavark
    version: netavark 1.12.1
  ociRuntime:
    name: crun
    package: crun_1.17-1_riscv64
    path: /usr/bin/crun
    version: |-
      crun version 1.17
      commit: 000fa0d4eeed8938301f3bcf8206405315bc1017
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20240906.6b38f07-1_riscv64
    version: |
      pasta 0.0~git20240906.6b38f07-1
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: false
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.1-1+b2_riscv64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 21474832384
  swapTotal: 21474832384
  uptime: 1h 34m 13.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 103
    paused: 0
    running: 0
    stopped: 103
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /nvme/var/lib/containers/storage
  graphRootAllocated: 983349346304
  graphRootUsed: 160712163328
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 91
  runRoot: /nvme/var/run/containers/storage
  transientStore: false
  volumePath: /nvme/var/lib/containers/storage/volumes
version:
  APIVersion: 5.2.2
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.22.7
  Os: linux
  OsArch: linux/riscv64
  Version: 5.2.2

### Podman in a container

No

### Privileged Or Rootless

Privileged

### Upstream Latest Release

Yes

### Additional environment details

root@riscboi /e/containers# cat libpod.conf | grep -i runtime | grep -v "#"
runtime="runc"
runtime_supports_json = ["crun", "runc"]
runtime_supports_nocgroups = ["crun"]
[runtimes]


### Additional information

root@riscboi /e/containers# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux trixie/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=trixie
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"



Admittedly, this is a hotchpotch of unstable stuff, but that is the nature of RISC-V these days. It is getting better bit by bit.

Further, here is my kernel config: 
[vf2-config.txt](https://github.com/user-attachments/files/17096027/vf2-config.txt)

Thank you and kind regards!
Luap99 commented 6 days ago

libpod.conf hasn't been a valid config file for years at this point; you need to use containers.conf instead.
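For anyone landing here later, a minimal sketch of the modern equivalent (assuming the stock file locations: containers.conf is read from /usr/share/containers/containers.conf, then /etc/containers/containers.conf, with rootless overrides in ~/.config/containers/containers.conf; the runtime key lives in the [engine] table). The snippet writes to a temp path so it is self-contained rather than touching a real host config:

```shell
# Editor's sketch: the containers.conf replacement for the old
# libpod.conf `runtime=` setting. Written to a temp file here;
# on a real host this content would go in /etc/containers/containers.conf.
cat > /tmp/containers.conf.example <<'EOF'
[engine]
runtime = "runc"
EOF
grep '^runtime' /tmp/containers.conf.example
# prints: runtime = "runc"
```

Independently of the config files, the global `--runtime` flag (e.g. `podman --runtime /usr/bin/runc run ...`) overrides whatever they say, which is a quick way to confirm the runc binary itself works before editing configs.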