containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

podman v1.6.4 on CentOS 8.2 loses track of container states after running for days and weeks #8162

Closed jotelha closed 3 years ago

jotelha commented 4 years ago

/kind bug

NOTE: This issue arises on a system we use in production, thus I am not in a position to arbitrarily upgrade to recent versions for testing. If this issue has been resolved elsewhere already, please just point to that fix and close this issue again. Thanks.

Description

From time to time, podman loses track of the running containers' states.

Amongst others, we use the following mongod pod https://github.com/IMTEK-Simulation/mongod-on-smb, but the issue arises independently of the actual container composition.

Steps to reproduce the issue:

  1. build and launch the pod on rootless podman (in the output sample below, the above-mentioned mongod services; see the sketch after this list)

  2. let it run for days and weeks

  3. come back to check state, look at logs, enter interactive shell on running container, ...
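
A minimal sketch of steps 1 and 2, assuming the compose file shipped with the mongod-on-smb repository linked above (exact build and launch steps may differ):

$ git clone https://github.com/IMTEK-Simulation/mongod-on-smb.git
$ cd mongod-on-smb
$ podman-compose up -d   # builds images if necessary and launches the pod rootless
# ... let the pod run for days or weeks, then perform the checks in step 3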

Describe the results you received:

In the output below, the user name is fireworks.

podman now lists the services within the mentioned pod as "created", not as "running",

$ podman-compose ps
using podman version: podman version 1.6.4
podman ps -a --filter label=io.podman.compose.project=mongod-on-smb
CONTAINER ID  IMAGE                            COMMAND               CREATED      STATUS   PORTS                                               NAMES
646a861a3cf8  localhost/mongo-express:latest   tini -- npm start     4 weeks ago  Created  127.0.0.1:8081->8081/tcp, 0.0.0.0:27017->27017/tcp  mongo-express
d6ffa9308249  localhost/mongodb-backup:latest  cron -f               4 weeks ago  Created  127.0.0.1:8081->8081/tcp, 0.0.0.0:27017->27017/tcp  mongodb-backup
1d32fd335be6  localhost/mongod-on-smb:latest   --config /etc/mon...  4 weeks ago  Created  127.0.0.1:8081->8081/tcp, 0.0.0.0:27017->27017/tcp  mongodb
0

and complains like this

$ podman exec -it mongodb bash
Error: cannot exec into container that is not running: container state improper

when trying to enter the container, or with

$ podman restart mongodb
Error: some dependencies of container 1d32fd335be6d04f21928e8e523345892fdcfb2e0c42c6c14a05d9f29f137ef8 are not started: 56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f: container state improper

when trying to restart, or with

$ podman start mongodb
ERRO[0000] error starting some container dependencies   
ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: unable to start container "mongodb": error starting some containers: internal libpod error

when trying to start. The latter error arises because the forwarded ports are still occupied; the mongo port has not been released:

$ lsof -iTCP -sTCP:LISTEN
COMMAND     PID      USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
slirp4net 16044 fireworks    8u  IPv4 150791086      0t0  TCP localhost:tproxy (LISTEN)
slirp4net 16044 fireworks    9u  IPv4 150791089      0t0  TCP *:27017 (LISTEN)

and in fact (all?) processes that are supposed to be running within the containers are still alive

$ ps -u fireworks -f
UID        PID  PPID  C STIME TTY          TIME CMD
firewor+  2308  2307  0 17:14 pts/1    00:00:00 bash
firewor+  2360     1  0 17:14 ?        00:00:00 podman
firewor+  3719  2308  0 17:37 pts/1    00:00:00 ps -u fireworks -f
firewor+  3720  2308  0 17:37 pts/1    00:00:00 less
firewor+ 15972     1  0 Aug23 ?        00:00:00 podman
firewor+ 16041     1  0 Aug23 ?        00:00:00 /bin/fuse-overlayfs -o lowerdir=/home/fireworks/.local/share/containers/storage/overlay/l/KUCDT7YAWZF75IFFTDQZ43IDOP,upperdir=/home/fireworks/.local/share/containers/storage/overlay/424919a2b6de4bcac915e1b5ad7bfa0fa632eaf3a87e1c58555edb08809976c9/diff,workdir=/home/fireworks/.local/share/containers/storage/overlay/424919a2b6de4bcac915e1b5ad7bfa0fa632eaf3a87e1c58555edb08809976c9/work,context="system_u:object_r:container_file_t:s0:c443,c959" /home/fireworks/.local/share/containers/storage/overlay/424919a2b6de4bcac915e1b5ad7bfa0fa632eaf3a87e1c58555edb08809976c9/merged
firewor+ 16044     1  1 Aug23 ?        13:07:13 /bin/slirp4netns --api-socket /tmp/run-1003/libpod/tmp/56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f.net --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /tmp/run-1003/netns/cni-543785de-d31a-30ed-1b03-3c64f6777221 tap0
firewor+ 16050     1  0 Aug23 ?        00:00:00 /usr/bin/conmon --api-version 1 -c 56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f -u 56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f -r /usr/bin/runc -b /home/fireworks/.local/share/containers/storage/overlay-containers/56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f/userdata -p /tmp/run-1003/overlay-containers/56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f/userdata/pidfile -l k8s-file:/home/fireworks/.local/share/containers/storage/overlay-containers/56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f/userdata/ctr.log --exit-dir /tmp/run-1003/libpod/tmp/exits --socket-dir-path /tmp/run-1003/libpod/tmp/socket --log-level error --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/tmp/run-1003/overlay-containers/56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f/userdata/oci-log --conmon-pidfile /tmp/run-1003/overlay-containers/56501c98a60af26831a51876ecc5230653262ad492e2caf01d91e3ec026de57f/userdata/conmon.pid
firewor+ 16060 16050  0 Aug23 ?        00:00:00 /pause
firewor+ 16073     1  0 Aug23 ?        00:00:01 /bin/fuse-overlayfs -o lowerdir=/home/fireworks/.local/share/containers/storage/overlay/l/SY72E66VDUTSY6U323RFIL6ZCC:/home/fireworks/.local/share/containers/storage/overlay/l/BC6F3VS7PZDAEOPJQ6LDCDCT5F:/home/fireworks/.local/share/containers/storage/overlay/l/AU2W4YZEAD2ESPLN6PQHMNSQVD:/home/fireworks/.local/share/containers/storage/overlay/l/4BHFI5KSDXRXJZO7GFMIKLYMZ6:/home/fireworks/.local/share/containers/storage/overlay/l/XHFJXRNX6WF24ZXTJY5MJNDWCN,upperdir=/home/fireworks/.local/share/containers/storage/overlay/b29ad401593df4c760ceabd0802b3fcde70e1620f2c9e5948cc128bfc809c321/diff,workdir=/home/fireworks/.local/share/containers/storage/overlay/b29ad401593df4c760ceabd0802b3fcde70e1620f2c9e5948cc128bfc809c321/work,context="system_u:object_r:container_file_t:s0:c257,c802" /home/fireworks/.local/share/containers/storage/overlay/b29ad401593df4c760ceabd0802b3fcde70e1620f2c9e5948cc128bfc809c321/merged
firewor+ 16154     1  0 Aug23 ?        00:00:16 /bin/fuse-overlayfs -o lowerdir=/home/fireworks/.local/share/containers/storage/overlay/l/O5PXPMOA2XOVO5YIYLOSL673PX:/home/fireworks/.local/share/containers/storage/overlay/l/SY72E66VDUTSY6U323RFIL6ZCC:/home/fireworks/.local/share/containers/storage/overlay/l/BC6F3VS7PZDAEOPJQ6LDCDCT5F:/home/fireworks/.local/share/containers/storage/overlay/l/AU2W4YZEAD2ESPLN6PQHMNSQVD:/home/fireworks/.local/share/containers/storage/overlay/l/4BHFI5KSDXRXJZO7GFMIKLYMZ6:/home/fireworks/.local/share/containers/storage/overlay/l/XHFJXRNX6WF24ZXTJY5MJNDWCN,upperdir=/home/fireworks/.local/share/containers/storage/overlay/527a26c40068a5aae4292d2555898b46eb4d77f66a80e1e8a0250981d8c18f6d/diff,workdir=/home/fireworks/.local/share/containers/storage/overlay/527a26c40068a5aae4292d2555898b46eb4d77f66a80e1e8a0250981d8c18f6d/work,context="system_u:object_r:container_file_t:s0:c0,c936" /home/fireworks/.local/share/containers/storage/overlay/527a26c40068a5aae4292d2555898b46eb4d77f66a80e1e8a0250981d8c18f6d/merged
firewor+ 16158     1  0 Aug23 ?        00:00:00 /usr/bin/conmon --api-version 1 -c d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799 -u d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799 -r /usr/bin/runc -b /home/fireworks/.local/share/containers/storage/overlay-containers/d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799/userdata -p /tmp/run-1003/overlay-containers/d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799/userdata/pidfile -l k8s-file:/home/fireworks/.local/share/containers/storage/overlay-containers/d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799/userdata/ctr.log --exit-dir /tmp/run-1003/libpod/tmp/exits --socket-dir-path /tmp/run-1003/libpod/tmp/socket --log-level error --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/tmp/run-1003/overlay-containers/d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799/userdata/oci-log --conmon-pidfile /tmp/run-1003/overlay-containers/d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/fireworks/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-1003 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-1003/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799
firewor+ 16168 16158  0 Aug23 ?        00:00:47 /tini -- docker-entrypoint.sh cron -f
firewor+ 16181 16168  0 Aug23 ?        00:00:00 bash /usr/local/bin/docker-entrypoint.sh cron -f
firewor+ 16188 16181  0 Aug23 ?        00:00:03 cron -f
firewor+ 16211     1  0 Aug23 ?        00:00:47 /bin/fuse-overlayfs -o lowerdir=/home/fireworks/.local/share/containers/storage/overlay/l/5HJNLY3CQGG2NWKLCHAPXFH7AP:/home/fireworks/.local/share/containers/storage/overlay/l/VOKZUSCKWECWIP22BPFBQ7HANV:/home/fireworks/.local/share/containers/storage/overlay/l/KTCLXTPGT47W7XJJO7JO54V62V:/home/fireworks/.local/share/containers/storage/overlay/l/QUI3NMG7OSWMRBZLDRPB34WQEA:/home/fireworks/.local/share/containers/storage/overlay/l/DUJU6MEEQS2ZBC7ZS5JCTSKAVH:/home/fireworks/.local/share/containers/storage/overlay/l/4ZLW3ZRVXC54TUAZCNOVEK24QN,upperdir=/home/fireworks/.local/share/containers/storage/overlay/95ec6e141eb0628f0e6128d5039f9bc1a1cfe5cb8f9ef7646361ca86a34353c9/diff,workdir=/home/fireworks/.local/share/containers/storage/overlay/95ec6e141eb0628f0e6128d5039f9bc1a1cfe5cb8f9ef7646361ca86a34353c9/work,context="system_u:object_r:container_file_t:s0:c98,c545" /home/fireworks/.local/share/containers/storage/overlay/95ec6e141eb0628f0e6128d5039f9bc1a1cfe5cb8f9ef7646361ca86a34353c9/merged
firewor+ 18226     1  0 Aug23 ?        00:00:00 /usr/bin/conmon --api-version 1 -c 646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98 -u 646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98 -r /usr/bin/runc -b /home/fireworks/.local/share/containers/storage/overlay-containers/646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98/userdata -p /tmp/run-1003/overlay-containers/646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98/userdata/pidfile -l k8s-file:/home/fireworks/.local/share/containers/storage/overlay-containers/646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98/userdata/ctr.log --exit-dir /tmp/run-1003/libpod/tmp/exits --socket-dir-path /tmp/run-1003/libpod/tmp/socket --log-level error --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/tmp/run-1003/overlay-containers/646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98/userdata/oci-log --conmon-pidfile /tmp/run-1003/overlay-containers/646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/fireworks/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-1003 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-1003/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98
firewor+ 18237 18226  0 Aug23 ?        00:00:48 tini -- npm start
firewor+ 18252 18237  0 Aug23 ?        00:00:00 npm
firewor+ 18264 18252  0 Aug23 ?        00:00:00 sh -c cross-env NODE_ENV=production node ap

The pod can only be restarted after cleaning up manually, i.e.

$ podman-compose down
using podman version: podman version 1.6.4
podman stop -t 10 mongodb
Error: can only stop created or running containers. 1d32fd335be6d04f21928e8e523345892fdcfb2e0c42c6c14a05d9f29f137ef8 is in state configured: container state improper
125
podman stop -t 10 mongodb-backup
Error: can only stop created or running containers. d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799 is in state configured: container state improper
125
podman stop -t 10 mongo-express
Error: can only stop created or running containers. 646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98 is in state configured: container state improper
125
podman rm mongodb
1d32fd335be6d04f21928e8e523345892fdcfb2e0c42c6c14a05d9f29f137ef8
0
podman rm mongodb-backup
d6ffa9308249d196c93700beb6f4591e8eefcdc0e58c7724b6ad13bd121bc799
0
podman rm mongo-express
646a861a3cf811b56ddb7fe23d472c45025e0951ff4204d9d0ae38b4003d1d98
0
podman pod rm mongod-on-smb
de71a42eb088784b15207c4a10dedad92e86daa784869264b7efaa80a84734cd
0

and a subsequent

$ pkill -u fireworks
$ podman-compose up -d

Describe the results you expected:

Correct tracking of container states, i.e. containers still marked as running and accessible via podman exec ....

Additional information you deem important (e.g. issue happens only occasionally):

CentOS version:

$ cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core)

The containers are started with podman-compose, installed within a minimal venv:

$ pip list
Package        Version
-------------- ----------
pip            20.1.1
podman-compose 0.1.7.dev0
PyYAML         5.3.1
setuptools     39.2.0

with a slightly adapted podman-compose that allows timeouts > 1 s when shutting down containers (otherwise equivalent to upstream podman-compose, see https://github.com/containers/podman-compose/compare/devel...jotelha:20200524_down_timeout).
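
For reference, a sketch of how such a venv might be set up (the branch reference points to the fork mentioned above; exact package versions may differ):

$ python3 -m venv ~/venvs/podman-compose
$ source ~/venvs/podman-compose/bin/activate
$ pip install git+https://github.com/jotelha/podman-compose@20200524_down_timeout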

$ podman-compose version
using podman version: podman version 1.6.4
podman-composer version  0.1.7dev
podman --version
podman version 1.6.4

This, however, is unlikely to be related to the issue.

Output of podman version:

$ podman version
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.13.4
OS/Arch:            linux/amd64

Output of podman info --debug:

$ podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.4
  podman version: 1.6.4
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.6-1.module_el8.2.0+305+5e198a41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.6, commit: a2b11288060ebd7abd20e0b4eb1a834bbf0aec3e'
  Distribution:
    distribution: '"centos"'
    version: "8"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
  MemFree: 1190711296
  MemTotal: 8189861888
  OCIRuntime:
    name: runc
    package: runc-1.0.0-65.rc10.module_el8.2.0+305+5e198a41.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 5915729920
  SwapTotal: 8497655808
  arch: amd64
  cpus: 2
  eventlogger: file
  hostname: simdata.vm.uni-freiburg.de
  kernel: 4.18.0-193.19.1.el8_2.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /bin/slirp4netns
    Package: slirp4netns-0.4.2-3.git21fdece.module_el8.2.0+305+5e198a41.x86_64
    Version: |-
      slirp4netns version 0.4.2+dev
      commit: 21fdece2737dc24ffa3f01a341b8a6854f8b13b4
  uptime: 553h 35m 54.12s (Approximately 23.04 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  ConfigFile: /home/fireworks/.config/containers/storage.conf
  ContainerStore:
    number: 4
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.2-5.module_el8.2.0+305+5e198a41.x86_64
      Version: |-
        fuse-overlayfs: version 0.7.2
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  GraphRoot: /home/fireworks/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 23
  RunRoot: /tmp/run-1003
  VolumePath: /home/fireworks/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

$ rpm -q podman
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

No and Yes.

The issue arises on a system we use in production, thus I am not in a position to arbitrarily upgrade to recent versions for testing. In addition, the issue only arises after days or weeks.

I did not find anything related on https://github.com/containers/podman/blob/master/troubleshooting.md.

Additional environment details (AWS, VirtualBox, physical, etc.):

In our setup, CentOS and podman run on a virtual machine provided by the University of Freiburg's Rechenzentrum.

giuseppe commented 4 years ago

That happens because files on /tmp are cleaned up by systemd-tmpfiles if they are older than a week (unless you've configured it differently).

Please make sure the run root is on /run/user or on a path not handled by systemd-tmpfiles.
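
If moving the runroot is not an option, an alternative (not tested here) would be to exclude the directory from age-based cleanup with a systemd-tmpfiles drop-in; the file name below is hypothetical, and the x line type tells systemd-tmpfiles to ignore matching paths and their contents during cleanup:

# /etc/tmpfiles.d/podman-runroot.conf (hypothetical drop-in)
x /tmp/run-*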

jotelha commented 4 years ago

Thanks! The source of that issue was easier to identify than I expected. However, shouldn't something about this be in the troubleshooting guide at https://github.com/containers/podman/blob/master/troubleshooting.md (or shouldn't podman warn about that behavior when it falls back to the /tmp directory)?

That behavior was particularly confusing, as we have several users running containers on the machine. Some never encountered the issue while others did, and with your hint we figured out that those who never encountered it had their runroot set to /run/user/$UID, while those who ran into it had it set to /tmp/run-$UID; those directories had been written into the $HOME/.config/containers/storage.conf files without our interference. The discussion at https://github.com/containers/podman/issues/3274 helped my understanding.
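
For illustration, a minimal sketch of what the relevant lines in ~/.config/containers/storage.conf could look like once the runroot points at the logind-managed directory (paths taken from the output above; a real file typically contains further options):

[storage]
  driver = "overlay"
  runroot = "/run/user/1003"
  graphroot = "/home/fireworks/.local/share/containers/storage"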

I believe the reason for the opaquely different behavior between users is that some users had been created without a password and were never logged in to directly. Instead, some other user would become them via sudo su ..., and as a consequence the /run/user/$UID directory would never be created (see https://www.freedesktop.org/software/systemd/man/pam_systemd.html, item 1).
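
A quick way to check which users are affected (UID 1003 is just the example from above): only users with an active logind session or lingering enabled get a /run/user/<UID> directory:

$ loginctl list-users                      # users known to systemd-logind
$ loginctl show-user fireworks -p Linger   # whether lingering is enabled for a user
$ ls -d /run/user/1003                     # present only if logind created it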

I am now trying to have podman use /run/user/$UID for all users, which I am struggling with quite a bit. First, I run sudo loginctl enable-linger USERNAME (https://www.freedesktop.org/software/systemd/man/loginctl.html#enable-linger%20USER%E2%80%A6) to have the /run/user/$UID directory available reliably. Next, I manually modify the runroot entry within .config/containers/storage.conf to match that directory, e.g. runroot = "/run/user/1003". Still, that does not make podman change its mind about the runroot, and neither does

export XDG_RUNTIME_DIR=/run/user/$UID

which is referred to as a default according to https://github.com/containers/podman/blob/master/docs/tutorials/rootless_tutorial.md#storageconf. The podman info output always points to /tmp/run-1003, no matter what:

$ podman info
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.6-1.module_el8.2.0+305+5e198a41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.6, commit: a2b11288060ebd7abd20e0b4eb1a834bbf0aec3e'
  Distribution:
    distribution: '"centos"'
    version: "8"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
  MemFree: 2808565760
  MemTotal: 8189861888
  OCIRuntime:
    name: runc
    package: runc-1.0.0-65.rc10.module_el8.2.0+305+5e198a41.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 3246157824
  SwapTotal: 8497655808
  arch: amd64
  cpus: 2
  eventlogger: journald
  hostname: simdata.vm.uni-freiburg.de
  kernel: 4.18.0-193.19.1.el8_2.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /bin/slirp4netns
    Package: slirp4netns-0.4.2-3.git21fdece.module_el8.2.0+305+5e198a41.x86_64
    Version: |-
      slirp4netns version 0.4.2+dev
      commit: 21fdece2737dc24ffa3f01a341b8a6854f8b13b4
  uptime: 566h 9m 11.03s (Approximately 23.58 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  ConfigFile: /home/fireworks/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.2-5.module_el8.2.0+305+5e198a41.x86_64
      Version: |-
        fuse-overlayfs: version 0.7.2
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  GraphRoot: /home/fireworks/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /tmp/run-1003
  VolumePath: /home/fireworks/.local/share/containers/storage/volumes

Explicitly specifying the runroot on the command line yields

$ podman --runroot /run/user/1003 info 
Error: could not get runtime: database storage temporary directory (runroot) "/tmp/run-1003" does not match our storage temporary directory (runroot) "/run/user/1003": database configuration mismatch

I am at a loss. Some clear errors and warnings plus concise documentation on the (default) behavior would help a lot.

TomSweeneyRedHat commented 4 years ago

We'd be very grateful if you could spin up a PR with an addition to the Troubleshooting guide. Or if you'd rather, if you want to send along an e-mail with what should be in the guide for this issue, I can throw the PR together.

jotelha commented 4 years ago

I might put that together. For the record: The last bit necessary was an rm ~/.local/share/containers/storage/libpod/bolt_state.db to have podman accept the modified runroot. Is that documented somewhere? Would that be the same behavior for the current version? That is something I cannot test.

TomSweeneyRedHat commented 4 years ago

I'll have to lean on @mheon about the boltdb doc and behavior. Thoughts Matt?

mheon commented 4 years ago

The BoltDB bit is expected: we won't let you swap storage paths for existing containers (bad things can happen if we swap directories mid-flight; files and directories we create can get lost, causing unexpected behavior). The only real way of migrating is to run podman system reset and effectively wipe all existing state (we recognize this isn't very convenient, but it is the best we can do for now).
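
For completeness, a hedged sketch of that migration path (destructive: it removes all of the user's containers, pods, images and volumes, and assumes a Podman version that ships podman system reset):

$ podman system reset      # wipe all existing rootless state for this user
# adjust runroot in ~/.config/containers/storage.conf, then recreate the pod
$ podman-compose up -d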

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 3 years ago

I am going to close this due to lack of movement. Reopen if you want to add documentation.