#4383 · Closed 5 years ago
Looks like it may be related to this: https://discussion.fedoraproject.org/t/toolbox-broken-again-crun-update-in-31-20191112-0/11369/19
Two bugs in one day, yay! I'm not really sure what I'm supposed to do here. I can reboot to clear the OCI error, until I create a container with the same name as one that has already been deleted. When I work around that with rm --force --storage, it triggers another OCI error.
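The reported workaround can be sketched as a small helper. This is an illustrative wrapper (the function name is mine), built on the rm --force --storage invocation mentioned above, which in podman 1.x removes a leftover containers/storage entry that libpod no longer tracks:

```shell
# Sketch of the workaround above: force-remove a stale storage container
# so its name can be reused. Assumes podman 1.x flag spellings.
clean_stale_container() {
  name="$1"
  # Guard: no-op on machines without podman installed.
  command -v podman >/dev/null 2>&1 || return 0
  # Free the name by removing the leftover entry straight from storage.
  podman rm --force --storage "$name"
}

# Example: clean_stale_container httpd
```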
This seems like it could be a crun issue - @giuseppe
Regardless, this one is (probably) not Podman.
I hit this problem again: after a reboot, I run some scripts to create a pod.
+(./04_setup_ironic.sh:128): sudo podman run -d --net host --privileged --name httpd --pod ironic-pod -v /opt/dev-scripts/ironic:/shared --entrypoint /bin/runhttpd quay.io/metal3-io/ironic:master
Error: error creating container storage: the container name "httpd" is already in use by "0bbbfaecbbb46a0ad51b786dd8a7e439868a15d35091c6e24953362a36d0db18". You have to remove that container to be able to reuse that name.: that name is already in use
# podman ps
# podman pod ps
POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID
5840254ebc5c ironic-pod Created 2 minutes ago 1 cd0aa1806e0b
# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cd0aa1806e0b k8s.gcr.io/pause:3.1 2 minutes ago Created 5840254ebc5c-infra
# uname -a
Linux aa 3.10.0-1126.el7.x86_64 #1 SMP Mon Feb 3 15:30:44 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
So I delete some files to make it work again:
# rm -rf /var/lib/containers/storage/libpod/bolt_state.db
# rm -rf /var/lib/containers/storage/
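For what it's worth, the second rm above is the heavier hammer: wiping all of /var/lib/containers/storage also deletes every pulled image. A hedged sketch of the lighter reset, assuming the default root storage paths:

```shell
# Lighter-touch reset: clear only libpod's bolt database, keeping images.
# Paths are the defaults for root podman; rootless setups differ.
reset_libpod_state() {
  db=/var/lib/containers/storage/libpod/bolt_state.db
  # Only act if the database actually exists.
  [ -f "$db" ] && rm -f "$db"
  return 0
}
```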
@shlao which version of podman are you running? I found that podman >= 1.7.0 fixed this issue for me. F31 is already at 1.8.0 but it looks like you are using CentOS 7.
[root@rgw-5 ~]# /usr/bin/podman stop ceph-osd-189
Error: no container with name or ID ceph-osd-189 found: no such container
[root@rgw-5 ~]#
[root@rgw-5 ~]# podman ps -a | grep -i ceph-osd-189
[root@rgw-5 ~]#
[root@rgw-5 ~]# podman version
Version: 1.6.4
RemoteAPI Version: 1
Go Version: go1.13.4
OS/Arch: linux/amd64
[root@rgw-5 ~]#
[root@rgw-5 ~]# /usr/share/ceph-osd-run.sh 189
Error: error creating container storage: the container name "ceph-osd-189" is already in use by "30b07795d6c1e9d62e5cd82848e231c9e9803e5bcfdaf15a9af166caab36a673". You have to remove that container to be able to reuse that name.: that name is already in use
[root@rgw-5 ~]#
[root@rgw-5 ~]#
[root@rgw-5 ~]# podman ps -a | grep -i ceph-osd-189
[root@rgw-5 ~]#
[root@rgw-5 ~]#
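When podman ps -a shows nothing but the name is still taken, the record usually exists only in containers/storage. A hedged diagnostic sketch (the function name is mine; the ps --storage flag is from later 1.x releases and was renamed --external in newer podman, so it may not exist on 1.6.4):

```shell
# Assumption-laden diagnostic: surface a storage-only container by name,
# then remove it directly from storage so the name frees up.
find_and_clear_stale() {
  name="$1"
  # Guard: no-op on machines without podman installed.
  command -v podman >/dev/null 2>&1 || return 0
  # List containers known only to containers/storage (newer podman: --external).
  podman ps --all --storage 2>/dev/null | grep -i "$name"
  # Remove the stale entry so a new container can reuse the name.
  podman rm --storage "$name"
}
```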
[root@rgw-5 ~]# cat /usr/share/ceph-osd-run.sh
#!/bin/bash
# Please do not change this file directly since it is managed by Ansible and will be overwritten
########
# MAIN #
########
/usr/bin/podman run \
--rm \
--net=host \
--privileged=true \
--pid=host \
--ipc=host \
--cpus=4 \
-v /dev:/dev \
-v /etc/localtime:/etc/localtime:ro \
-v /var/lib/ceph:/var/lib/ceph:z \
-v /etc/ceph:/etc/ceph:z \
-v /var/run/ceph:/var/run/ceph:z \
-v /var/run/udev/:/var/run/udev/ \
-v /var/log/ceph:/var/log/ceph:z \
-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
-e CLUSTER=ceph \
-v /run/lvm/:/run/lvm/ \
-e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
-e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
-e OSD_ID="$1" \
--name=ceph-osd-"$1" \
\
registry.redhat.io/rhceph/rhceph-4-rhel8:latest
[root@rgw-5 ~]#
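Because the script above runs with --rm, an unclean shutdown can leave a storage-only container holding the name. One mitigation, sketched under the assumption that the podman 1.x --storage flag is available (the function name is mine, not part of ceph-ansible), is a pre-clean step before the podman run:

```shell
# Pre-clean: drop any stale storage entry for this OSD's container name so
# the subsequent `podman run --name=ceph-osd-$1` can reuse it.
preclean_osd() {
  # Guard: no-op on machines without podman installed.
  command -v podman >/dev/null 2>&1 || return 0
  # Errors (e.g. no such container) are ignored; this is best-effort cleanup.
  podman rm --force --storage ceph-osd-"$1" 2>/dev/null || true
}

# Usage (before the run command in the script): preclean_osd "$1"
```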
Patches for this were landed in 1.7.0, and should be in RHEL 8.2.1 (which will include a 1.9.x release of Podman).
@mheon Do you know when / if this will land in RHEL/CentOS 7.9?
I'm running ceph via cephadm and podman and I'm having to restart the host after every container gets upgraded because of this issue.
There are no plans for further Podman releases on Cent/RHEL 7 - I believe 1.6.4 in 7.8 will be the last.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
A script launches the following command to start a container with the --rm flag, so the container is destroyed at exit. But when I try to recreate the container manually with the same podman command, podman fails to create it and displays the following error. When I inspect for an existing volume or anything similar, I don't find any results.
Looks similar to #1359.
Steps to reproduce the issue:
Run the same podman run --rm command twice.
Describe the results you received:
Describe the results you expected:
I expect the container to be created.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info:
Additional environment details (AWS, VirtualBox, physical, etc.): KVM