containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Impossible to recreate a container with the same name as an already-removed container #2240

Closed · 4383 closed this issue 5 years ago

4383 commented 5 years ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

A script launches the following command to start a container with the --rm flag, so the container is destroyed at exit. But when I try to recreate the container manually with the same podman command, podman fails to create the container and displays the following error:

$ podman run --rm --name nova_cellv2_discover_hosts -it --label config_id=tripleo_step5 --label container_name=nova_cellv2_discover_hosts --label managed_by=paunch --net=host --user=root --volume=/etc/hosts:/etc/hosts:ro --volume=/etc/localtime:/etc/localtime:ro --volume=/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume=/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume=/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume=/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume=/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume=/dev/log:/dev/log --volume=/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume=/etc/puppet:/etc/puppet:ro --volume=/var/lib/config-data/nova_libvirt/etc/my.cnf.d/:/etc/my.cnf.d/:ro --volume=/var/lib/config-data/nova_libvirt/etc/nova/:/etc/nova/:ro --volume=/var/log/containers/nova:/var/log/nova --volume=/var/lib/docker-config-scripts/:/docker-config-scripts/ 192.168.122.1:5000/fedora-binary-nova-compute:ospsprint 
error creating container storage: the container name "nova_cellv2_discover_hosts" is already in use by "5efe2260d1aaadf63e8ce70d0aca100472bb0e0ee90884e95c785821a37d694c". You have to remove that container to be able to reuse that name.: that name is already in use

When I inspect for a leftover container or volume, I don't find any results:

$ sudo podman ps -a |grep 5efe                                                                                       
$ # no results found
$ sudo podman volume list
$ # no results found and no volumes exists
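The error ("error creating container storage") suggests the leftover entry lives in containers/storage rather than in Podman's own database, which would explain why podman ps -a and podman volume list show nothing. A hedged way to confirm and clean it up, assuming buildah is installed and the Podman build has the rm --storage flag:

$ sudo buildah containers --all | grep nova_cellv2_discover_hosts
$ # if the name shows up here, it is a storage-only leftover
$ sudo podman rm --storage nova_cellv2_discover_hosts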

Looks similar to #1359.

Steps to reproduce the issue:

  1. Run the same podman run --rm command twice (a minimal sketch follows this list)
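A minimal sketch of the reproduction, using a hypothetical container name and image (testctr / fedora:latest); the full command from the description above fails the same way:

$ sudo podman run --rm --name testctr fedora:latest true
$ sudo podman run --rm --name testctr fedora:latest true
$ # second run fails with: the container name "testctr" is already in use ...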

Describe the results you received:

error creating container storage: the container name "nova_cellv2_discover_hosts" is already in use by "5efe2260d1aaadf63e8ce70d0aca100472bb0e0ee90884e95c785821a37d694c". You have to remove that container to be able to reuse that name.: that name is already in use

Describe the results you expected:

I expected the container to be created.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.0.0

Output of podman info:

$ sudo podman info
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.0-1.git82e8011.module+el8+2696+e59f0461.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: 52154d748ee9623ac65d34514ec22063d2633ac2-dirty'
  Distribution:
    distribution: '"rhel"'
    version: "8.0"
  MemFree: 382480384
  MemTotal: 16645574656
  OCIRuntime:
    package: runc-1.0.0-54.rc5.dev.git2abd837.module+el8+2650+e6b3d617.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 796397568
  SwapTotal: 1073737728
  arch: amd64
  cpus: 16
  hostname: herve.localdomain
  kernel: 4.18.0-60.el8.x86_64
  os: linux
  rootless: false
  uptime: 48h 20m 18.38s (Approximately 2.00 days)
insecure registries:
  registries:
  - 192.168.122.1:5000
  - 192.168.24.2:8787
registries:
  registries:
  - registry.redhat.io
  - quay.io
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 90
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 28
  RunRoot: /var/run/containers/storage

Additional environment details (AWS, VirtualBox, physical, etc.): KVM

cryobry commented 4 years ago

Looks like it may be related to this: https://discussion.fedoraproject.org/t/toolbox-broken-again-crun-update-in-31-20191112-0/11369/19

Two bugs in one day, yay! I'm not really sure what I'm supposed to do here. Rebooting clears the OCI error until I create a container with the same name as one that has already been deleted. When I work around that with rm --force --storage, it triggers another OCI error.
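For reference, the workaround mentioned above looks roughly like this (a sketch; the name comes from the "already in use" error message, and the --storage flag only exists on Podman builds that still ship it):

$ sudo podman rm --force --storage nova_cellv2_discover_hosts
$ # then re-run the original podman run --rm ... command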

mheon commented 4 years ago

This seems like it could be a crun issue - @giuseppe

Regardless, this one is (probably) not Podman.

shlao commented 4 years ago

I hit this problem again: after a reboot, I ran some scripts to create a pod.

+(./04_setup_ironic.sh:128): sudo podman run -d --net host --privileged --name httpd --pod ironic-pod -v /opt/dev-scripts/ironic:/shared --entrypoint /bin/runhttpd quay.io/metal3-io/ironic:master
Error: error creating container storage: the container name "httpd" is already in use by "0bbbfaecbbb46a0ad51b786dd8a7e439868a15d35091c6e24953362a36d0db18". You have to remove that container to be able to reuse that name.: that name is already in use

# podman ps
# podman pod ps
POD ID         NAME         STATUS    CREATED         # OF CONTAINERS   INFRA ID
5840254ebc5c   ironic-pod   Created   2 minutes ago   1                 cd0aa1806e0b
# podman ps -a
CONTAINER ID  IMAGE                 COMMAND  CREATED        STATUS   PORTS  NAMES
cd0aa1806e0b  k8s.gcr.io/pause:3.1           2 minutes ago  Created         5840254ebc5c-infra
# uname -a
Linux aa 3.10.0-1126.el7.x86_64 #1 SMP Mon Feb 3 15:30:44 EST 2020 x86_64 x86_64 x86_64 GNU/Linux

So I deleted some files to make it work again:
# rm -rf /var/lib/containers/storage/libpod/bolt_state.db
# rm -rf /var/lib/containers/storage/
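Note that wiping /var/lib/containers/storage removes all images and containers on the host. A less destructive sequence that may be worth trying first (a sketch, assuming the leftover ironic-pod infra container shown by podman ps -a is the only stale entry):

# podman pod rm --force ironic-pod
# podman ps --all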

cryobry commented 4 years ago

@shlao which version of podman are you running? I found that podman >= 1.7.0 fixed this issue for me. F31 is already at 1.8.0 but it looks like you are using CentOS 7.

ksingh7 commented 4 years ago

[root@rgw-5 ~]# /usr/bin/podman stop ceph-osd-189
Error: no container with name or ID ceph-osd-189 found: no such container
[root@rgw-5 ~]#
[root@rgw-5 ~]# podman ps -a | grep -i ceph-osd-189
[root@rgw-5 ~]#
[root@rgw-5 ~]# podman version
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.13.4
OS/Arch:            linux/amd64
[root@rgw-5 ~]#

[root@rgw-5 ~]# /usr/share/ceph-osd-run.sh 189
Error: error creating container storage: the container name "ceph-osd-189" is already in use by "30b07795d6c1e9d62e5cd82848e231c9e9803e5bcfdaf15a9af166caab36a673". You have to remove that container to be able to reuse that name.: that name is already in use
[root@rgw-5 ~]#
[root@rgw-5 ~]#
[root@rgw-5 ~]# podman ps -a | grep -i ceph-osd-189
[root@rgw-5 ~]#
[root@rgw-5 ~]#

root@rgw-5 ~]# cat /usr/share/ceph-osd-run.sh
#!/bin/bash
# Please do not change this file directly since it is managed by Ansible and will be overwritten

########
# MAIN #
########

/usr/bin/podman run \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=4 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID="$1" \
  --name=ceph-osd-"$1" \
   \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
[root@rgw-5 ~]#
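
Since podman ps -a shows nothing, the leftover "ceph-osd-189" entry again appears to exist only in containers/storage. A hedged workaround sketch (same idea as the rm --force --storage approach mentioned earlier in this thread; skip it if your 1.6.4 build lacks the --storage flag):

[root@rgw-5 ~]# podman rm --storage ceph-osd-189
[root@rgw-5 ~]# /usr/share/ceph-osd-run.sh 189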

mheon commented 4 years ago

Patches for this landed in 1.7.0 and should be in RHEL 8.2.1 (which will include a 1.9.x release of Podman).
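A quick way to check whether the installed build already includes the fix (a sketch; the packaged version string varies by distro):

$ podman version
$ rpm -q podman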

diwilli commented 4 years ago

@mheon Do you know when / if this will land in RHEL/CentOS 7.9?

I'm running Ceph via cephadm and Podman, and I have to restart the host after every container upgrade because of this issue.

mheon commented 4 years ago

There are no plans for further Podman releases on CentOS/RHEL 7 - I believe 1.6.4 in 7.8 will be the last.