containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Container re-creation does not work when used with docker-compose and dependencies #23898

Open andrin55 opened 2 weeks ago

andrin55 commented 2 weeks ago

Issue Description

When multiple containers in a compose file depend on each other and share the same network namespace, they cannot be recreated. This happens only with Podman, not with Docker.

Steps to reproduce the issue

  1. Start the containers with "docker-compose up -d" using the following compose file:

     services:
       container1:
         image: nginx:latest
         container_name: container1
         networks:
           - nginx_network
         healthcheck:
           test: ["CMD", "curl", "-f", "http://localhost"]
           interval: 10s
           timeout: 5s
           retries: 5
       container2:
         image: alpine:latest
         command: sleep infinite
         container_name: container2
         depends_on:
           container1:
             condition: service_healthy
         network_mode: "service:container1"
     networks:
       nginx_network:
  2. The containers start successfully
  3. Now force a container re-creation to simulate an update to a container config or image: docker-compose up -d --always-recreate-deps --force-recreate
  4. The re-creation fails with an error
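The repro steps above as a single shell session (this assumes the compose file is saved as docker-compose.yml in the current directory):

```shell
# Bring the stack up; container2 waits for container1's healthcheck to pass
docker-compose up -d

# Force re-creation to simulate a config or image update.
# On docker-ce this succeeds; on Podman the recreate fails with
# "has dependent containers which must be removed before it".
docker-compose up -d --always-recreate-deps --force-recreate
```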

Describe the results you received

The container re-creation fails with: Error response from daemon: container 34e683d853c2a8a334ba1fc74c0801a982b3a40fa3d9156dc91ed3960ccf2d0f has dependent containers which must be removed before it: 1b075e6e5b99890c0366d78d8d0fb83d2c9e48076c95b393008e31d5783a2891: container already exists

Describe the results you expected

The containers should be recreated as usual. This works on docker-ce (tested on version 24.0.9).

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.8
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.el9.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: fb8c4bf50dbc044a338137871b096eea8041a1fa'
  cpuUtilization:
    idlePercent: 99.4
    systemPercent: 0.2
    userPercent: 0.4
  cpus: 2
  databaseBackend: boltdb
  distribution:
    distribution: rhel
    version: "9.4"
  eventLogger: journald
  freeLocks: 2044
  hostname: localhost
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.14.0-427.33.1.el9_4.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 628297728
  memTotal: 3803070464
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-3.el9_4.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-1.el9.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.3-1.el9.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.3
      commit: 1961d211ba98f532ea52d2e80f4c20359f241a98
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.3-1.el9.x86_64
    version: |-
      slirp4netns version 1.2.3
      commit: c22fde291bb35b354e6ca44d13be181c76a0a432
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 4214996992
  swapTotal: 4215271424
  uptime: 127h 49m 32.00s (Approximately 5.29 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 17546870784
  graphRootUsed: 9354690560
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.9.4-rhel
  Built: 1723107101
  BuiltTime: Thu Aug  8 10:51:41 2024
  GitCommit: ""
  GoVersion: go1.21.11 (Red Hat 1.21.11-1.el9_4)
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4-rhel

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

No

Additional environment details

Docker Compose version v2.26.0

Additional information

No response

mheon commented 1 week ago

Hm. Not sure how much we can help here. Hard dependencies are baked deep into libpod, so replacing a container that other containers depend on is not possible in our design.

andrin55 commented 1 week ago

Maybe it could remove and then re-create them instead of failing?
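The suggestion amounts to cascading the removal: delete the dependent containers first, then the target, then recreate everything. A toy model of that ordering in Python (hypothetical sketch, not Podman's actual libpod code; all names are invented):

```python
from dataclasses import dataclass, field


@dataclass
class Store:
    """Maps container name -> set of names that depend on it."""
    dependents: dict = field(default_factory=dict)

    def add(self, name, depends_on=None):
        self.dependents.setdefault(name, set())
        if depends_on:
            self.dependents[depends_on].add(name)

    def remove(self, name, force=False):
        deps = self.dependents.get(name, set())
        if deps and not force:
            # Mirrors Podman's current behavior: refuse while dependents exist.
            raise RuntimeError(
                f"container {name} has dependent containers which must be "
                f"removed before it: {', '.join(sorted(deps))}")
        # Proposed behavior: remove dependents first, depth-first.
        for d in sorted(deps):
            self.remove(d, force=True)
        del self.dependents[name]
        for s in self.dependents.values():
            s.discard(name)


store = Store()
store.add("container1")
store.add("container2", depends_on="container1")

try:
    store.remove("container1")          # fails, as Podman does today
except RuntimeError as e:
    print(e)

store.remove("container1", force=True)  # cascade: container2 goes first
print(sorted(store.dependents))         # -> []
```

Recreating the removed dependents afterward would be the compose tool's job; the sketch only covers the removal ordering that libpod would have to allow.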