containers / podman-compose

a script to run docker-compose.yml using podman

Podman-compose - Nextcloud - (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80 #206

Closed: osirase closed this 2 years ago

osirase commented 4 years ago

This only occurs when running with my fully populated docker-compose.yml file. It works when launched on its own with plain podman:

podman run -d \
-p 1337:80 \
-v ~/containerVolumes/nextcloud/:/var/www/html:z \
-v ~/containerVolumes/nextcloud/apps:/var/www/html/custom_apps:z \
-v ~/containerVolumes/nextcloud/config:/var/www/html/config:z \
-v ~/containerVolumes/nextcloud/data:/var/www/html/data:z \
-v ~/containerVolumes/nextcloud/theme:/var/www/html/themes:z \
nextcloud

It also works when defined in a docker-compose.yml file on its own:

❯ cat docker-compose.yml
---
version: "2.0"
services:
  nextcloud:
    image: nextcloud
    container_name: nextcloud
    ports:
      - 1000:80
    volumes:
      - ./nextcloud:/var/www/html:z
      - ./nextcloud/apps:/var/www/html/custom_apps:z
      - ./nextcloud/config:/var/www/html/config:z
      - ./nextcloud/data:/var/www/html/data:z
      - ./nextcloud/themes:/var/www/html/themes:z
    restart: unless-stopped

However, when run with my full docker-compose.yml:

---
version: "3.0"
services:
  nginx:
    image: nginx
    container_name: nginxReverseProxy
    ports:
      - 8080:80
      - 4443:443
    volumes:
      - ./nginx/conf:/etc/nginx:ro
      - ./nginx/html:/usr/share/nginx/html:ro
      - ./nginx/log:/var/log/nginx:z
#      - ./nginx/pki
    restart: unless-stopped
  jellyfin:
    image: linuxserver/jellyfin
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
#      - UMASK_SET=022 #optional
    volumes:
      - ./jellyfin/library:/config:z
      - ./jellyfin/tvseries:/data/tvshows:z
      - ./jellyfin/movies:/data/movies:z
#      - /opt/vc/lib:/opt/vc/lib #optional
    ports:
      - 8096:8096
      - 8920:8920 #optional
#    devices:
#      - /dev/dri:/dev/dri #optional
#      - /dev/vcsm:/dev/vcsm #optional
#      - /dev/vchiq:/dev/vchiq #optional
#      - /dev/video10:/dev/video10 #optional
#      - /dev/video11:/dev/video11 #optional
#      - /dev/video12:/dev/video12 #optional
    restart: unless-stopped
  nextcloud:
    image: nextcloud
    container_name: nextcloud
    ports:
      - 1337:80
    volumes:
      - ./nextcloud:/var/www/html:z
      - ./nextcloud/apps:/var/www/html/custom_apps:z
      - ./nextcloud/config:/var/www/html/config:z
      - ./nextcloud/data:/var/www/html/data:z
      - ./nextcloud/themes:/var/www/html/themes:z
    restart: unless-stopped

When this is run, one container exits early with the error:

❯ podman logs 5ed912b7a69600ef408aaf1cf5aafc9552df33571bbc401ec02b672acc908d84
AH00557: apache2: apr_sockaddr_info_get() failed for 5ed912b7a696
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down

So it seems like a straightforward error... something is occupying port 80... but within the container itself?

To be safe, I checked with ss -tulpn to see whether anything on the host was bound to port 80 and somehow interfering. The result was nothing:

ss -tulpn | grep -i 80

The other possible culprit was nginx... But that binds to 80 'internally', within its own container, right?? To be safe, I changed that to 81 as well. Then I did a compose down and up.

❯ podman pod inspect containerVolumes
{
     "Config": {
          "id": "2217f14c37ececfe54a58407d2cc308ea18ecf44d0d7175c5a2a1bc7a585b6aa",
          "name": "containerVolumes",
          "hostname": "containerVolumes",
          "labels": {

          },
          "cgroupParent": "/libpod_parent",
          "sharesCgroup": true,
          "sharesNet": true,
          "infraConfig": {
               "makeInfraContainer": true,
               "infraPortBindings": [
                    {
                         "hostPort": 1338,
                         "containerPort": 81,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 8096,
                         "containerPort": 8096,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 8080,
                         "containerPort": 80,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 8920,
                         "containerPort": 8920,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 4443,
                         "containerPort": 443,
                         "protocol": "tcp",
                         "hostIP": ""
                    }

The one mapping host 8080 to container 80 is the nextcloud container that's throwing the error and failing to start... The top one is the nginx container, whose two ports I incremented by 1 just to be safe.

Still the same error.

I did notice that the podman ps output for the containers in the pod is odd.

❯ podman ps
CONTAINER ID  IMAGE                                  COMMAND               CREATED         STATUS             PORTS                                                                                      NAMES
7588dc03e591  docker.io/linuxserver/jellyfin:latest                        14 seconds ago  Up 10 seconds ago  0.0.0.0:8080->80/tcp, 0.0.0.0:1338->81/tcp, 0.0.0.0:4443->443/tcp, 0.0.0.0:8096->8096/tcp  jellyfin
3d96a6fad046  docker.io/library/nginx:latest         nginx -g daemon o...  15 seconds ago  Up 11 seconds ago  0.0.0.0:8080->80/tcp, 0.0.0.0:1338->81/tcp, 0.0.0.0:4443->443/tcp, 0.0.0.0:8096->8096/tcp  nginxReverseProxy

Why does each container show the complete list of host-to-container port mappings? Shouldn't each show only its own?

Any input would be appreciated.

kronenpj commented 3 years ago

I believe this comes about because the nextcloud and nginx containers both request port 80 in the pod via their ports: declarations. My docker-compose file doesn't publish any ports for the nextcloud container; instead, the nginx configuration points to port 9000 on the nextcloud container.
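
For illustration, the conflict can be reproduced with plain podman (the pod and container names below are hypothetical). Every container in a pod shares the pod's network namespace, so the second Apache instance cannot bind port 80:

podman pod create --name demo -p 8080:80
podman run -d --pod demo --name web1 docker.io/library/httpd
podman run -d --pod demo --name web2 docker.io/library/httpd
podman logs web2
# (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80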

Dacit commented 3 years ago

Any fix for this?

kronenpj commented 3 years ago

This is a copy of my docker-compose.yml file:

version: '3'

services:
  app:
    image: nextcloud:fpm-alpine
    restart: always
    user: www-data
    volumes:
      - nextcloud:/var/www/html:z
      - nc_apps:/var/www/html/custom_apps:z
      - nc_data:/var/www/html/data:z
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB=nextcloud
    env_file:
      - db.env
    depends_on:
      - db
    healthcheck:
      test: ["CMD-SHELL", "curl -k -X POST https://web/index.php/login/v2 || exit 1"]
      interval: 60s
      timeout: 10s
      retries: 10

  db:
    image: postgres:12-alpine
    restart: always
    volumes:
      - postgres_12:/var/lib/postgresql/data:z
    env_file:
      - db.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d nextcloud -U postgres"]
      interval: 60s
      timeout: 10s
      retries: 10

  web:
    image: nginx:alpine
    restart: always
    ports:
      - 8099:80
      - 9443:443
    volumes:
      - nextcloud:/var/www/html:rw
      - nc_apps:/var/www/html/custom_apps:ro
      - nc_data:/var/www/html/data:ro
      - ./nginx_conf:/etc/nginx:ro,z
      - vhost.d:/etc/nginx/vhost.d
    environment:
      - DEFAULT_HOST=server.example.com
      - VIRTUAL_HOST=server.example.com
    depends_on:
      - app

volumes:
  postgres_12:
  nextcloud:
  nc_apps:
  nc_data:
  vhost.d:

Dacit commented 3 years ago

Thanks.

Still, is there any solution in podman or podman-compose (other than using a different image)? After all, it's strange that the internal port 80 of the docker images can't be bound when it isn't used anywhere for container-to-container communication.

helge000 commented 3 years ago

@kronenpj, thanks! As a side note, I learned about the healthcheck: directive today :)

nickcolea commented 3 years ago

Same issue while running from a kube file. Nothing occupies port 80 except Nextcloud itself, and changing/removing the port config still fails.

Dacit commented 3 years ago

Interesting. Is there an issue for this in the k8s project as well?

nickcolea commented 3 years ago

Interesting. Is there an issue for this in the k8s project as well?

Don't know. I am using Podman on CentOS 8.3 with a deployment file that is tweaked to work with the podman play kube command.

frenzymadness commented 3 years ago

I probably have the same issue. I'm trying to start three containers: web with PHP, db with MySQL, and PHPMyAdmin. The problem seems to be that both web and PHPMyAdmin are configured to listen on port 80. I can map them to completely different host ports, but they still seem to conflict somewhere inside podman-compose.

version: '3'

services:
  db:
    image: mysql:5.7
    container_name: db
    environment:
      MYSQL_ROOT_PASSWORD: my_secret_pw
      MYSQL_DATABASE: test_db
      MYSQL_USER: devuser
      MYSQL_PASSWORD: devpass
    volumes:
      - ./database/:/var/lib/mysql:z
    ports:
      - "9906:3306"
  web:
    image: php:7.4-apache
    container_name: web
    depends_on:
      - db
    volumes:
      - ./php/:/var/www/html/:z
    ports:
      - "8000:80"
    stdin_open: true
    tty: true
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: pma
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    ports:
      - "8081:80"

web starts fine, but Apache in the phpmyadmin container complains:

podman start -a pma
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down

Is there anything wrong with my setup?

Dacit commented 3 years ago

Unfortunately this seems to be a limitation of podman-compose as of right now: two containers can't bind the same port (80) internally. I do appreciate all the work put into podman-compose, but sadly I had to switch back to docker-compose because of issues such as this one.

septatrix commented 3 years ago

I too was very confused by the ports being listed as exposed for each container. This seems to be due to how they are published with podman-compose (e.g. podman pod create --name=moodle-docker --share net -p 127.0.0.1:8000:80). As a result, I too am unable to convert https://github.com/moodlehq/moodle-docker to run with podman.

septatrix commented 3 years ago

Relevant: https://stackoverflow.com/questions/60558185/is-it-possible-to-run-two-containers-in-same-port-in-same-pod-by-podman Apparently this would require creating the pod in a different manner. I am not sure how doable this is (or how kompose convert from Kubernetes would handle such a compose file). However, I think fixing this issue is important, as it is not too uncommon to have e.g. different services running on :8080.
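
The approach suggested there, sketched below with hypothetical names, is to give each conflicting service its own pod so that the two port-80 binds land in separate network namespaces:

podman pod create --name svc-a -p 8001:80
podman pod create --name svc-b -p 8002:80
podman run -d --pod svc-a --name web-a docker.io/library/httpd
podman run -d --pod svc-b --name web-b docker.io/library/httpd
# both Apache instances can now bind :80, each inside its own pod

The trade-off is that containers in separate pods can no longer talk to each other over localhost and need a shared network instead.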

SadPencil commented 2 years ago

I also have this problem. Here is a minimal reproducing example:

networks:
  test-net:

services:
  node1:
    image: python
    ports:
      - "8001:80"
    command: python3 -m http.server 80
    networks:
      - test-net

  node2:
    image: python
    ports:
      - "8002:80"
    command: python3 -m http.server 80
    networks:
      - test-net

podman-compose version 0.1.7dev: fails, because one of the two containers can't bind port 80 ('address already in use').
docker-compose version 1.25.0: succeeds.

I can get these containers running by not relying on podman-compose:

podman network create testing
podman run -d --name test1 --network testing -p 8001:80 python python3 -m http.server 80
podman run -d --name test2 --network testing -p 8002:80 python python3 -m http.server 80

However, I am struggling with podman-compose :(
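
For what it's worth, the network-based variant above can be checked with curl against the two host ports from the example:

curl http://localhost:8001/
curl http://localhost:8002/
# each returns the directory listing served by python3 -m http.server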

SadPencil commented 2 years ago

According to previous comments, this issue can be worked around by creating multiple pods for the conflicting containers (but that brings more problems). However, the pod name and count are hardcoded inside the source code.

muayyad-alsadi commented 2 years ago

podman-compose version 0.1.7dev: failed,

Why are you using such an old version? The latest stable pip version is 1.x.

I can get these containers running by not relying on podman-compose: podman network create testing

When 0.1.x was written there was no rootless inter-container communication, so we had to put all containers of the same stack in a shared network namespace and let them communicate via localhost (that's why 0.1.x can't listen on the same port twice).

please upgrade to 1.x
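
For anyone landing here, a typical upgrade, assuming podman-compose was installed via pip:

pip3 install --user --upgrade podman-compose
podman-compose version   # should now report 1.x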

SadPencil commented 2 years ago

Upgrading to 1.x solves this issue. Sorry for my mistake.

This obsolete issue can be closed now.