osirase closed this issue 2 years ago
I believe this is coming about because the nextcloud and nginx containers both declare/request port 80 in the pod via the `ports:` declaration. My docker-compose file doesn't publish any ports for the nextcloud container, and the nginx configuration points to port 9000 on the nextcloud container.
Any fix for this?
This is a copy of my docker-compose.yml file:
```yaml
version: '3'
services:
  app:
    image: nextcloud:fpm-alpine
    restart: always
    user: www-data
    volumes:
      - nextcloud:/var/www/html:z
      - nc_apps:/var/www/html/custom_apps:z
      - nc_data:/var/www/html/data:z
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB=nextcloud
    env_file:
      - db.env
    depends_on:
      - db
    healthcheck:
      test: ["CMD-SHELL", "curl -k -X POST https://web/index.php/login/v2 || exit 1"]
      interval: 60s
      timeout: 10s
      retries: 10
  db:
    image: postgres:12-alpine
    restart: always
    volumes:
      - postgres_12:/var/lib/postgresql/data:z
    env_file:
      - db.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d nextcloud -U postgres"]
      interval: 60s
      timeout: 10s
      retries: 10
  web:
    image: nginx:alpine
    restart: always
    ports:
      - 8099:80
      - 9443:443
    volumes:
      - nextcloud:/var/www/html:rw
      - nc_apps:/var/www/html/custom_apps:ro
      - nc_data:/var/www/html/data:ro
      - ./nginx_conf:/etc/nginx:ro,z
      - vhost.d:/etc/nginx/vhost.d
    environment:
      - DEFAULT_HOST=server.example.com
      - VIRTUAL_HOST=server.example.com
    depends_on:
      - app
volumes:
  postgres_12:
  nextcloud:
  nc_apps:
  nc_data:
  vhost.d:
```
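For context, the nginx config mounted from `./nginx_conf` is what routes PHP requests to the `app` container on port 9000; the relevant fragment looks roughly like this (an illustrative sketch following the upstream nextcloud fpm examples, not my exact config):

```nginx
# Illustrative fragment: "app" is the nextcloud:fpm-alpine service above
upstream php-handler {
    server app:9000;
}

server {
    listen 80;

    location ~ \.php(?:$|/) {
        fastcgi_pass php-handler;
        include fastcgi_params;
    }
}
```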
Thanks.
Still, is there any solution in podman or podman-compose (other than using a different image)? After all, it's strange that the internal port 80 of the docker images can't be bound when it isn't used anywhere for container-to-container communication.
@kronenpj, thanks! As a side note, I learned about the `healthcheck:` directive today :)
Same issue while running from a kube file. Nothing occupies port 80 except Nextcloud itself, and changing/removing the port config still fails.
Interesting. Is there an issue for this in the k8s project as well?
> Interesting. Is there an issue for this in the k8s project as well?

Don't know. I am using Podman on CentOS 8.3 with a deployment file that is tweaked to work with the `podman play kube` command.
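For illustration, a pod spec along these lines (names and images invented, not my actual deployment) hits the same clash, since all containers in a pod share one network namespace:

```yaml
# Hypothetical pod spec: both containers want port 80, but they share
# the pod's network namespace, so only one of them can actually bind it.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: web-a
      image: nginx:alpine
      ports:
        - containerPort: 80
    - name: web-b
      image: httpd:alpine
      ports:
        - containerPort: 80
```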
I probably have the same issue. I'm trying to start three containers: web with PHP, db with MySQL and PHPMyAdmin. The problem seems to be that both web and PHPMyAdmin are configured to listen on port 80 which I am able to map to completely different ports on my host machine but they still seem to conflict in some way in podman-compose internally.
```yaml
version: '3'
services:
  db:
    image: mysql:5.7
    container_name: db
    environment:
      MYSQL_ROOT_PASSWORD: my_secret_pw
      MYSQL_DATABASE: test_db
      MYSQL_USER: devuser
      MYSQL_PASSWORD: devpass
    volumes:
      - ./database/:/var/lib/mysql:z
    ports:
      - "9906:3306"
  web:
    image: php:7.4-apache
    container_name: web
    depends_on:
      - db
    volumes:
      - ./php/:/var/www/html/:z
    ports:
      - "8000:80"
    stdin_open: true
    tty: true
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: pma
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    ports:
      - "8081:80"
```
`web` starts fine, but Apache in the `phpmyadmin` container complains:
```
podman start -a pma
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
```
Is there anything wrong with my setup?
Unfortunately this seems to be a limitation of podman-compose as of right now. You can't bind to the same port (80) inside the containers. I do appreciate all the work put into podman-compose, but sadly I had to switch back to docker-compose because of issues such as this one.
I too was very confused by the ports being listed as exposed for each container. This seems to be due to how they are published with podman-compose (e.g. `podman pod create --name=moodle-docker --share net -p 127.0.0.1:8000:80`). As a result I too am unable to convert https://github.com/moodlehq/moodle-docker to run with podman.
Relevant: https://stackoverflow.com/questions/60558185/is-it-possible-to-run-two-containers-in-same-port-in-same-pod-by-podman
Apparently this would require creating the pod in a different manner. I am not sure how doable this is (or how `kompose convert` from Kubernetes would handle such a compose file). However, I think fixing this issue is important, as it is not uncommon to have e.g. several different services running on :8080.
I also have this problem. Here is a minimal reproducing example:
```yaml
networks:
  test-net:
services:
  node1:
    image: python
    ports:
      - "8001:80"
    command: python3 -m http.server 80
    networks:
      - test-net
  node2:
    image: python
    ports:
      - "8002:80"
    command: python3 -m http.server 80
    networks:
      - test-net
```
podman-compose version 0.1.7dev: fails, because one of the two containers can't bind port 80 ('address already in use'). docker-compose version 1.25.0: succeeds.
I can get these containers running by not relying on podman-compose:
```
podman network create testing
podman run -d --name test1 --network testing -p 8001:80 python python3 -m http.server 80
podman run -d --name test2 --network testing -p 8002:80 python python3 -m http.server 80
```
However, I am still struggling with podman-compose :(
According to previous comments, this issue can be worked around by creating separate pods for the conflicting containers (though that brings its own problems). However, the pod name and count are hardcoded in the source code.
> podman-compose version 0.1.7dev: failed

Why are you using such an old version of podman-compose? The latest stable pip release is 1.x.
> I can get these containers running by not relying on podman-compose:
> podman network create testing

When 0.1.x was written there was no rootless inter-container communication, so we had to put all containers of the same stack into a shared network namespace and let them communicate via localhost (that's why 0.1.x can't listen on the same port twice). Please upgrade to 1.x.
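That shared-namespace design makes the failure ordinary socket semantics rather than anything podman-specific: two processes in one network namespace simply can't bind the same TCP port. A minimal Python sketch of the same "address already in use" error:

```python
import socket

# First socket binds an OS-chosen free port on loopback.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
port = s1.getsockname()[1]
s1.listen()

# Second socket in the same network namespace tries the same port
# and fails with EADDRINUSE, just like the second container in a pod.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True

print(conflict)  # True
```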
Upgrading to 1.x solves this issue. Sorry for my mistake.
Obsolete issue should be closed now.
So this only occurs when running my fully populated docker-compose.yml file. It works when launched on its own with just podman:
And in a docker-compose.yml file on its own.
However, when run with my full docker-compose.yml:
The output is that one container exits early with the error:
So it seems like a straightforward error... something is occupying port 80... but within the container itself?
To be safe, I checked with `ss -tulpn` to see if anything on the host was bound to port 80 and somehow interfering. The result was nothing.
The other culprit was nginx... but that's 'internally' binding to 80 within its own container, right? To be safe, I changed that to 81 as well, then did a docker-compose down and up.
The one mapping host 8080 to container 80 is the Nextcloud container that's throwing the error and failing to start... The top one is the nginx container, whose ports I incremented by 1 just to be safe.
Still the same error.
I did notice that the `podman ps` output is a bit odd for the containers in the pod.
Why does each container show the complete list of host-to-container port mappings? Shouldn't each show only its own?
Any input would be appreciated.