microsoft / vscode-remote-release

Visual Studio Code Remote Development: Open any folder in WSL, in a Docker container, or on a remote machine using SSH and take advantage of VS Code's full feature set.
https://aka.ms/vscode-remote

Check pre-installed containers running on the same Docker engine to avoid port conflicts #3979

Closed PavelSosin-320 closed 3 years ago

PavelSosin-320 commented 4 years ago

Please check that no containers pre-installed on the same Docker engine are using the same port. Prerequisite: enable network creation/configuration to avoid random dev-container failures. Some Docker engine providers pre-install containers that use well-known ports. One example is MongoDB: Red Hat ships it with CentOS and Docker, perhaps for demo purposes or for fun. The port on the Docker host can be busy simply because it is the default port, so an attempt to run a dev container will fail seemingly at random. It is unpleasant, but running `docker ps` before deployment is unavoidable, and I think it is not fair to require the use of only blank engines. Docker always comes with Docker networking, and every container has to respect its rules. To completely isolate a dev container, the `none` driver must be used, but then what about testing? To partially isolate a dev container, an attachable `vs-code-dev` network has to be created instead of the default bridge, and dev containers have to be connected to it on the fly.

Chuxel commented 4 years ago

The good news here is that none of the containers in this repository map ports directly to the OS anymore. The Node + Mongo definition instead uses forwardPorts in devcontainer.json which will pick a different port if the specified port is already in use. I think that resolves this?
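The "pick a different port if the specified port is already in use" behavior can be illustrated in a few lines. This is only a minimal sketch of the idea, not the extension's actual implementation; the helper name is my own:

```python
import socket

def pick_local_port(preferred: int) -> int:
    """Return `preferred` if it can be bound locally, otherwise an OS-assigned free port."""
    for port in (preferred, 0):  # port 0 asks the kernel for any free port
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("127.0.0.1", port))
            return s.getsockname()[1]  # the port actually bound
        except OSError:  # e.g. EADDRINUSE when `preferred` is already taken
            pass
        finally:
            s.close()
    raise RuntimeError("no free local port available")
```

Because the probe socket is closed before the port is handed back, a race with another process is still possible; a real forwarder would keep the listener open.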

PavelSosin-320 commented 3 years ago

@Chuxel Unfortunately, nothing can help a developer who doesn't know what he or she is doing. A very common scenario: a developer wants to test several images providing HTTP services, and every image exposes container port 80, as most such images do. The common way to run them on the same engine is to map the exposed port 80 to host ports 7070, 8080, 9090, etc. The port exposed in the Dockerfile only designates the kind of interface provided: HTTP, HTTPS, JDBC. In the default scenario, when a container is attached to the bridge network and this port is mapped to the same port on the bridge, it ends in a conflict even before node port binding, because the Docker engine tries to create chains in the underlying iptables configuration during container instance creation. This is one of the reasons to follow the safe rule: one container -> one role -> one interface -> one exposed port -> one mapped port per network.

This scenario is supported by Swarm out of the box, because the same ports are mapped in the same manner for all instances of a service without conflicts by the Docker engine. If Swarm mode is activated, containers are attached by default to the ingress network.

All this information is supplied by `docker ps`, then `docker container inspect` and `docker network inspect`, which return JSON-formatted output if the `--format` option is used, but it is not a single-line solution. See all ports:

`docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID`

Specific port check:

`docker inspect --format='{{(index (index .NetworkSettings.Ports "8787/tcp") 0).HostPort}}' $INSTANCE_ID`

Podman supports mapping of port intervals, but the shortcut `-p 80:80` doesn't work in this scenario; the `--expose` option has to be used.
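For scripting, the same port information can be read from the `docker inspect` JSON directly instead of through Go templates. A minimal sketch: the field names follow the `NetworkSettings.Ports` structure used in the commands above, but the sample data here is hypothetical:

```python
import json

def host_ports(inspect_json: str) -> dict:
    """Map each exposed container port (e.g. '80/tcp') to its bound host port, or None."""
    data = json.loads(inspect_json)[0]  # `docker inspect` returns a JSON array
    ports = data["NetworkSettings"]["Ports"] or {}
    return {
        port: bindings[0]["HostPort"] if bindings else None
        for port, bindings in ports.items()
    }

# Hypothetical `docker inspect` output trimmed to the relevant fields:
sample = '[{"NetworkSettings": {"Ports": {"80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "8080"}], "443/tcp": null}}}]'
print(host_ports(sample))  # {'80/tcp': '8080', '443/tcp': None}
```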

Chuxel commented 3 years ago

Please post the specific error you are running into. `forwardPorts` is intended to be used so that you don't have to worry about any of this, by forwarding the port rather than publishing it. It automatically uses an appropriate local port, so you shouldn't hit port conflicts with it.

PavelSosin-320 commented 3 years ago

Playing with Podman, after declaring the same port 80 in `.devcontainer.json` and trying to run the dev container base image and nginx in the same pod, I got:

```
pavel@MSI:/mnt/c/Users/Pavel$ sudo podman run --pod devcontainerfocalBV22 nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2020/11/07 15:37:25 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/11/07 15:37:25 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
... (the bind() failures repeat several times) ...
2020/11/07 15:37:25 [emerg] 1#1: still could not bind()
```

Podman does all checks very strictly. The correct output is:

```
sudo podman run --pod devcontainerfocalBV22 nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

sudo podman pod ls
sudo podman container ls
CONTAINER ID  IMAGE                                               COMMAND               CREATED         STATUS             PORTS                NAMES
1742561a73c5  docker.io/library/nginx:latest                      nginx -g daemon o...  29 minutes ago  Up 29 minutes ago  0.0.0.0:222->22/tcp  optimistic_tu
cc318a2159da  mcr.microsoft.com/vscode/devcontainers/base:latest  bash                  7 hours ago     Up 32 minutes ago  0.0.0.0:222->22/tcp  nervous_shaw
eb8ee322af0f  k8s.gcr.io/pause:3.2                                                      7 hours ago     Up 32 minutes ago  0.0.0.0:222->22/tcp  228021db5cd7
```

After that my nginx is fully functional:

```
pavel@MSI:/mnt/c/Users/Pavel$ curl localhost:8080
<!DOCTYPE html>
...
Welcome to nginx!
...
```

For Docker, a port check is required:

```
$ docker port test
7890/tcp -> 0.0.0.0:4321
9876/tcp -> 0.0.0.0:1234
```

For Podman, a port check is required:

```
$ sudo podman port -l
80/tcp -> 0.0.0.0:8080
```

The last line means nginx port 80 is already mapped to host port 8080.

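For automation, the `docker port` / `podman port` output shown above is easy to parse line by line. A small sketch; the helper name is my own:

```python
def parse_port_lines(output: str) -> dict:
    """Parse 'CONTAINERPORT/proto -> IP:HOSTPORT' lines into {container_port: (ip, host_port)}."""
    mappings = {}
    for line in output.strip().splitlines():
        container_side, host_side = (s.strip() for s in line.split("->"))
        ip, _, host_port = host_side.rpartition(":")  # rpartition splits on the last colon
        mappings[container_side] = (ip, host_port)
    return mappings

sample = "80/tcp -> 0.0.0.0:8080"
print(parse_port_lines(sample))  # {'80/tcp': ('0.0.0.0', '8080')}
```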
PavelSosin-320 commented 3 years ago

I tested Docker and Podman behavior side by side for port-mapping conflicts in the following scenario: two consecutive attempts to run images with the same host port.

```
docker run -dt --name nginx80 -p 8080:80 & docker run -dt --name nginx82 -p 8080:80
```

1. Docker:

```
C:\ProgramData\chocolatey\lib\docker-cli\tools\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint nginx2 (94cb0e36ded5d28a01c4365c4e599ffc713851c61f977db9ca0bab28511d19eb): Bind for 0.0.0.0:880 failed: port is already allocated.
```

The second container instance is stillborn.

2. Podman:

```
sudo podman pod create --name nginx80 -p 8080:80
sudo podman run --pod nginx80 nginx
sudo podman pod create --name nginx81 -p 8080:80
sudo podman run --pod nginx81 nginx
ERRO[0000] error starting some container dependencies
ERRO[0000] "cannot listen on the TCP port: listen tcp4 :8080: bind: address already in use"
Error: error starting some containers: internal libpod error
error starting container 8c98964ecd7995ce89aef759014896c0d35b1a5004b4d21f1623e4b721058ae4: cannot listen on the TCP port: listen tcp4 :8080: bind: address already in use
Error: error starting container ad0eff42c0faf82653fcfdab48affda95357899bf5f7eeced69325f12e16859e: a dependency of container ad0eff42c0faf82653fcfdab48affda95357899bf5f7eeced69325f12e16859e failed to start: container state improper
```

The way to know which ports are in use is:

```
sudo podman port -l
80/tcp -> 0.0.0.0:8080
```

This similarity is expected, because both Docker and Podman use the same CNI version today. To avoid the creation of stillborn dev containers and a rain of issues, either a configuration check or a dry-run capability has to be provided.

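The pre-flight (dry-run) check requested here could be as simple as probing the host port before asking the engine to publish it. A minimal sketch, assuming a plain TCP bind probe is an acceptable approximation; the helper name is my own, not an existing Docker or Podman API:

```python
import socket

def host_port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Check whether `host:port` can still be bound, i.e. no other process
    or already-published container port currently holds it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:  # EADDRINUSE, or EACCES for privileged ports
        return False
    finally:
        s.close()
```

Running such a check before `docker run -p 8080:80 ...` (or `podman pod create -p 8080:80`) would fail fast instead of producing a stillborn container.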
PavelSosin-320 commented 3 years ago

The least enjoyable Docker and Podman feature is that all container, network, and pod definitions are stateful. Even containers, networks, and pods that were created stillborn never disappear completely after a Docker engine restart or a Podman machine re-login. All containers created in the past will be recreated according to what is stored in Docker's and Podman's persistency. If the definition was incorrect, the containers will be recreated stillborn again unless the user removes them manually. Rebooting the Docker VM or Podman machine will help if checkpoint creation is prohibited in the configuration.

chrmarti commented 3 years ago

We first try to use the same port as in the container and only if that is already taken do we fall back to using a random port.

You seem to have started the dev container first and it found that port is unused.

PavelSosin-320 commented 3 years ago

@chrmarti I tried different sequences and numbers of containers, but it really doesn't matter. Docker and Podman instantiate containers asynchronously, i.e. in random order after a restart. The availability check is up to the orchestration engine, which uses health checks. The Podman case can be confusing because it allows a pod and its containers to be in different states: it tries to run all entry-point scripts found in the containers' metadata asynchronously, so some containers can be started and "attachable" while others have failed to start. It ends with very Docker/Podman-specific error messages like "can't find parent PID" or "can't create cgroup".

I don't see who may need random ports when ports are tightly coupled with application L7 protocols, given the firewalls built into Linux kernels, including Microsoft's. A user will never find the answer to "How to enable a random port for a random protocol using nftables on Ubuntu 20.04 or CentOS 8.2?" using Google, nor to "How to configure Norton to enable a random protocol via a random port?"

Finally, port publishing in Podman has no "random" variant, only specific port or range mappings in the format:

`ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort`

Only the host port can be randomized, according to very strict Podman rules for rootful and rootless containers. Podman reads the container image metadata before port mapping. Podman is derived from Kubernetes, and in Kubernetes all of a pod's communication is filtered by protocol, like in all modern firewalls I know. The words "random port" mean "random protocol". There are many sites publishing protocol-to-port mappings. In the case of pod port conflicts, the pod has to be re-created and the ports re-mapped, not the image.
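The `-p`/`--publish` spec formats listed above can be disambiguated by counting colon-separated fields. A rough sketch for the IPv4 forms only; the helper is my own illustration, not Podman code:

```python
def parse_publish(spec: str) -> dict:
    """Split a -p/--publish spec into ip, host port, and container port (IPv4 forms only)."""
    parts = spec.split(":")
    if len(parts) == 1:   # containerPort
        return {"ip": None, "host": None, "container": parts[0]}
    if len(parts) == 2:   # hostPort:containerPort
        return {"ip": None, "host": parts[0], "container": parts[1]}
    ip, host, container = parts  # ip:hostPort:containerPort; ip::containerPort has an empty host field
    return {"ip": ip, "host": host or None, "container": container}

print(parse_publish("8080:80"))        # {'ip': None, 'host': '8080', 'container': '80'}
print(parse_publish("127.0.0.1::80"))  # {'ip': '127.0.0.1', 'host': None, 'container': '80'}
```

The `ip::containerPort` form (empty host field) is exactly the case where the host port is left for the engine to randomize.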

PavelSosin-320 commented 3 years ago

@chrmarti FYI! Linux firewall in 10 minutes. This is how security is managed by network drivers in both Docker and Podman:

```
sudo podman network ls
NAME     VERSION   PLUGINS
podman   0.4.0     bridge,portmap,firewall,tuning

podman network inspect podman
[
  {
    "cniVersion": "0.4.0",
    "name": "podman",
    "plugins": [
      {
        "bridge": "cni-podman0",
        "hairpinMode": true,
        "ipMasq": true,
        "ipam": {
          "ranges": [
            [
              {
                "gateway": "10.88.0.1",
                "subnet": "10.88.0.0/16"
              }
            ]
          ],
          "routes": [
            {
              "dst": "0.0.0.0/0"
            }
          ],
          "type": "host-local"
        },
        "isGateway": true,
        "type": "bridge"
      },
      {
        "capabilities": {
          "portMappings": true
        },
        "type": "portmap"
      },
      {
        "type": "firewall"
      },
      {
        "type": "tuning"
      }
    ]
  }
]
```

Everything else is considered impossible.

chrmarti commented 3 years ago

This looks like Remote-Containers and VS Code are not involved at that stage. Is this an upstream issue?

github-actions[bot] commented 3 years ago

This issue has been closed automatically because it needs more information and has not had recent activity. See also our issue reporting guidelines.

Happy Coding!