Closed: PavelSosin-320 closed this issue 3 years ago
"forwardPorts"
is not using Docker's network capabilities. It uses VS Code's tunneling feature. Which property are you referring to?
This is exactly the issue in the dev-container documentation: port forwarding is presented as if it were the only Docker or Podman network configuration. To make a running dev container accessible over a TCP/IP network, a complete network configuration is required, even though the CNI implementation does all the work behind the scenes. It is done in two steps: create a network, then run the container with its ports published.
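For illustration, a minimal sketch of those two steps as plain CLI calls (the network, container, and image names here are placeholders):

```sh
# Step 1: create a user-defined network (this is the driver/CNI configuration).
docker network create --driver=bridge devnet

# Step 2: run the container attached to that network and publish the ports
# that should be reachable over TCP/IP.
docker run -d --name devctr --network devnet --publish 8080:80 my-dev-image
```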
Are you asking us to split the functionality behind "forwardPorts" up?
Yes, I think this is right, because the functionality is different for remote server access and for Docker/Podman. In the latter case it is pure Docker/Podman networking configuration; only the parameter name is shared. Container networking terminology is part of an agreed, widely adopted industry-standard interface, which defines the semantics of every term. I believe the developer doesn't think much about what happens when docker network create --driver=bridge and docker run --publish are used, but the result is absolutely clear. The details can be seen using docker container ls, podman port, docker network inspect, and docker container inspect, as shown below.
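For example (devnet and devctr are the placeholder names from the sketch above):

```sh
docker container ls              # PORTS column, e.g. 0.0.0.0:8080->80/tcp
podman port devctr               # the container's port mappings
docker network inspect devnet    # driver, subnet, gateway, attached containers
docker container inspect devctr  # full NetworkSettings, incl. port bindings
```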
"forwardPorts"
is using docker exec
to establish a connection to the container and forward any connections to a port on localhost into the container. docker exec
might be based on what you explain, but docker exec
does not change or configure it. If anything we could consider renaming the property to better reflect what it does.
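For what it's worth, the effect is reproducible by hand. A rough sketch of the same idea, not VS Code's actual code path: it assumes socat is installed on both the host and in the container, and devctr, 8080, and 3000 are placeholders:

```sh
# Listen on localhost:8080 on the host and relay each connection through
# `docker exec` into port 3000 inside the container -- no Docker network
# configuration is touched.
socat TCP-LISTEN:8080,fork,reuseaddr,bind=127.0.0.1 \
  EXEC:'docker exec -i devctr socat - TCP\:localhost\:3000'
```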
I don't propose to change the implementation, and I don't like writing documentation. But everything related to Docker container port usage is already documented and explained with examples covering 90% of use cases. Docker exec uses the Docker engine URL, which can be TCP port 2375 or 2376, a Unix domain socket, a named pipe, or even a gRPC socket if a bare dockerd is used. It has already been configured in the daemon.json config file before Docker starts. The user has to think about the available port early, or actually never on Linux, if the fd:// Docker host is configured to start dockerd as a systemd service. On the client side, docker exec and podman exec use a Docker context or a Podman connection. docker exec containerID communicates with the container via an allocated tty, i.e. there is no port to configure. Even when the client and the container run on the same host, the client has no way to communicate with the container over TCP/SSH directly, bypassing Docker's network driver (bridge by default). A Dockerfile can EXPOSE port 22 for SSH in any image; later, --publish can map container port 22 to host port 220, etc., and this is really dynamic. But the terminology is: publish the SSH port on the named local/host/overlay network (bridge if omitted) as port 220.
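A minimal sketch of the flow described above (the server, image, and port numbers are placeholders):

```sh
# Engine endpoint: chosen before Docker starts, e.g. in /etc/docker/daemon.json
#   { "hosts": ["fd://"] }               (systemd socket activation on Linux)
#   { "hosts": ["tcp://0.0.0.0:2376"] }  (TCP, normally with TLS)
# The client then selects an endpoint via a context:
docker context create remote --docker "host=ssh://user@server"
docker context use remote

# EXPOSE 22 in the Dockerfile is metadata only; the actual mapping is dynamic:
docker run -d --name sshbox --publish 220:22 my-ssh-image
docker port sshbox     # 22/tcp -> 0.0.0.0:220
```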
Yet another parameter is required to make dev-container port publishing compatible with Podman: "podman.usermode.root-full": boolean. Podman is able to run containers without root privileges and still separate users' execution environments, unlike Docker. But Podman networking for root-full and rootless containers is different, so the Podman user mode has to be designated explicitly: in other words, whether "sudo -i" is necessary before starting to use local Podman, or before connecting to remote Podman as a user with root privileges.
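A small sketch of the difference, assuming a reasonably recent Podman (the format path is Podman's Go-template syntax):

```sh
# Rootless and root-full Podman are two separate instances with separate
# storage and networking; a tool has to know which one it is talking to.
podman info --format '{{.Host.Security.Rootless}}'        # user instance: true
sudo podman info --format '{{.Host.Security.Rootless}}'   # root instance: false

# Rootless containers get user-mode networking (slirp4netns/pasta) instead of
# a root-owned bridge, and cannot bind host ports below
# net.ipv4.ip_unprivileged_port_start without extra configuration.
```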
@chrmarti After testing Podman+Buildah installed locally on my laptop for several months now, I still can't understand why the dev-container configuration and the infrastructure configuration must be squeezed into a single file. I see many issues where users cry: "Why do I need to edit my project when I'm roaming from server to server, from local infrastructure to cloud infrastructure, from one Docker container runtime to another, from the development environment to the testing environment?" Every project can have more than one target, and every target has its own networking layout: endpoints, available ports, and port ranges. The container running locally and on Azure infrastructure is exactly the same container. The workspace located on the laptop and the workspace located inside a Docker volume on Azure infrastructure is precisely the same workspace.
Hey @chrmarti, this issue might need further attention.
@PavelSosin-320, you can help us out by closing this issue if the problem no longer exists, or adding more information.
This issue has been closed automatically because it needs more information and has not had recent activity. See also our issue reporting guidelines.
Happy Coding!
Port forwarding in the devcontainer.json file has the wrong name and wrong documentation. All OCI runtimes use Container Networking Interface (CNI) implementations, i.e. drivers, but always with the same terminology and semantics. Port forwarding from the container to the host is done internally, but it is not the main function: CNI does many other things, including creating firewall rules, mapping ports, and assigning IP addresses. The term currently used has nothing in common with what it does.

The simple list of ports doesn't simplify the correct usage of container networking; it only keeps the developer from writing a correct networking definition. Ports "exposed" in a Dockerfile are known inside containers and used when a developer writes TCP servers that run inside Docker containers and listen on some ports, but they are only metadata. This is not the dev-container case: they are not usable unless CNI parameters are supplied when the container is run. This is a fake promise.

There are no "temporary" forwarded ports in Docker: after a container or the Docker daemon restarts, existing containers are restored with the same CNI parameters. Podman doesn't even have a daemon to restart that could forget a "temporary" mapping. I suppose that a clear separation of networking parameters from container image creation, with references to the documentation, would help users more than the current simplification, and would unlock Docker's capabilities.
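A quick demonstration of both points, using the stock nginx image (which EXPOSEs port 80):

```sh
docker run -d --name web1 nginx          # EXPOSE 80 is metadata; nothing mapped
docker port web1                         # prints nothing

docker run -d --name web2 --publish 8080:80 nginx
docker port web2                         # 80/tcp -> 0.0.0.0:8080

# The mapping is not "temporary": it survives a container restart.
docker restart web2 && docker port web2  # same mapping restored
```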