microsoft / vscode-remote-release

Visual Studio Code Remote Development: Open any folder in WSL, in a Docker container, or on a remote machine using SSH and take advantage of VS Code's full feature set.
https://aka.ms/vscode-remote

Separate Port forwarding from port exposure and mapping in the dev-container file and documentation #4015

Closed PavelSosin-320 closed 3 years ago

PavelSosin-320 commented 3 years ago

Port forwarding in the devcontainer.json file has a misleading name and misleading documentation. All OCI runtimes use Container Networking Interface (CNI) implementations, i.e. drivers, with the same terminology and semantics everywhere. Port forwarding from the container to the host is done internally, but it is not the main function: the driver also creates firewall rules, maps ports, assigns IP addresses, and so on. The term currently used has little in common with what it does, and a simple list of ports does not make container networking easier to use correctly; it only keeps the developer from writing a correct networking definition.

Ports "EXPOSEd" in a Dockerfile are known inside the container and are used when the developer runs TCP servers inside Docker containers that listen on those ports. They are only metadata; this is not the dev-container case. They are not usable unless CNI parameters are supplied when the container is run, so presenting them as forwarded is a false promise. There are also no "temporary" forwarded ports in Docker: after a container or Docker daemon restart, existing containers are restored with the same CNI parameters. Podman does not even have a daemon to restart that could forget a "temporary" mapping.

I suggest that a clear separation of networking parameters from container image creation, with references to the documentation, would help users more than the current simplification and would unlock Docker's capabilities.
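For what it's worth, the devcontainer.json schema already separates the two mechanisms being discussed: "forwardPorts" (VS Code's own tunnel) and "appPort" (handed to the container runtime as a real published port). A minimal sketch, with illustrative image and port values:

```jsonc
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",

  // Tunneled by VS Code itself; no change to the container's network configuration.
  "forwardPorts": [3000],

  // Published by the runtime (equivalent to docker run -p 8080:80):
  // real port mapping plus firewall rules.
  "appPort": ["8080:80"]
}
```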

chrmarti commented 3 years ago

"forwardPorts" is not using Docker's network capabilities. It uses VS Code's tunneling feature. Which property are you referring to?

PavelSosin-320 commented 3 years ago

This is exactly the issue with the dev-container documentation: port forwarding is presented as if it were the only Docker or Podman network configuration. To make a running dev-container accessible over TCP/IP, the complete configuration is required, even though the CNI implementation does all the work behind the scenes. It is done in three steps:

  1. The image metadata records which ports are served inside the container — the EXPOSE directive.
  2. During container instantiation, the port range can be overridden and extended.
  3. The created container is attached to a network, ports are mapped, and the firewall is configured to make the container accessible from outside — the --expose and --publish options.

Such a configuration has the same lifetime as the container — is that "temporary"? This procedure is applied to every container and to every port used for SSH terminal access, testing in the browser, and communication with the host. Port randomization is allowed only inside predefined ranges. This may be the most complex part of cloud application engineering: the CNI, i.e. the network driver, applies very strict rules depending on the user's root privileges, the network type, and the protocol used, because of the firewall dependency. Any error in the CNI configuration prevents the dev-container from starting, with a pile of very technical messages. Fortunately, sensible defaults for the CNI parameters can be derived from the application-layer protocol (SSH, HTTP, HTTPS, WS, WSS, gRPC) and from the container's visibility scope: pod or localhost, host machine (i.e. bridge), or overlay network (i.e. ingress network). Container networking (CNI) is as much an organic part of the CNCF as the OCI container format.
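The three steps above can be sketched as follows (image name, tag, and ports are illustrative):

```dockerfile
# Step 1 — image metadata only: EXPOSE records the ports, it publishes nothing.
FROM alpine:3.19
EXPOSE 22 8080

# Steps 2 and 3 happen at container instantiation, outside the image:
#   docker run -d -p 220:22 -p 8080:8080 my-image   # map ports, wire up firewall rules
#   docker run -d -P my-image                       # publish every EXPOSEd port
#                                                   # to a random host port
```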
chrmarti commented 3 years ago

Are you asking us to split the functionality behind "forwardPorts" up?

PavelSosin-320 commented 3 years ago

Yes, I think this is right, because the functionality is different for remote server access and for Docker / Podman. In the latter case it is a pure Docker / Podman networking configuration; only the parameter name is shared. Container networking terminology is part of an agreed, widely adopted industry-standard interface that defines the semantics of every term. I believe the developer doesn't have to think much about what happens when `docker network create --driver bridge` or `docker run --publish` is used, but the result is absolutely clear. The details can be inspected with `docker ps`, `podman port`, `docker network inspect`, and `docker container inspect`.

chrmarti commented 3 years ago

"forwardPorts" uses docker exec to establish a connection to the container and forwards any connections made to a port on localhost into the container. docker exec might be built on top of what you describe, but it does not change or configure it. If anything, we could consider renaming the property to better reflect what it does.
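The distinction chrmarti describes can be illustrated conceptually: a localhost tunnel is just a userspace relay and never touches the container's network configuration. The sketch below uses plain Python sockets — it is not VS Code's actual implementation, which relays over the docker exec stdio channel rather than a direct TCP connection — but the data flow is the same idea:

```python
import socket
import threading


def forward(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass


def tunnel(listen_port: int, target_host: str, target_port: int) -> None:
    """Listen on localhost:listen_port and relay each connection to the target."""
    listener = socket.create_server(("127.0.0.1", listen_port))

    def accept_loop() -> None:
        while True:
            client, _ = listener.accept()
            upstream = socket.create_connection((target_host, target_port))
            # One relay thread per direction.
            threading.Thread(target=forward, args=(client, upstream), daemon=True).start()
            threading.Thread(target=forward, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
```

In VS Code's case the `socket.create_connection` step is replaced by a pipe into a helper running inside the container, which is why no EXPOSE or --publish configuration is needed for "forwardPorts" to work.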

PavelSosin-320 commented 3 years ago

I don't propose to change the implementation, and I don't like writing documentation. But everything related to Docker container port usage is already documented and explained, with examples covering 90% of use cases.

docker exec uses the Docker engine URL, which can be TCP port 2375 or 2376, a Unix domain socket, a named pipe, or even a gRPC socket if a bare dockerd is used. It is configured in the daemon.json config file before Docker is started, so the user has to think about the available port early — or actually never on Linux, if the fd:// docker host is configured to start dockerd as a systemd socket-activated service. On the client side, docker and podman exec use a Docker context or a Podman connection. `docker exec <containerID>` communicates with the container via an allocated tty, i.e. there is no port to configure. Even when the client and the container run on the same host, the client has no way to communicate with the container directly over TCP/SSH, bypassing Docker's network driver (bridge by default).

A Dockerfile can EXPOSE port 22 for SSH in every image; later, --publish can map container port 22 to host port 220, and so on — this is really dynamic. But the terminology is: publish the SSH port on the local/host/overlay network (bridge if omitted) as port 220.
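For reference, the engine endpoints mentioned above are configured through the documented "hosts" key in daemon.json (socket path and port are illustrative):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
}
```

Note that on systemd-based distributions the default docker.service unit already passes a -H flag to dockerd, which conflicts with a "hosts" entry in daemon.json; only one of the two may define the listen endpoints.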

PavelSosin-320 commented 3 years ago

Yet another parameter is required to make dev-container port publishing compatible with Podman, e.g. "podman.usermode.root-full": boolean. Podman is able to run containers without root privileges and still separate the execution environments of different users, unlike Docker. But Podman networking differs between root-full and rootless containers, so it will be necessary to designate the Podman user mode explicitly — in other words, whether "sudo -i" is necessary before using local Podman, or whether to connect to remote Podman as a user with root privileges.
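As a sketch, the proposed property might look like the following (hypothetical — this option does not exist in the devcontainer.json schema; the name and semantics are the proposal above):

```jsonc
{
  // Hypothetical property proposed in this issue, not an existing option.
  // false = rootless Podman; true = root-full Podman ("sudo -i" first,
  // or a remote connection with root privileges).
  "podman.usermode.root-full": false
}
```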

PavelSosin-320 commented 3 years ago

@chrmarti After several months of testing Podman+Buildah installed locally on my laptop, I still can't understand why the dev-container configuration and the infrastructure configuration must be squeezed into a single file. I see many issues where users ask: "Why do I need to edit my project when I'm roaming from server to server, from local infrastructure to cloud infrastructure, from one Docker container runtime to another, from the development environment to the testing environment?" Every project can have more than one target, and every target has its own networking layout: endpoints, available ports, and port ranges. A container running locally and on Azure infrastructure is exactly the same container; a workspace located on the laptop and a workspace located inside a Docker volume on Azure infrastructure is precisely the same workspace.

github-actions[bot] commented 3 years ago

Hey @chrmarti, this issue might need further attention.

@PavelSosin-320, you can help us out by closing this issue if the problem no longer exists, or adding more information.

github-actions[bot] commented 3 years ago

This issue has been closed automatically because it needs more information and has not had recent activity. See also our issue reporting guidelines.

Happy Coding!