**Description**

As a user, I would like to run commands like `npm run serve` or `mkdocs serve` without being required to manually map the port to the container. This way, the commands work more like native commands, and it makes dockerized much more convenient.

**Requirements:**

1. No manual `-p <PORT>` flag should be needed when running `dockerized`.
2. It must remain possible to run the same command twice (e.g. `npm run serve` and `npm run lint`, using the same `npm` command).

**Out of scope:**

**Possible solutions:**

**Predefine ports in the built-in docker-compose.yml** (e.g. always expose port 8080 for npm)

- :x: Doesn't work when different invocations of the same command need different ports (e.g. `npm run serve` and `npm run lint`).

**Automatically detect that a port was opened in the container, and map it afterwards**

- ❌ Exposing the port after starting the container, using native Docker networking. Seems not to be possible.
- :x: Exposing the port to a random host port at start, then tunneling it on the host.
  - :x: We wouldn't know which container port to expose.
- Map a random port to the container for communication, then map on both the host and within the container: `HOST:application port ⇄ HOST:$RANDOM ⇄ CONTAINER:$RANDOM ⇄ CONTAINER:application port`
  - 😕 Ugly solution (might be unreliable, slow, and complex), but it might work; a minimal relay sketch follows.
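A minimal Go sketch of the host-side half of this relay, assuming Docker has already mapped the container's random port to a known host port. The addresses and ports are illustrative, and a real implementation would need the mirror-image relay inside the container:

```go
// Minimal sketch of the host-side relay: accept connections on the
// application port and forward them to the randomly mapped host port.
// The container-side half would do the same in reverse.
package main

import (
	"io"
	"log"
	"net"
)

func relay(listenAddr, targetAddr string) error {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		return err
	}
	for {
		src, err := ln.Accept()
		if err != nil {
			return err
		}
		go func() {
			defer src.Close()
			dst, err := net.Dial("tcp", targetAddr)
			if err != nil {
				log.Printf("dial %s: %v", targetAddr, err)
				return
			}
			defer dst.Close()
			go io.Copy(dst, src) // client -> container
			io.Copy(src, dst)    // container -> client
		}()
	}
}

func main() {
	// Illustrative values: 8000 is the application port the user expects,
	// 49152 stands in for the $RANDOM host port Docker mapped.
	log.Fatal(relay("127.0.0.1:8000", "127.0.0.1:49152"))
}
```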
**Pre-mapping**

Detect which port will be needed, and expose/map it before starting the container.

- 🤔 This will be command-specific regardless of the solution, so not as scalable.
- ❌ Detecting port parameters, i.e. `mkdocs serve --dev-addr=0.0.0.0:8000` reveals that port 8000 should be opened.
  - Possible, with predefined (regex) patterns per command. For mkdocs, this could be `--dev-addr=[^:]+:(?P<port>\d+)` (see the sketch after this list).
  - Patterns can be stored as labels in the docker-compose file: `com.dockerized.port`
- Detecting ports from configuration files, e.g. `dev_addr` within `mkdocs.yml`
  - Will work for simple, single-file, static configuration.
  - Won't work for multi-file, environment-dependent configuration (e.g. `npm run serve --env=production` using a different port).
- Detecting ports from environment variables, e.g. some commands may natively use `$PORT` to determine the port to run on.
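A minimal Go sketch of the pattern-based detection, using the `--dev-addr` pattern from above. The `portPatterns` table and `detectPort` function are illustrative, not existing dockerized code:

```go
// Minimal sketch of pre-mapping: match a command line against
// predefined per-command patterns and extract the port to expose.
// In dockerized the patterns could instead live in docker-compose
// labels such as com.dockerized.port.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// One pattern per command, each with a named "port" capture group.
var portPatterns = map[string]*regexp.Regexp{
	"mkdocs": regexp.MustCompile(`--dev-addr=[^:]+:(?P<port>\d+)`),
}

// detectPort returns the port a command will listen on, if a pattern matches.
func detectPort(args []string) (string, bool) {
	re, ok := portPatterns[args[0]]
	if !ok {
		return "", false
	}
	m := re.FindStringSubmatch(strings.Join(args, " "))
	if m == nil {
		return "", false
	}
	return m[re.SubexpIndex("port")], true
}

func main() {
	args := []string{"mkdocs", "serve", "--dev-addr=0.0.0.0:8000"}
	if port, ok := detectPort(args); ok {
		fmt.Println("pre-map port:", port) // pre-map port: 8000
	}
}
```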
**Two-phase mapping**

Concept:

1. Run the command.
2. Detect the opened ports (a detection sketch is shown after this list).
3. Allow the user to re-run the command with the ports mapped.
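One possible mechanism for step 2, as a minimal sketch: read `/proc/net/tcp` inside the running container, where listening sockets have state `0A`. This assumes the image provides `cat`, the container name is illustrative, and a complete version would also read `/proc/net/tcp6`:

```go
// Minimal sketch of phase one of two-phase mapping: list the ports a
// running container is listening on by reading /proc/net/tcp inside it.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// listeningPorts parses /proc/net/tcp from inside the container and
// returns all ports in state LISTEN (0x0A).
func listeningPorts(container string) ([]int, error) {
	out, err := exec.Command("docker", "exec", container, "cat", "/proc/net/tcp").Output()
	if err != nil {
		return nil, err
	}
	var ports []int
	lines := strings.Split(string(out), "\n")
	for _, line := range lines[1:] { // skip the header row
		fields := strings.Fields(line)
		if len(fields) < 4 || fields[3] != "0A" {
			continue // not a listening socket
		}
		// local_address looks like "00000000:1F40" (hex IP:port)
		hexPort := fields[1][strings.LastIndex(fields[1], ":")+1:]
		port, err := strconv.ParseInt(hexPort, 16, 32)
		if err != nil {
			continue
		}
		ports = append(ports, int(port))
	}
	return ports, nil
}

func main() {
	ports, err := listeningPorts("my-container") // illustrative name
	if err != nil {
		panic(err)
	}
	fmt.Println("listening ports:", ports) // e.g. [8000]
}
```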
Variations:

- ❌ Opened ports are remembered, and automatically mapped on the next run.
  - Mapping can be stored in the project's `dockerized.env`, per command.
    - :x: Prevents running the same command twice (requirement 2).
  - :x: Can be stored in a global cache, based on path (also breaks requirement 2).
    - Better: cache based on all command arguments and the working directory (sketched below).
      - 🤔 Won't work for dynamic arguments, such as `<command> foo $(date)`.
      - But it will work in many cases; covering a bunch of them would already be an improvement.
- Opened ports are remembered, but the user needs to confirm somehow.
- Immediately kill the command, and re-run it with the ports mapped.
  - :x: Not so nice: not all opened ports will be essential to the user, and sometimes a long build process may happen before serving.
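A minimal sketch of the argument-plus-working-directory cache key mentioned above; the hashing scheme is illustrative, not an existing dockerized format:

```go
// Minimal sketch of the proposed cache key: hash all command arguments
// together with the working directory, so `npm run serve` and
// `npm run lint` get independent port mappings.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// cacheKey derives a stable key from the full argument list and cwd.
// Dynamic arguments like $(date) defeat it, as noted above: the shell
// expands them before dockerized sees the args, so the key changes
// on every run.
func cacheKey(args []string) string {
	cwd, _ := os.Getwd()
	h := sha256.Sum256([]byte(cwd + "\x00" + strings.Join(args, "\x00")))
	return hex.EncodeToString(h[:])
}

func main() {
	fmt.Println(cacheKey([]string{"npm", "run", "serve"}))
	fmt.Println(cacheKey([]string{"npm", "run", "lint"})) // a different key
}
```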