dubo-dubon-duponey closed this 4 years ago
I think there's too much voodoo and complexity in your Dockerfile. There should probably just be a Dockerfile and a docker-compose.yml that automate the build. A bash script that execs into a container is kind of an anti-pattern too.
@claude-leveille (<- a French name? :-)) certainly - this was not meant for building - it's meant for development (in that context, restarting a container every time a single file changes is awful).
Either way, that's fine, just close this if this is not helpful for the project :-).
I completely understand that you're not shipping what comes out of the Dockerfile; I'm simply saying that your bash script reimplements a lot of what the `docker-compose run` sub-command does (similar to `docker run`, but with the cmd-opts declared in the docker-compose.yml, which is better than bash). Compose also lets you start the services that the app depends on, with inter-service dependencies. Take the following example:
```yaml
version: '2'
services:
  shell:
    build:
      context: .
      dockerfile: dckr/Dockerfile
    command: bash
    volumes:
      - .:/app
    working_dir: /app
    ports:
      - 8080:8080
    depends_on:
      - db
      - redis
  redis:
    ...
  db:
    ...
```
When you run `docker-compose run --rm shell`, you get dropped into a bash shell with the repo as your working directory. Your db and redis are also automatically started and available at $PROTO://db:$PORT and $PROTO://redis:$PORT, respectively. This is also cross-platform: your bash script works on macOS and MAYBE Linux, and definitely not on stock Windows, even with WSL.
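One caveat with the example above (a general compose note, not something from the original script): in the v2 file format, `depends_on` only controls start order; it does not wait for `db` or `redis` to actually be ready. Compose file format 2.1 added a condition form tied to healthchecks, which a sketch might use like this:

```yaml
# Requires compose file format 2.1+. The healthcheck command shown is a
# made-up example for a Postgres db service.
version: '2.1'
services:
  shell:
    depends_on:
      db:
        condition: service_healthy
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
```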
As for closing your PR, I don't have that kind of access to this repo. I think this is a good idea, and it's something I've implemented with the tools I mentioned. It's actually very impressive that you've been able to produce that Dockerfile, but remember that most devs are not at all looking to go full-on build engineer when they start hacking away at user stories. A simpler implementation (fewer lines, less imperative code like bash) is always better here.
And yes, my name is French. I'm from Québec :smile:
Closing for lack of interest.
Thanks a lot for Proton! Love the service, and love the client.
Here is a PR that aims to make it simple to use Docker to develop on WebClient.
TL;DR
`./dckr/do COMMAND`

where `COMMAND` is whatever you would run if developing natively on the host. Typically:

`./dckr/do yarn start`
Details
The `./dckr/do` script is a convenience wrapper that builds and starts a container (if something has changed in the `./dckr` folder, or if there is no running container), and then "passes" the arguments to be exec-ed into it. The image being built has all node dependencies installed (as specified at build time).
This pattern allows for fast execution of commands and seamless live work.
The current working directory is mounted inside the container (any change in the source tree is picked up by webpack inside the container).
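For reference, the wrapper pattern described above can be sketched roughly like this (a hypothetical reconstruction, not the actual script; the image/container names, the label, and the fingerprinting scheme are all assumptions):

```shell
#!/bin/sh
# Hypothetical sketch of a ./dckr/do-style wrapper: rebuild only when
# dckr/ changes, keep one long-lived container, exec every command into it.
set -e

# Sketch only: bail out quietly when there is nothing to work with.
[ -d dckr ] || exit 0
command -v docker >/dev/null 2>&1 || exit 0
docker info >/dev/null 2>&1 || exit 0

IMAGE=webclient-dev           # made-up image name
CONTAINER=webclient-dev       # made-up container name

# Fingerprint dckr/ so that a change there invalidates the running container.
STAMP=$(find dckr -type f -exec cat {} + | cksum | cut -d' ' -f1)

# Is a container with the current fingerprint already running?
if [ -z "$(docker ps -q -f "name=$CONTAINER" -f "label=stamp=$STAMP")" ]; then
  docker rm -f "$CONTAINER" 2>/dev/null || true
  docker build -t "$IMAGE" dckr
  docker run -d --name "$CONTAINER" --label "stamp=$STAMP" \
    -v "$PWD:/app" -w /app -p 8080:8080 \
    "$IMAGE" sleep infinity
fi

# Hand the actual command over to the running container.
exec docker exec -it "$CONTAINER" "$@"
```

Because the container stays up between invocations, each command only pays the cost of a `docker exec`, which is what makes subsequent runs fast.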
There is some dark voodoo to make sure that the `node_modules` subfolder is NOT shared between the host and the container, allowing one to easily switch from one to the other without having to worry about it. The server is exposed by default on
`localhost:8080`, though the port may be changed by passing the `DCKR_PORT` environment variable to the script, e.g. `DCKR_PORT=12345 ./dckr/do yarn start`.
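Taken together, the `node_modules` masking and the port override typically boil down to a handful of `docker run` flags. A minimal sketch (the image name and flag layout are assumptions, not taken from the script; it prints the command instead of running it):

```shell
# Sketch only: assemble the flags and print the resulting command.
# The image name (webclient-dev) is made up.
PORT="${DCKR_PORT:-8080}"        # default to 8080 unless DCKR_PORT is set
set -- -v "$PWD:/app" \
       -v /app/node_modules \
       -p "$PORT:8080"
# The second -v mounts an anonymous volume over /app/node_modules,
# shadowing that path in the bind mount: host and container each keep
# their own copy of node_modules.
printf '%s\n' "docker run $* webclient-dev yarn start"
```

The anonymous-volume trick is a common way to get this behavior; whether the actual script does exactly this is an assumption on my part.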
All this has been tested on macOS, and should work on Linux distributions.
Limitations
`./dckr/do` will be SLOW the first time (since it will build the image and install all dependencies - about 5 minutes here), but subsequent runs should be very fast.

Overhead
Typical yarn call (with a hot cache) on the host.
Typical yarn call (with a hot cache) in the container.
The cost of exec-ing in itself is about half a second (on last year's MacBook Pro):
The rest is slower I/O.