Open alnutile opened 7 years ago
Hi!
So, this does still work! Either way is fine, as it doesn't matter where in the container we save the files - neither is out of date.
Here's why:
In the first example there, we're sharing files on my host machine (Mac) at `$(pwd)/application` into the `/opt` directory in a new container (we're spinning up a new container with `docker run`, not re-using an already-running container). We then set the working directory to `/opt` so that whatever command we run is run relative to that directory. So, within the container, `/opt` has the contents of my host machine's `application` directory. I then run the command within the `/opt` directory, where the files are.
In the second example, we're sharing the contents of the `$(pwd)/application` directory on the host machine yet again, except this time we put it into `/var/www/html` within the container instead of `/opt`. The working directory of the container is set to the `/var/www/html` directory, so whatever command we run will be relative to that directory, which contains our application code.
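Spelling that description out as a command (mirroring the first example quoted further down in this thread, so the image and network names are taken from there), the second example would look something like:

```shell
# Same bind mount as before, but targeting /var/www/html instead of /opt;
# the working directory is set to match, so the command runs where the code is.
docker run -it --rm \
  -v $(pwd)/application:/var/www/html \
  -w /var/www/html \
  --network=phpapp_appnet \
  shippingdocker/php \
  php artisan make:auth
```

This requires a running Docker daemon, of course; only the mount target and `-w` differ from the `/opt` variant.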
The second example is more consistent with the `docker-compose.yml` file, but it doesn't matter, since both of these commands are spinning up new containers and can therefore put files anywhere they want within the container.
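For context, the relevant service in that `docker-compose.yml` (the one linked later in this thread) would look roughly like this - a sketch only, where the service, image, and network names are assumptions based on the commands in this thread:

```yml
# Sketch only - see the actual file in the shipping-docker/php-app repo.
services:
  php:
    image: shippingdocker/php   # assumed; the docker run commands use this image
    networks:
      - appnet
    volumes:
      - ./application:/var/www/html   # host code shared at /var/www/html
```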
I hope that makes sense! There's a lot to wrap your head around. It's sort of weird to think about containers as little processes we're creating and destroying all the time - it's like when you realize that git branches are just pointers to a commit - they're cheap and easy to create and destroy. Analogously (I guess...might be a bad analogy), containers allow us to run one-off commands and then exit/destroy the container when done. We can decide to put the files anywhere we want within the container.
Note that the containers we're spinning up here to run these one-off commands are not the same ones that run the application code for use in a browser (altho they are based off of the same image).
We're just creating an additional container, running a job, and killing that container when done.
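As a concrete illustration of that disposable-container idea (assuming the same `php` service from this thread), you can run a throwaway command and then confirm nothing extra is left running:

```shell
# Spin up a fresh container, run one command, destroy it on exit (--rm):
docker-compose run --rm php php --version

# The long-running app containers are unaffected; the one-off is already gone:
docker-compose ps
```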
Let me know if I can go into more detail there - I'm not 100% sure I'm clarifying the part you may be confused on.
Note also that a simpler way to run these commands, where you can worry less about the file paths (altho you still have to worry a little bit), is to use `docker-compose` for that.
In that case:
```shell
docker run -it --rm \
  -v $(pwd)/application:/opt \
  -w /opt \
  --network=phpapp_appnet \
  shippingdocker/php \
  php artisan make:auth
```
Would become:
```shell
docker-compose run --rm \
  -w /var/www/html \
  php \
  php artisan make:auth
```
Where in:

- `docker-compose run --rm` - Run a new container, remove it when done. The `-it` flags are assumed/automatic when run via `docker-compose`.
- `-w /var/www/html` - Set the working directory to `/var/www/html`. You do have to get this path right to match the directory shared into the container via the `docker-compose.yml` config.
- `php` - We use the service named `php` within the `docker-compose.yml` file, which spins up a container based off of the image defined for that service.
- `php artisan make:auth` - the command to run.

---

Windows 10, Version 17.12.0-ce-win47 (15139), Compose 1.18.0
This case didn't work for me with the `/opt` directory. Laravel tried to write its cache to the `/opt/application/storage` directory, but the `application` directory didn't exist in `/opt`. I replaced `/opt` with `/var/www/html` in the `docker run` commands and now it's working.
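If you hit the same thing, a quick way to see which container path actually holds the code (using the same image and service names as above) is to list each candidate directory from a throwaway container:

```shell
# Should show your application files if the mount target is right:
docker run --rm -v $(pwd)/application:/opt -w /opt shippingdocker/php ls

# And via docker-compose, whose volumes come from docker-compose.yml:
docker-compose run --rm php ls /var/www/html
```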
---

https://shippingdocker.com/docker-in-development/up-and-running/ references the path `/opt`, but the docker-compose file https://github.com/shipping-docker/php-app/blob/master/docker-compose.yml references `/var/www/html`, so when I change the example commands to that it then works?

Is one of these out of date, or am I just missing the obvious :) thanks