You want to use a Docker image of a PostgreSQL server for your Phoenix app. How do you run your migrations in this container when it is not open (i.e. its port is not published to the host)?
You use docker-compose to run a container from an image on Docker Hub. You bind the folder "/var/lib/postgresql/data", used by the Postgres container to store its data, to a volume created by Docker on your host under the name "pg-data" (run docker volume ls to check). The web app and the db server communicate over TCP on a network defined by Docker ("mynet" here).
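A minimal docker-compose.yml along those lines could look like the sketch below. The service, volume, and network names ("db", "pg-data", "mynet") are the ones used in this post; the image tag and the "web" service layout are assumptions:

```yaml
version: "3"

services:
  db:
    image: postgres:latest    # image from Docker Hub; pin a version in practice
    env_file: .env            # provides POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB
    volumes:
      # named volume "pg-data" bound to Postgres' data directory
      - pg-data:/var/lib/postgresql/data
    networks:
      - mynet

  web:
    build: .
    env_file: .env            # provides DATABASE_URL for Ecto
    depends_on:
      - db
    networks:
      - mynet

volumes:
  pg-data:

networks:
  mynet:
```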
The Postgres container may or may not be open, i.e. reachable from localhost. You open it by publishing port 5432, like this:
services:
  db:
    ports:
      - "5432:5432"
When it is open, you can run commands against this container, such as mix ecto.create && mix ecto.migrate to create the tables in your local Postgres database. You can also run psql commands "from outside" the container (if you have Postgres installed on your host), with psql -h localhost -U postgres --dbname=my_app_prod.
When you do docker-compose run db bash, you execute "bash" in the container, which gives you a shell inside it. You are in the container and can run psql -h db -U postgres --dbname=my_app_prod.
When the container starts for the first time, it creates the database and the superuser for you, from the environment variables file below.
Note that you add DATABASE_URL for Ecto in the Phoenix app, whilst the POSTGRES_* variables are for the Postgres container.
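As a sketch of that split (the credentials below are placeholders, not real values), the POSTGRES_* variables configure the container, and DATABASE_URL for Ecto can be derived from them. Note that the host part of the URL is the service name "db", not localhost:

```shell
# Placeholder credentials -- adjust to your own .env file.
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret
POSTGRES_DB=my_app_prod

# Ecto's URL points at the "db" service on the Docker network, not localhost.
DATABASE_URL="ecto://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${POSTGRES_DB}"
echo "$DATABASE_URL"
```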
Note that we used db, the service name for Postgres, and not localhost; this is the host the "internal" psql session above used.
One way to run the migrations on a closed container is to inject an SQL script that creates these tables into the "/docker-entrypoint-initdb.d/" folder; the Postgres container runs it when it starts. It only runs the first time, and this is all we want. It can be painful to translate the migration files written in the Phoenix app into raw SQL. To overcome this, we can output the SQL generated by the Ecto/Phoenix migrations into a file, just like this:
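One way to produce that file, assuming your host has the Postgres client tools installed (mix ecto.dump shells out to pg_dump) and the container is open, is a sketch like:

```shell
# Run the migrations against your local/dev database first,
# then dump the resulting schema as plain SQL.
mix ecto.create && mix ecto.migrate
mix ecto.dump --dump-path priv/repo/structure.sql
```

The dump path here is an assumption; mix ecto.dump writes to priv/repo/structure.sql by default.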
Then you clean this file and bind it (from your computer) into the Postgres container. Now the Postgres server container will run the migration the first time it starts. To rerun the migration, it is enough to erase the "pg-data" volume (docker volume rm my_app_pg-data).
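Binding the SQL file into that folder is one extra line in the db service of docker-compose.yml (the file name and host path are assumptions):

```yaml
services:
  db:
    volumes:
      - pg-data:/var/lib/postgresql/data
      # scripts in this folder run only on first start, when pg-data is empty
      - ./priv/repo/structure.sql:/docker-entrypoint-initdb.d/structure.sql
```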