dwyl / learn-postgresql

🐘 Learn how to use PostgreSQL and Structured Query Language (SQL) to store and query your relational data. 🔍

Create a Postgres container and run a Phoenix-Ecto migration #104

Open ndrean opened 1 year ago

ndrean commented 1 year ago

You want to use a Docker image of a PostgreSQL server for your Phoenix app. How do you run your Ecto migrations in this container when its port is not published to the host?

You use docker-compose to run a container from an image on Docker Hub. You bind the folder "/var/lib/postgresql/data", which the Postgres container uses to store its data, to a volume created by Docker on your host under the name "pg-data" (run docker volume ls to check). The web app and the db server communicate over TCP on a network defined in the compose file and created by Docker ("mynet" here).

networks:
  mynet:

volumes:
  pg-data:

services:
  phoenix:
    [...]
  db:
    image: postgres:15.3-bullseye
    networks:
      - mynet
    env_file:
      - .env-docker
    restart: always
    volumes:
      - pg-data:/var/lib/postgresql/data
    ports:
      - "5432"

The Postgres container can be "open" or not, i.e. reachable from localhost or not. You open it by publishing the port with a host:container mapping, like this:

services:
  db:
    ports:
      - "5432:5432"

When it is open, you can run commands against this container from the host, such as mix ecto.create && mix ecto.migrate to create the tables in your database. You can also run psql commands "from outside" the container (if you have the Postgres client installed on your host), with psql -h localhost -U postgres -d my_app_prod.
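
With the port published, a host-side session might look like this (a sketch; it assumes the Phoenix project and the Postgres client tools are on the host, and that the app reads DATABASE_URL at runtime):

```sh
# hypothetical values, matching the .env-docker file shown further down
export DATABASE_URL=ecto://postgres:postgres@localhost:5432/my_app_prod
mix ecto.create && mix ecto.migrate
# inspect the result from the host with the Postgres client
psql -h localhost -U postgres -d my_app_prod -c '\dt'
```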

When you do docker-compose run db bash, you execute "bash" in the container, which gives you a shell inside it. You are in the container and you can run psql -h db -U postgres -d my_app_prod.

When the container starts for the first time, it creates the database and the superuser for you, from the environment variables file below.

Note that DATABASE_URL is for Ecto in the Phoenix app, whilst the POSTGRES_* variables are for the Postgres container.

# .env-docker
POSTGRES_PASSWORD=postgres
POSTGRES_USER=postgres
POSTGRES_DB=my_app_prod
DATABASE_URL=ecto://postgres:postgres@db:5432/my_app_prod
PGDATA=/var/lib/postgresql/data

Note that we used db, the service name for Postgres, as the host, and not localhost. This is also the host we used for the "internal" psql session above.
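
In other words, the same database is reached through two different hostnames depending on where the client runs (a sketch using the credentials from the env file above):

```sh
# from a container on the "mynet" network (e.g. the phoenix service):
DATABASE_URL=ecto://postgres:postgres@db:5432/my_app_prod
# from the host, when the port is published as "5432:5432":
DATABASE_URL=ecto://postgres:postgres@localhost:5432/my_app_prod
```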

One way to run the migrations on a closed container is to inject an SQL script that creates these tables into the "/docker-entrypoint-initdb.d/" folder: Postgres runs it when the container starts with an empty data directory, i.e. the first time only, and this is all we want. Translating the migration files written in the Phoenix app into SQL by hand can be painful. To overcome this, we can output the SQL commands generated by the Ecto/Phoenix migrations into a file, just like this:

mix ecto.migrate --log-migrations-sql > ./init.sql
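
The raw output interleaves Logger lines with the SQL. After cleaning, an init.sql might look like this (hypothetical tables; your migrations will differ):

```sql
CREATE TABLE users (
    id bigserial PRIMARY KEY,
    email varchar(255) NOT NULL,
    inserted_at timestamp(0) NOT NULL,
    updated_at timestamp(0) NOT NULL
);
CREATE UNIQUE INDEX users_email_index ON users (email);
```

One caveat: Ecto records applied migrations in its schema_migrations table, so if you later open the container and run mix ecto.migrate against it, Ecto will not know these migrations were already applied unless that table is populated too.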

Then you clean this file and bind-mount it (from your computer) into the Postgres container:
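
The cleaning step can be scripted. A minimal sketch, assuming the Logger prefixes look like "12:00:00.000 [info] " (a guess; adjust the patterns to your actual log format):

```shell
# Strip the assumed Logger prefix ("HH:MM:SS.mmm [level] ") and keep only
# lines that start with an SQL keyword; banners and timings are dropped.
clean_migration_log() {
  sed -E 's/^[0-9:.]+ \[[a-z]+\] //' "$1" | grep -E '^(CREATE|ALTER|DROP|INSERT|COMMENT)'
}
```

Usage: clean_migration_log raw.log > init.sql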

services:
  db:
    volumes:
      - pg-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro

Now the Postgres server container will run the migration the first time it starts. To rerun the migration, it is enough to erase the "pg-data" volume (docker volume rm my_app_pg-data).
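
To start from scratch (for example after editing init.sql), a typical sequence looks like this (a sketch; the volume name is prefixed by your compose project name, my_app here):

```sh
docker-compose down                 # stop and remove the containers (named volumes survive)
docker volume rm my_app_pg-data     # erase the data, so the init script runs again
docker-compose up -d db             # recreate the db; init.sql is executed on first start
```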