blacklabelops / volumerize

Docker Volume Backups Multiple Backends
https://hub.docker.com/r/blacklabelops/volumerize/
MIT License

Restore named volume in an internal network #98

Closed finevine closed 2 years ago

finevine commented 2 years ago

Hello, I have managed to back up a media folder and /var/lib/postgresql/data. With the docker command below, I manage to restore the media folder correctly on my local machine, but I'm having trouble repopulating my Postgres database (DATABASE_URL).

Restore command

Here is the command that I use; it restores the media files correctly:

docker run --rm \
    -v myprojectpreprodpgdb:/source/pgdb \
    -v /Users/vft/Documents/Code-local/myproject/media:/source/media \
    --network docker-compose_intern \
    -e "VOLUMERIZE_SOURCE=/source" \
    -e "VOLUMERIZE_TARGET=s3://s3.eu-west-3.amazonaws.com/myproject-sauv" \
    -e "AWS_ACCESS_KEY_ID=XXXX" \
    -e "AWS_SECRET_ACCESS_KEY=XXXX" \
    blacklabelops/volumerize restore

Then I connect to the container running PostgreSQL and run connect myproject -u vft, but:

myproject=# \dt
Did not find any relations.

My question: how do I correctly restore a Postgres named volume?

Backup docker-compose

Here is the end of my docker-compose file:

  backup-to-bucket:
    image: blacklabelops/volumerize:1.6
    env_file:
      - ../../.env
    container_name: backup-to-bucket
    restart: always
    depends_on:
      - myprojectpreproddb
      - myprojectpreprodweb
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  
      - /etc/timezone:/etc/timezone:ro          
      - volumerize-cache:/volumerize-cache
      - myprojectpreprodpgdb:/source/pgdb:ro
      - '../../media:/source/media:ro'
    environment:         
      - VOLUMERIZE_SOURCE=/source
      - VOLUMERIZE_CACHE=/volumerize-cache
      - VOLUMERIZE_TARGET=s3://s3.eu-west-3.amazonaws.com/${VOLUMERIZE_USER}
      - AWS_ACCESS_KEY_ID=${VOLUMERIZE_USER_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${VOLUMERIZE_USER_SECRET_KEY}
      - TZ="Europe/Paris"
      - VOLUMERIZE_JOBBER_TIME=0 */15 * * * *
      - VOLUMERIZE_FULL_IF_OLDER_THAN=7D
      - JOB_NAME2=RemoveOldBackups
      - JOB_COMMAND2=/etc/volumerize/remove-older-than 7D --force
      - JOB_TIME2=0 0 * * * *
    networks:
      - intern
    labels:
      - traefik.enable=false
volumes:
  myprojectpreprodpgdb:
  volumerize-cache:
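
(Side note: with this service running, a one-off backup can also be triggered by hand via the backup script shipped in the Volumerize image; the container name backup-to-bucket is the one set in the compose file above.)

# Run an immediate backup inside the Volumerize container
docker exec backup-to-bucket backup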
blacklabelops commented 2 years ago

From this repository's introduction text: This is not a tool that can clone and backup data from running databases. You should always stop all containers running on your data before doing backups. Always make sure you're not a victim of unexpected data corruption.

What your setup does not do:

What your setup will not do even following the manual:

You can stop the database and make a backup of it: look for Postgres-specific advice on which folders to back up.
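
For the "stop the database first" approach, Volumerize itself can stop and restart containers around each backup via the VOLUMERIZE_CONTAINERS variable; the docker.sock mount already present in the compose file above is what enables this. A minimal sketch, assuming the Postgres container ends up named myprojectpreproddb (check docker ps for the actual name, since that service does not set container_name):

  backup-to-bucket:
    # ... existing configuration from the compose file above ...
    environment:
      # Containers Volumerize stops before each backup run and starts again afterwards
      - VOLUMERIZE_CONTAINERS=myprojectpreproddb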

You can tell Postgres to dump the running database and then back up the dump: see the Postgres manual on how to achieve this.
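
A minimal sketch of that second approach, assuming the database container is reachable as myprojectpreproddb, that the database and user are the myproject / vft mentioned above, and that authentication (e.g. PGPASSWORD or a .pgpass file) is already set up:

# Dump the running database to a plain SQL file on the host, then let Volumerize back up that file
docker exec myprojectpreproddb pg_dump -U vft myproject > ./db_backup/myproject.sql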

finevine commented 2 years ago

Thank you so much @blacklabelops! I've added a cron service to my docker-compose file:

  myprojectpreprodcron:
    build: 
      context: ../../
      dockerfile: Docker/Dockerfiles/Dockerfile.cron
    container_name: myprojectPreProdCron
    networks:
      - intern
    volumes:
      - myprojectpreproddb:/var/lib/postgresql/data
      - '../../db_backup:/db_backup'
    labels:
      - traefik.enable=false
    depends_on:
      - myprojectpreproddb

The cron task makes a pg_dump of my running DB each day. This db_backup volume is then added to my Volumerize source!
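
For reference, the daily job inside such a cron image might look roughly like this. This is only a sketch: the host name myprojectpreproddb is the compose service name, the user and database names are taken from earlier in the thread, and authentication (e.g. a .pgpass file or PGPASSWORD) is assumed to be configured:

# Crontab entry (sketch): dump the database every day at 03:00 into the mounted /db_backup folder;
# % must be escaped as \% inside a crontab line
0 3 * * * pg_dump -h myprojectpreproddb -U vft myproject > /db_backup/myproject_$(date +\%F).sql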

Cheers