webyneter opened this issue 7 years ago
@japrogramer have you production-tested this?
# on a manager node
$ docker secret create my_secret -
$ tee -
# mind you, my secrets are mounted into my containers from within their respective compose files
$ systemctl stop docker.service
$ tar czpvf swarm-dev.tar.gz /var/lib/docker/swarm/
Then I rsync that file over an encrypted pipe back home, and when I need to restore my secrets I do:
$ systemctl stop docker.service
$ rm -fR /var/lib/docker/swarm
$ tar xzpvf swarm-dev.tar.gz -C /
$ systemctl start docker.service
$ docker swarm init --force-new-cluster
$ tee -
# Now all my secrets are back and available to my swarm, but only to those containers that have been granted access on a per-secret basis.
@webyneter I have tested backing up the swarm state in production; a few things to note, though.
To capture the state of the swarm, the Docker daemon must be stopped on a manager node while the services are running. The reason we stop the daemon is so that no changes happen in the swarm directory while we are making the backup.
The tar czpvf command captures the entire state of the swarm, meaning we can create versioned backups of it.
To restore a previous state, the Docker daemon must not be running when we replace the swarm directory with our backup.
--force-new-cluster is nice because, if other manager nodes are running when we tell our new manager to recreate the swarm state from our backup, it forces the services to redeploy from the saved state automatically, so clients experience very little downtime. This is useful in other situations too: say a new deploy has some bugs; to go back to the previous deployment, just restore the swarm state. This will work as long as all the appropriate resources (images, etc.) are still available.
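To illustrate the versioned-backup idea, here is a minimal Python sketch (not the author's script; it just wraps the commands above with a timestamped archive name, and assumes it is run as root on a manager node):

import subprocess
from datetime import datetime

SWARM_DIR = "/var/lib/docker/swarm/"
archive = f"swarm-dev-{datetime.now():%Y%m%d-%H%M%S}.tar.gz"

# Stop the daemon so nothing changes in the swarm directory mid-backup.
subprocess.run(["systemctl", "stop", "docker.service"], check=True)
try:
    subprocess.run(["tar", "czpvf", archive, SWARM_DIR], check=True)
finally:
    subprocess.run(["systemctl", "start", "docker.service"], check=True)
print(f"wrote {archive}; rsync it somewhere safe")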
I have more to say on this topic. To address the Docker YAML files, using version: '3.2', I have restructured the layout into a base.yml, dev.yml and production.yml. Similar to Python's requirements.txt file layout, all the common settings are stored in base.yml and only new or different settings are placed in dev.yml or production.yml. I have accomplished this with the commands:
# for dev
docker-compose -f base.yml -f dev.yml config > stack.yml
# for production
docker-compose -f base.yml -f production.yml config > stack.yml
and now to launch the app, the command would be:
docker stack deploy --compose-file=stack.yml website
To give you an idea of how this YAML layout looks, here is my base.yml:
version: '3.2'

services:
  postgres:
    build: ./compose/postgres
    environment:
      - POSTGRES_USER_FILE=/run/secrets/pg_username
      - POSTGRES_PASSWORD_FILE=/run/secrets/pg_password
    secrets:
      - pg_username
      - pg_password

  django:
    command: /gunicorn.sh
    environment:
      - USE_DOCKER=${DAPI_VAR:-yes}
      - DATABASE_URL=postgres://{username}:{password}@postgres:5432/{username}
      - SECRETS_FILE=/run/secrets/django_s
      - POSTGRES_USER_FILE=/run/secrets/pg_username
      - POSTGRES_PASSWORD_FILE=/run/secrets/pg_password
    # My deploy settings
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    secrets:
      - pg_username
      - pg_password
      - django_s

secrets:
  django_s:
    external: true
  pg_username:
    external: true
  pg_password:
    external: true
And this is what dev.yml looks like; note that it only contains the settings that are new or different relative to base.yml:
version: '3.2'

volumes:
  postgres_data_dev: {}
  postgres_backup_dev: {}

services:
  postgres:
    image: apple_postgres
    volumes:
      - postgres_data_dev:/var/lib/postgresql/data
      - postgres_backup_dev:/backups

  django:
    image: apple_django
    build:
      context: .
      dockerfile: ./compose/django/Dockerfile-dev
    command: /start-dev.sh
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    secrets:
      - pg_username
      - pg_password
      - source: django_s
        #target: /app/.env

  node:
    image: apple_node
    #user: ${USER:-0}
    build:
      context: .
      dockerfile: ./compose/node/Dockerfile-dev
    volumes:
      - .:/app
      - ${PWD}/gulpfile.js:/app/gulpfile.js
      # http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html
      - /app/node_modules
      - /app/vendor
    command: "gulp"
    ports:
      # BrowserSync port.
      - "3000:3000"
      # BrowserSync UI port.
      - "3001:3001"
I would also like to point out that, because depends_on is ignored in swarm deploys, I don't use it.
Instead, my containers listen in their entrypoint scripts for the containers they depend on to become available. I do this with a simple ping service_name.
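For illustration, here is a minimal sketch of the same wait-for-dependency idea in Python (not the author's entrypoint; it polls the dependency's TCP port instead of using ping, and the "postgres"/5432 host and port are assumptions):

# wait_for_service.py -- block until a dependency accepts TCP connections.
import socket
import sys
import time

host, port = "postgres", 5432

while True:
    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} is up, continuing with the entrypoint")
            sys.exit(0)
    except OSError:
        print(f"waiting for {host}:{port} ...")
        time.sleep(1)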
Because I do all of my development with my images launched to a one-node swarm, when I want to run tests or test coverage I go to my stack.yml file and change the line in the django service that reads command: /start-dev.sh to read either
command: pytest
# or
command: bash -c "coverage run manage.py test . && coverage report -m"
and every time I want to run a test I only have to run this command (I have an alias for it):
docker stack deploy --compose-file=stack.yml website
and in a split terminal window I have this command running:
docker service logs -f website_django
Also, whenever I want to rebuild a specific image for a service, I do something along these lines:
docker-compose -f stack.yml build --no-cache django
Here is how I read in my secrets, which I structure as JSON, in my config/settings/base.py:

import json

# env is the django-environ instance already defined in base.py
with open(env('SECRETS_FILE')) as sec:
    Secrets = json.loads(sec.read())

Then, for example:

ACCOUNT_ALLOW_REGISTRATION = Secrets.get('DJANGO_ACCOUNT_ALLOW_REGISTRATION', True)
Big news, and proof of the vulnerability: recently, multiple packages were caught stealing environment variables. https://iamakulov.com/notes/npm-malicious-packages/
I don't think this should be the default for Cookiecutter Django because it adds complexity and another thing you have to care about.
This is an advanced topic we should mention in the docs for people to take a look at and maybe add a couple of examples.
@jayfk I agree. Let's leave the issue open for further elaboration. I want to explore this one after I'm done with my ongoing commitments to #1052 and #1205.
More proof of the vulnerability of stolen environment variables, and more:
https://www.reddit.com/r/linux/comments/709a4t/pypi_compromised_by_fake_software_packages/
direct link
http://www.nbu.gov.sk/skcsirt-sa-20170909-pypi/
I have a question: is it really popular to have Django as something other than an API backend? Why not split the frontend into a different Docker service and leave Django in the back?
I have a question: is it really popular to have Django as something other than an API backend?
I have an answer: yes.
The only problem I see with using secrets instead of an env file is ... can you even use Docker secrets without swarm? https://serverfault.com/questions/871090/how-to-use-docker-secrets-without-a-swarm-cluster says you can't. Also, what if someone wanted to use Kubernetes instead of swarm? I'm also running into the issue of how to correctly pass environment variables to Travis. Should there be .local, .production, .unittest? Should .local be part of the GH repo?
@global2alex I made .envs/.local/* committable to VCS for local environment reproducibility: local envs, to my mind, should be no secret to your fellow teammates.
@webyneter Yeah I was thinking that too, I can't find a reason why it wouldn't be ok to share the local envs in a git repo, as long as the production ones are different. Thank you!
- DATABASE_URL=postgres://{username}:{password}@postgres:5432/{username}
@japrogramer
Hi, thanks to you I found this open issue.
I see your docker-compose.yml uses Docker secrets for the postgres and django services but leaves the DATABASE_URL environment variable unencrypted.
Big news, and proof of the vulnerability: recently, multiple packages were caught stealing environment variables. https://iamakulov.com/notes/npm-malicious-packages/
Did you find a way to use the DATABASE_URL environment variable with Docker secrets?
What I am thinking right now is:
DATABASE_URL=postgres://"/run/secrets/dbusername":"/run/secrets/dbpass"@postgres:5432/"/run/secrets/dbname"
Thank you.
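One way to keep the URL out of the environment entirely (a sketch, not something from this thread; the secret file names follow the compose files above) is to read the secret files in Django settings and build the database configuration there:

# config/settings/production.py (sketch) -- build the database settings from
# Docker secret files instead of a DATABASE_URL environment variable.
def read_secret(name, path="/run/secrets/"):
    with open(path + name) as f:
        return f.read().strip()

pg_user = read_secret("pg_username")
pg_password = read_secret("pg_password")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": pg_user,  # the compose files above use the username as the db name
        "USER": pg_user,
        "PASSWORD": pg_password,
        "HOST": "postgres",
        "PORT": 5432,
    }
}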
Compose now supports secrets too and I think it would be a whole lot better to use secrets for things like AWS Access Keys and Database passwords etc.
This is based on the proposal by @japrogramer:
To my mind, there are a few things worth mentioning here:
- the local.yml and production.yml Docker Compose configs would need to be upgraded to version: '3.x', dropping extends (for example, as implied by the Version 3 < 3.3 and Version 3.3 specs);
- cookiecutter-django-related remote environments (Travis CI);
- notifying the cookiecutter-django community of this change, documenting the minimal local host environment Docker stack version matrix;
- what to do with *.env* files should Docker be the client's environment of preference.
@jayfk @luzfcb @japrogramer what are your thoughts?