Closed by infojunkie 6 years ago
And consider removing the Potato reference :joy:
I am checking if we can create a dev version similar to the `3-0-XX` versions we are creating, and connect it to the GitHub repo. Or we can create a separate project for the dev environment, as per https://cloud.google.com/appengine/docs/standard/python/creating-separate-dev-environments
@infojunkie I need permission to check the following in Build Triggers in GCP
What's missing here?
I created a new database for the QA environment: it's called `v2qa` (since the live database is `v2`). I had to create a new SSL certificate to be able to access the database, and place the `*.pem` files at `keys/qa`. I also had to temporarily authorize my IP in order to test the migration.
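For reference, access with those certs can be checked with a plain `mysql` client call like this (a sketch; the host placeholder and the exact `*.pem` file names are assumptions, following Cloud SQL's usual naming):

```shell
# Hypothetical check that the SSL certs in keys/qa work against the QA DB.
# Replace <QA_DB_HOST> with the instance IP; the .pem file names are assumed.
mysql \
  --host=<QA_DB_HOST> \
  --user=root --password \
  --ssl-ca=keys/qa/server-ca.pem \
  --ssl-cert=keys/qa/client-cert.pem \
  --ssl-key=keys/qa/client-key.pem \
  --execute="SELECT 1" v2qa
```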
I'm already able to deploy from inside the Docker container. When `docker-compose build` runs, it installs `gcloud` and all its dependencies and makes it available on `PATH`.
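The install step during `docker-compose build` can be sketched like this (the install script URL is Google's official one; the install directory is an assumption, not necessarily what our Dockerfile uses):

```shell
# Sketch: install the Google Cloud SDK non-interactively and expose it on PATH.
curl -sSL https://sdk.cloud.google.com > /tmp/install-gcloud.sh
bash /tmp/install-gcloud.sh --disable-prompts --install-dir=/usr/local
export PATH=/usr/local/google-cloud-sdk/bin:$PATH
gcloud --version   # sanity check
```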
I renamed all `prod` and `staging` references to `live` and `qa` respectively. The settings files at `appengine/src/greenday_core/settings/{qa,live}.yml` are ignored by Git. Those files are used by the deployment task.
As per #10 we are now using `gcloud` to perform headless authentication. It reads a file at `appengine/auth.json`. More details on that at #10.
The deployment can be done in three steps:

- `npm run authenticate`: reads from `appengine/auth.json` and authenticates the user (I created a service account to be used by the Jenkins deployments, but locally we can use our own personal accounts).
- `npm run deploy-qa`: this is where I replaced the old `appcfg` calls with `gcloud` calls... it compiles and packs the code, uploads it to Google Cloud as a fixed version `qa`, and makes sure that it's not promoted (otherwise it would become the live version).
- `npm run migrate-qa`: runs any pending migrations. It requires that the `*.pem` files to access the database are placed at `keys/qa` and that the origin IP or network is authorized to access the database. The host, user, and password for the database are read from the `qa.py` settings file.

All the `appengine/*.yaml` files were filtered: I removed the `application` and `version` keys, which are now passed as command line arguments by the `grunt` tasks. For QA, the version is a hard-coded string `qa`, and for live it's a dynamic incrementing number read from a `VERSION` file.
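Under the hood, those npm tasks presumably reduce to `gcloud` invocations along these lines (a sketch; the yaml path and use of `--quiet` are assumptions, and the project ID is inferred from the appspot hostname):

```shell
# npm run authenticate: headless login with the service-account key.
gcloud auth activate-service-account --key-file=appengine/auth.json

# npm run deploy-qa: upload as a pinned version named "qa"; --no-promote
# keeps it from becoming the default (live) serving version.
gcloud app deploy appengine/app.yaml \
  --project=greenday-project \
  --version=qa \
  --no-promote --quiet
```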
On Cloudflare, I added a CNAME record `montage-qa.meedan.com` that points to the QA version, and a Page Rule that redirects `montage-qa.meedan.com` to `qa-dot-greenday-project.appspot.com`.
All references to the :potato: environment were removed.
The documentation on README was updated accordingly.
Now I'm going to the next step: add the configuration files to `dockerfiles`, add `dama`'s IP to the authorized networks of the `v2qa` database, integrate with Jenkins, and deploy to QA automatically when we push to `develop`.
Done!
I created a new Jenkins job called `montage-develop`... it downloads the Montage repository, uses our `Dockerfile`, creates a Docker image, pushes it to our registry, builds a container, runs `npm run full-deploy-qa` inside the container, and then exits.
The `npm run full-deploy-qa` task is the one that authenticates on Google Cloud, deploys new code, and runs migrations if needed. More details on #10.
The configuration files (MySQL PEM keys, JSON keys to access Google Cloud, and the Django settings file) are mounted as volumes in the Docker container and come from `dockerfiles`.
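Mounted as read-only volumes, that could look roughly like this (the image name and the paths under `dockerfiles/` are assumptions for illustration, not the actual layout):

```shell
# Sketch: mount the secrets from dockerfiles/ into the container
# at the paths where the deploy tasks expect to find them.
docker run --rm \
  -v "$PWD/dockerfiles/keys/qa:/app/keys/qa:ro" \
  -v "$PWD/dockerfiles/auth.json:/app/appengine/auth.json:ro" \
  -v "$PWD/dockerfiles/qa.py:/app/appengine/src/greenday_core/settings/qa.py:ro" \
  montage npm run full-deploy-qa
```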
I added `dama`'s IP as a network allowed to access the QA MySQL database from outside.
The Jenkins job is built automatically when code is pushed to the `develop` branch or when we call `@hu deploy-montage-qa`. It also notifies on Slack.
Documentation was updated on README.
Just to document, there was a tricky problem happening: `ERROR: gcloud crashed (CertificateError): hostname 'metadata.google.internal' doesn't match 'vpn.fatmac.co.uk'`. This was happening because `gcloud` tries to resolve `metadata.google.internal` in order to determine whether it's running inside Google Cloud or not... in our case, it was resolving to something weird from inside a container on `dama`, whose DNS resolves any `*.internal` domain. The solution was to set a different DNS resolver in the Docker container. I added `--dns=8.8.8.8 --dns-search=.` and this way the container doesn't use the DNS settings of the host machine.
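For the record, the fix on a plain `docker run` looks like this (the image name is a placeholder):

```shell
# Use Google's public DNS and an empty search domain so the container
# never resolves *.internal names through the host's resolver.
docker run --dns=8.8.8.8 --dns-search=. montage gcloud info
```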
The bit about the `gcloud` certificate error deserves a small post.
Yes! Done: https://code.meedan.com/2018/08/21/docker-dns.html
Works fine.
Did the same for live deployment. We have a Jenkins job `montage-master` that is not built automatically when we push to `master`. We can build it directly on Jenkins or from Slack by calling `@hu deploy-montage-live`.
Now that we can deploy a new production version, let's make sure we can do the same for the dev stage. Ideally, we would want to have an automated dev deployment any time a commit is made to `develop`.