rija opened 3 months ago
The "How to test" section in this PR says deployment to live on the hot standby should be tested by navigating to https://alt-live.gigadb.org and checking the version in the footer. However, the value of the REMOTE_HOME_URL Gitlab variable in upstream/alt-gigadb-website doesn't match:
| Project | Variable | Value | Environment |
|---|---|---|---|
| upstream/gigadb-website | REMOTE_HOME_URL | https://gigadb.org | live |
| upstream/gigadb-website | REMOTE_HOME_URL | https://staging.gigadb.org | staging |
| upstream/alt-gigadb-website | REMOTE_HOME_URL | https://alt.gigadb.host | live |
| upstream/alt-gigadb-website | REMOTE_HOME_URL | https://alt-staging.gigadb.host | staging |
https://alt.gigadb.host is not reachable in a browser. I can see my v00-pli88-testing deployment on https://alt-staging.gigadb.host though.
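A quick reachability check across the four URLs from the table can be scripted. This is only a sketch: it relies on curl's behavior of printing HTTP code `000` when the host cannot be reached at all (DNS failure or connection refused).

```shell
#!/bin/sh
# Report the HTTP status for each REMOTE_HOME_URL value from the table above.
check_url() {
  # curl prints "000" when the connection/DNS fails entirely
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1")
  if [ "$status" = "000" ]; then
    echo "$1 -> unreachable"
  else
    echo "$1 -> HTTP $status"
  fi
}

for url in https://gigadb.org https://staging.gigadb.org \
           https://alt.gigadb.host https://alt-staging.gigadb.host; do
  check_url "$url"
done
```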
Hi @pli888,
question: It looks like basic tools (wget, emacs, etc.) are still installed by the bastion playbook on line 30?
That's deliberate.
The distinction between bastion_playbook.yml and data_cliapp_playbook.yml is mostly based on whether the thing we want to install/do needs a container image built by the Gitlab pipelines or not. If yes, it goes in the latter; if no, it goes in the former.
The basic tools don't need anything built by the pipeline; they are just "basic" Linux tools (like fail2ban, the Postgres client, docker).
Another perspective is whether the thing we want to install is something that would be useful on any EC2 instance that we need to ssh into.
It feels to me like the basic tools would be useful on all our EC2 instances (and maybe we want to add the basic tools Ansible role to files_playbook.yml too, as we occasionally ssh to the files server).
Finally, the other reason to separate the bastion playbook into two playbooks is bloat, but since the basic tools are all defined within a single Ansible role, they can never cause bloat.
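Under that split, the intended run order can be sketched as follows (playbook paths are from this PR; the inventory file name `hosts` is an assumption for illustration):

```shell
# Sketch only; inventory name and working directory are assumptions.
# 1. Pipeline-independent provisioning (basic tools, users, fail2ban, ...):
ansible-playbook -i hosts ops/infrastructure/bastion_playbook.yml

# 2. Run the Gitlab pipeline so the container images get built and pushed.

# 3. Image-dependent provisioning, which must come after the pipeline:
ansible-playbook -i hosts ops/infrastructure/data_cliapp_playbook.yml
```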
Hi @pli888,
info: Need to create a GitHub ticket for a command to perform the DNS records swap so that we don't have to do it manually.
Done. See #2041
Pull request for issues:
- #1957
- #1990
- #1904
- #1985
- #2009
This is a pull request for the following functionalities:
- `emacs` and `wget`
- `tmux`
- extraction of `*.tar.bz2` archives by installing `bzip2` and `lbzip2`
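A quick way to verify those tools after provisioning is to check that each one is on PATH. This is a hypothetical smoke test, not part of the PR itself; it is meant to be run on a provisioned bastion server.

```shell
#!/bin/sh
# Hypothetical smoke test: report which of the newly installed tools
# are actually available on PATH.
check_tools() {
  for tool in emacs wget tmux bzip2 lbzip2; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "MISSING: $tool"
    fi
  done
}
check_tools
```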
How to test?
Deployment to your AWS staging environment
After checking out this PR's branch and pushing it to your Gitlab pipeline, you can follow the "Setting up your Staging environment" section of the updated docs/SETUP_PROVISIONING.md document.

Deploying to hot standby from Upstream's alt-gigadb-website pipeline
The hot standby infrastructure is already built, but you can test re-provisioning and deploying to it:
Follow the instructions specific to the upstream/alt-gigadb-website project in docs/sop/PROVISIONING_PRODUCTION.md for the staging environment.

Then follow the instructions in docs/RELEASE_PROCESS.md to create a fake release (choose a tag label that's obviously fake, e.g. v00-rija-testing) that's going to be deployed live only to the upstream/alt-gigadb-website project, and not to our current production.

Once that release has been deployed to staging, you can resume the instructions in docs/sop/PROVISIONING_PRODUCTION.md from the "Provisioning live for upstream/alt-gigadb-website" section.

When guided to the blue/green deployment process, consider your fake release as a simple one (no infrastructure change and no database change): you can deploy to the hot standby (currently the upstream/alt-gigadb-website project) by following the instructions in the "Deployment to a specific live environment" section of docs/sop/PROVISIONING_PRODUCTION.md.
If everything goes well, you should be able to play with the new infrastructure:
When both pipelines are successful, navigate to the staging URLs and check the versions in the footer (they should match your fake release):
- https://staging.gigadb.org
- https://alt-staging.gigadb.org
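Checking the footer for the fake release tag can be scripted. This is a sketch, assuming the tag string appears verbatim in the page HTML (v00-rija-testing is the example tag from the steps above; substitute your own).

```shell
#!/bin/sh
# Grep each page for the expected release tag; assumes the tag
# appears verbatim in the footer HTML.
check_version() {
  url=$1; tag=$2
  if curl -s --max-time 10 "$url" | grep -q "$tag"; then
    echo "$url: footer shows $tag"
  else
    echo "$url: $tag NOT found"
  fi
}

check_version https://staging.gigadb.org v00-rija-testing
check_version https://alt-staging.gigadb.org v00-rija-testing
```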
Test that you can connect to the bastion server for both staging environments as the centos user, using the SSH key from the first two steps:
- centos@bastion-stg.gigadb.host
- centos@bastion.alt-staging.gigadb.host
Test the deployment to live on the hot standby:
- Navigate to https://alt-live.gigadb.org and check the version in the footer.
- Test that you can ssh to centos@bastion.alt-live.gigadb.host.
- If you update users_playbook.yml with the exact same username you already have on upstream/gigadb-website, you should then notice that you can ssh with that user using the same private key (because the public key is already in Gitlab variables).
- On bastion.alt-live.gigadb.host, you can access /share/dropbox, and its content should be the same as what's on upstream/gigadb-website's EFS.
- As the centos user, exec crontab -l and notice that the entries match those on upstream/gigadb-website.
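The EFS and cron spot-checks could look like this on the hot standby bastion (a sketch only; paths are as stated in this PR and the comparison against upstream is done by eye):

```shell
# Run on bastion.alt-live.gigadb.host as the centos user (sketch only).
ls -la /share/dropbox   # content should mirror upstream/gigadb-website's EFS
crontab -l              # entries should match upstream/gigadb-website's
```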
blue/green deployment switchover
The last part of docs/sop/DEPLOYING_TO_PRODUCTION.md describes the proposed plan for doing the blue/green deployment. Feel free to comment.

Changes to composer.json
composer.json is now a regular file that's versioned and manually editable, so that automated dependency security checks can be performed. After checking out this branch, you should be able to execute ./up.sh and everything should work as usual.

Addition of more basic tools
The new ops/infrastructure/data_cliapp_playbook.yml playbook will install emacs, wget, tmux, bzip2 and lbzip2. On your AWS deployment of this branch (from the first section of "How to test?"), you can check they work by executing the commands below in order on a bastion server:
- C-x C-c to exit emacs
- C-d to exit tmux
- C-d to log off SSH

How have functionalities been implemented?
Blue/green deployment:
See the "Upstream projects" section of
docs/sop/DEPLOYING_TO_PRODUCTION.md
Fixing the circular dependency issue:
Move all the bastion playbook tasks that depend on the building and pulling of docker containers by the Gitlab pipeline into a new playbook, ops/infrastructure/data_cliapp_playbook.yml, which, unlike the other host-configuration playbooks, is to be executed after running the Gitlab pipeline.

Any issues with implementation?
N/a
Any changes to automated tests?
N/a
Any changes to documentation?
- docs/sop/AWS_SETUP.md is replaced and augmented with docs/sop/DEPLOYING_TO_PRODUCTION.md
- docs/SETUP_PROVISIONING.md is updated to take into account the breaking down of the bastion playbook into two distinct playbooks
- docs/RELEASE_PROCESS.md was moved to docs/sop/RELEASE_PROCESS.md and updated to reflect current practice and blue/green deployment changes
- docs/sop/PRODUCTION_DEPLOY.md was renamed docs/sop/PROVISIONING_PRODUCTION.md and updated to integrate two parallel infrastructures and the blue/green deployment approach

Any technical debt repayment?
N/a
Any improvements to CI/CD pipeline?
The ops/infrastructure/bastion_playbook.yml was broken up for two reasons:
- to fix the circular dependency: tasks that need container images built by the Gitlab pipeline now live in ops/infrastructure/data_cliapp_playbook.yml
- so that the steps in docs/SETUP_PROVISIONING.md can be performed in order, without ambiguity and with clear boundaries