Re-opening because we still need to transition the actual production Heroku app.
At @pdpinch's suggestion @annagav tested Micromasters RC last week and says it looks good from her perspective. I'll coordinate with @shaidar to see what needs to happen to get our ducks in a row so we can migrate production.
Necessary steps to finish migration in production:
To help determine when to schedule the cutover, @feoh will do a trial run of the Postgres dump/load in the production environment so that we can collect timing information and identify any variance from the RC process.
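As a rough sketch of how that trial run might be timed (hedged: the `time` wrapper, the dump file name, and the scratch-database URL `$RDS_TRIAL_DATABASE_URL` are illustrative assumptions, not part of the recorded plan):

```shell
# Time the dump from the Heroku Postgres instance.
time pg_dump -x -O "$HEROKU_DATABASE_URL" > micromasters_trial_db_dump.sql

# Time the load into a throwaway database on the RDS side so production data
# is untouched; $RDS_TRIAL_DATABASE_URL is a hypothetical URL for that target.
time psql "$RDS_TRIAL_DATABASE_URL" < ./micromasters_trial_db_dump.sql
```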
Exact steps for migration with timings and rollback sequence:
@feoh will put the micromasters production app into maintenance mode and trigger the database transfer:

```shell
# Stop traffic to the app so no writes land mid-transfer.
heroku maintenance:on -a micromasters

cd ~/src/mit/heroku

# Dump without ACLs (-x) or object ownership (-O) so roles can differ on RDS.
pg_dump -x -O "$HEROKU_DATABASE_URL" > micromasters_heroku_prod_db_dump.sql

# Load the dump into the new RDS instance.
psql "$RDS_DATABASE_URL" < ./micromasters_heroku_prod_db_dump.sql
```
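A quick sanity check after the load could compare the number of user tables on each side (a sketch; this check is an assumption and was not part of the recorded runbook):

```shell
# Both queries should report the same number of user tables.
psql "$HEROKU_DATABASE_URL" -c "SELECT count(*) FROM pg_stat_user_tables;"
psql "$RDS_DATABASE_URL" -c "SELECT count(*) FROM pg_stat_user_tables;"
```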
@shaidar will trigger the Salt proxy update:
```shell
salt proxy-micromasters-production state.sls heroku.update_heroku_config
```
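Presumably this state rewrites the app's Heroku config vars to point at the new RDS instance; the direct CLI equivalent would look something like the following (hypothetical, shown only for illustration):

```shell
# Hypothetical equivalent of what the Salt state applies:
heroku config:set DATABASE_URL="$RDS_DATABASE_URL" -a micromasters
```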
@feoh will trigger a catch-up S3 sync and, upon acknowledgement of completion from @shaidar, take the app out of maintenance mode:

```shell
# Copy any objects written since the initial sync over to the new bucket.
aws s3 sync s3://odl-micromasters-production s3://ol-micromasters-app-production

# Restore traffic to the app.
heroku maintenance:off -a micromasters
```
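One way to confirm the catch-up sync finished cleanly is to compare object counts across the buckets (a rough, assumed check, not from the original plan):

```shell
# The two counts should match once the sync has caught up.
aws s3 ls --recursive s3://odl-micromasters-production | wc -l
aws s3 ls --recursive s3://ol-micromasters-app-production | wc -l
```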
@pdpinch, @feoh, and @annagav will spot-check and smoke-test micromasters production and declare success or the need for a rollback.
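A minimal smoke test might just confirm the app responds over HTTPS (the hostname here is an assumption about the production URL):

```shell
# Expect a 200 once the app is out of maintenance mode.
curl -sSf -o /dev/null -w "%{http_code}\n" https://micromasters.mit.edu/
```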
If a rollback is needed, @shaidar will revert the Salt proxy configuration to its prior values using the saved data.
# include specific invocations to enact rollback here.
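The specific rollback invocations were never filled in above; purely as a hypothetical sketch, they would likely mirror the forward steps in reverse (every command here is an assumption):

```shell
# Hypothetical rollback: pause traffic, repoint the app at the original
# Heroku Postgres instance, then resume traffic.
heroku maintenance:on -a micromasters
heroku config:set DATABASE_URL="$HEROKU_DATABASE_URL" -a micromasters
heroku maintenance:off -a micromasters
```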
I would say that you can actually take it out of maintenance mode prior to finalizing the S3 sync.
The database transfer and migration were successful. However, we are having problems transitioning the CDN to the new CloudFront distribution we created, so we're still using the old S3 bucket.
The team will reconvene tomorrow and troubleshoot this remaining issue, but as of now production is up and all tests look good.
All set as of yesterday evening! \o/
Post-work resource clean-up list:
## CI
### Postgres:
heroku-postgresql (postgresql-rigid-71273) mini $5/month created
### S3:
2016-04-11 10:42:30 odl-micromasters-ci
## RC/QA
### Postgres:
heroku-postgresql (postgresql-amorphous-36035) basic $9/month created
### S3:
2016-04-11 10:42:42 odl-micromasters-rc
## Production
### Postgres:
micromasters-db.cbnm7ajau6mi.us-east-1.rds.amazonaws.com
- Read replica: micromasters-db-read-replica.cbnm7ajau6mi.us-east-1.rds.amazonaws.com
### S3:
2016-04-11 10:42:51 odl-micromasters-production
## User Story
As a platform engineer, I want the Micromasters application and its infrastructure resources to be managed in the same way as all of our other systems.
## Description/Context
The Micromasters application was set up before our current usage of Pulumi and before we had established some of the patterns that led to the creation of the `apps` VPC. As a result, the RDS instance used in production was created in its own VPC, and the Vault policies are managed via Salt code that is being actively deprecated. There are also associated S3 buckets that were created either manually or via Salt and are not actively managed by Pulumi.

## Acceptance Criteria