eistrati opened this issue 8 years ago (status: Open)
In order to use the serverless approach, we must set up the following weighted traffic in Route53:
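For reference, here is a minimal sketch of what that weighted Route53 setup could look like as a change batch (the shape follows Route53's `ChangeResourceRecordSets` API; the function name and the domain/target values are illustrative, not part of deepify):

```javascript
// Sketch of a Route53 change batch for a weighted blue-green split.
// Domain and CloudFront target names below are hypothetical examples.
function weightedChangeBatch(domain, blueTarget, greenTarget, blueWeight, greenWeight) {
  const record = (setId, target, weight) => ({
    Action: 'UPSERT',
    ResourceRecordSet: {
      Name: domain,
      Type: 'CNAME',
      SetIdentifier: setId, // distinguishes records that share the same name
      Weight: weight,       // relative share of DNS answers
      TTL: 60,              // keep low so ratio changes propagate quickly
      ResourceRecords: [{ Value: target }],
    },
  });
  return {
    Changes: [
      record('blue', blueTarget, blueWeight),
      record('green', greenTarget, greenWeight),
    ],
  };
}
```

A 90/10 split would then be `weightedChangeBatch('www.deep.mg', blueCf, greenCf, 90, 10)`, passed to `changeResourceRecordSets` on the hosted zone.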
Limitations:
We have identified some use cases that we didn't cover before:
1) If the blue-green ratio is 0:1, `deepify publish` must swap environments and turn off Lambda@Edge

Notes:
- `deepify publish` uses the same approach as above and turns off Lambda@Edge

Here below is my understanding of the improved functionality:
Change the CNAME of the blue CF into a wildcarded one (www.deep.mg => *.deep.mg). If the CNAME already exists, stop with the message "In order to avoid DNS lags and unexpected behavior, deepify publish requires a wildcarded SSL certificate (e.g. *.mydomain.com) during the blue-green deployment process. Please release it from the other CloudFront distribution and try again."
Create a new CF that points to the blue S3 bucket, with the CNAME of the blue CF (e.g. www.deep.mg). It will run Lambda@Edge and return HTTP 302 to the blue CF or the green CF based on the traffic ratio (e.g. 1:9, 2:9, 1:3, 1:0, etc). Wait for status=deployed.
In the worst-case scenario, if Lambda@Edge doesn't return HTTP 302, catch the response and allow the request to pass through by returning the data from the origin. From this point of view, the new CF is a "clone" of the blue CF.
Check that the green CF has the right CNAME (www2.deep.mg). If not, prompt the user to change the CNAME (Y/n)?
Create (or update, if they exist) Route53 A aliases: www to the new CF, www1 to the blue CF and www2 to the green CF. If the ratio is 1:0 or 0:1, keep the www1 and www2 records (and, obviously, www) by pointing them to the CF that remains active.
If deepify doesn't have access to Route53 or another DNS provider, ask users to make the DNS changes manually and re-run the command. Obviously, think through the functionality above to make sure it supports external DNS changes made manually, but don't wait on those changes. Return an informative message describing how the CNAMEs should look and finish the script's execution.
NOTE: Let's create a separate parameter (e.g. --cleanup) that, if enabled, will clean up the CF and Route53 resources. We should also mention it somewhere and/or prompt a Y/N confirmation, because using this parameter by default might create DNS lags or other unexpected behavior. It is HIGHLY recommended to use --cleanup only a couple of hours/days after --ratio 0:1 was executed.
Testing actions is blocked by the following issue: https://github.com/MitocGroup/deepify/issues/365
I would like to bring back an older conversation about blue-green deployments and implement it as `deepify` command(s).

Blue-green deployment here means either switching traffic from `stage` to `prod` (use case A) or gradually increasing traffic between 2 different `prod` environments (use case B). For example: "blue env" vs "green env" => 90% vs 10%, then 80% vs 20%, ..., and finally 0% vs 100%. In our case, "blue env" is `stage` and "green env" is `prod`, while traffic is 0% vs 100%.

`deepify publish --blue X --green Y --data-replicate true|false`
- manage traffic between blue-green deployments. Complete list of parameters (required marked with star):
  - `blue` (e.g. abcd1234, stage:abcd1234, etc)
  - `green` (e.g. wxyz0987, prod:wxyz0987, etc)
  - `ratio` (e.g. 9:1, 4:1, etc)
  - `data-replicate` (will automatically enable data replication)
  - `domain-name`* (e.g. www.deep.mg, todo.deep.mg, etc)
  - `blue-percent` (e.g. 0, ..., 50, ..., 100; default: 0)
  - `green-percent` (e.g. 0, ..., 50, ..., 100; default: 100)

  Notes:
  - the hosted zone of `domain-name` MUST BE IN ROUTE53 (if it's not, the operation should fail)
  - the values `abcd1234` and `wxyz0987` MUST BE DIFFERENT (if they are the same, the operation should fail, unless the [env]:[hash] format is used and the environments are different)
  - if either `blue-percent` or `green-percent` is not specified, the other one MUST BE COMPUTED as the difference between 100 and the specified value
  - the sum of `blue-percent` and `green-percent` MUST BE EQUAL TO 100 (if it's not, the operation should fail)
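The `blue-percent` / `green-percent` rules above can be sketched as a small validation helper (the function name is hypothetical, not part of deepify):

```javascript
// Resolve blue/green percentages per the rules above: defaults are 0/100,
// a missing value is computed as (100 - other), and the sum must be 100.
function resolvePercents(bluePercent, greenPercent) {
  if (bluePercent == null && greenPercent == null) {
    return { blue: 0, green: 100 }; // documented defaults
  }
  if (bluePercent == null) bluePercent = 100 - greenPercent;
  if (greenPercent == null) greenPercent = 100 - bluePercent;
  if (bluePercent + greenPercent !== 100) {
    throw new Error('sum of blue-percent and green-percent MUST BE EQUAL TO 100');
  }
  return { blue: bluePercent, green: greenPercent };
}
```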
`deepify replicate` - manage data replication between blue-green deployments. Complete list of commands and parameters (required marked with star):
  - `deepify replicate [command] --blue-env X --green-env Y --resources A,B,C` - filter the list of DynamoDB tables in "blue env". Parameters:
    - `blue` (e.g. abcd1234, stage:abcd1234, etc)
    - `green` (e.g. wxyz0987, prod:wxyz0987, etc)
    - `tables` (e.g. list of tables, comma separated values)
    - `private-ignore` (e.g. ignore files in the private S3 bucket)
    - `public-ignore` (e.g. ignore files in the public S3 bucket)
  - `deepify replicate prepare` - enable streaming for each DynamoDB table in "blue env" and replicate older data into the corresponding DynamoDB table in "green env" using the "eventual consistency" approach; also upload the Lambda function(s) that will be used to replicate each stream
  - `deepify replicate status` - report the status of the replication (e.g. -100% ... -1% => catching up in the "prepare" phase; 0% ... 100% => catching up in the "stream" phase)
  - `deepify replicate start` - attach a Lambda function that will parse each DynamoDB stream in "blue env", replicate data into the DynamoDB table in "green env" using the "strong consistency" approach and remove data from the stream
  - `deepify replicate stop` - detach the Lambda function from the corresponding (or all) DynamoDB streams in "blue env"
  - `deepify replicate terminate` - remove the Lambda functions (whether attached or not) and the DynamoDB streams from "blue env"
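To illustrate the `start` phase, here is a sketch of the stream-processing logic such a replication Lambda could use (the `greenWriter` interface is hypothetical; a real function would wrap DynamoDB `PutItem`/`DeleteItem` calls against the "green env" table):

```javascript
// Mirror DynamoDB stream records from "blue env" into "green env".
// INSERT and MODIFY events carry the full item in NewImage;
// REMOVE events only carry the key attributes.
function replicateRecords(streamRecords, greenWriter) {
  for (const record of streamRecords) {
    if (record.eventName === 'REMOVE') {
      greenWriter.remove(record.dynamodb.Keys);
    } else {
      greenWriter.put(record.dynamodb.NewImage);
    }
  }
}
```

Injecting the writer keeps the replication logic testable without an AWS account, which matters given that testing is blocked by issue #365 above.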