michft / best-practices

Mozilla Public License 2.0

tst #1

Open michft opened 8 years ago

michft commented 8 years ago

Deploy a Best Practices Infrastructure in AWS

This project will deploy an end-to-end infrastructure in AWS that includes the below resources in us-east-1.

Take all instructions from Setup forward and paste them into a new "Issue" on your repository. This will allow you to check items off the list as they're completed and track your progress.

Note: Terraform creates real resources in AWS that cost money. Don't forget to destroy your PoC environment when finished to avoid unnecessary expenses.

Set Local Environment Variables

Set the below environment variables if you'll be using Packer or Terraform locally.

$ export AWS_ACCESS_KEY_ID=YOUR_AWS_ACCESS_KEY_ID
$ export AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_ACCESS_KEY
$ export AWS_DEFAULT_REGION=us-east-1
$ export ATLAS_USERNAME=YOUR_ORGANIZATION_NAME
$ export ATLAS_TOKEN=YOUR_ATLAS_TOKEN

Note: The environment variable ATLAS_USERNAME can be set to your individual username or your organization name in Atlas. Typically, this should be set to your organization name - e.g. hashicorp.

Generate Keys and Certs

There are certain resources in this project that require the use of keys and certs to validate identity, such as Terraform's remote-exec provisioners and TLS in Consul/Vault. To make onboarding quicker and easier, we've created gen_key.sh and gen_cert.sh scripts that can generate these for you.

Note: While using this for PoC purposes, these keys and certs should suffice. However, as you start to move your actual applications into this infrastructure, you'll likely want to replace these self-signed certs with certs that are signed by a CA and use keys that are created with your security principles in mind.
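The scripts likely wrap standard tooling; a minimal sketch of what generating a key pair and a self-signed cert looks like is below. The filenames and CN are placeholders, not the scripts' actual values.

```shell
# Hypothetical equivalent of gen_key.sh / gen_cert.sh; filenames and the
# CN are illustrative placeholders, not values taken from the scripts.
ssh-keygen -t rsa -b 2048 -N "" -f site.pem           # key pair for remote-exec provisioners
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout site.key -out site.crt \
  -subj "/CN=consul.example.com"                      # self-signed cert for Consul/Vault TLS
```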

Use the New Build Configuration tool to create each new Build Configuration below. Enter the names provided as you go through the checklist and be sure to leave the Automatically build on version uploads and Connect build configuration to a GitHub repository boxes unchecked for each.

After creating each Build Configuration, there is some additional configuration you'll need to do. A summary of what needs to be completed for each Build Configuration is below; the relevant values are provided as you go through the checklist.

Add Environment Variables

You can then go to "Builds" in the left navigation of each Build Configuration and click Queue build; this should create new artifact(s). You'll need to wait for the base artifact to be created before you queue any of the child builds, as we take advantage of Base Artifact Variable Injection.

You do NOT want to queue builds for aws-us-east-1-ubuntu-nodejs because this Build Template will be used by the application. Queueing a build for aws-us-east-1-ubuntu-nodejs will fail with the error * Bad source 'app/': stat app/: no such file or directory.

Base Artifact

Wait until the Base Artifact has been created before moving on to the child Build Configurations. These will fail with an error of * A source_ami must be specified until the Base Artifact has been created and selected.

For child Build Configurations, there is one additional step you need to take. In "Settings", set Inject artifact ID during build to aws-us-east-1-ubuntu-base for each.

We built artifacts for the us-east-1 region in this walkthrough. If you'd like to add another region, follow the Multi-Region setup instructions below.

If you decide to update any of the artifact names, be sure those name changes are reflected in your terraform.tfvars file(s).

Deploy a us-east-1 Node.js Application

Upload new versions of the application by merging a commit into master from your forked repo. This will upload your latest app code and trigger a Packer build to create a new compiled application artifact.

If you don't have a change to make, you can force an application ingress into Atlas with an empty commit.

$ git commit --allow-empty -m "Force a change in Atlas"

If you want to create artifacts in other regions, complete these same steps but select a Build Template from the region you'd like.

Provision the aws-global Environment

This same process can be repeated for the aws-us-east-1-staging environment as well as any other regions you would like to deploy infrastructure into. If you are deploying into a new region, be sure you have Artifacts created for it by following the Multi-Region steps below.
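Pushing an environment into Atlas from a local checkout can be sketched as follows. This assumes the Atlas-era Terraform CLI (`terraform remote config` and `terraform push`); the directory path and environment name are illustrative and should match your repository layout.

```shell
# Sketch only: push one environment's configuration to Atlas for a plan/apply.
# The path and environment name below are assumptions, not fixed values.
cd terraform/providers/aws/us_east_1_staging
terraform remote config -backend-config="name=$ATLAS_USERNAME/aws-us-east-1-staging"
terraform push -name="$ATLAS_USERNAME/aws-us-east-1-staging"
```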

Setup Vault

An HA Vault should have already been provisioned, but you'll need to initialize and unseal Vault to make it work. To do so, SSH into each of the newly provisioned Vault instances and follow the instructions below. The output from your apply in Atlas will tell you how to SSH into Vault.
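The initialize/unseal flow looks roughly like the following. This uses the older Vault CLI syntax of that era (newer versions use `vault operator init` / `vault operator unseal`), and the key-share counts are illustrative; initialize only once, then unseal every instance.

```shell
# Illustrative sketch. Run `init` once for the cluster; it prints the unseal
# keys and the initial root token -- store them somewhere safe.
vault init -key-shares=5 -key-threshold=3

# On each Vault instance, repeat `unseal` with distinct unseal keys until
# the threshold is reached.
vault unseal
vault unseal
vault unseal

vault status   # confirm the server reports it is no longer sealed
```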

After Vault is initialized and unsealed, update the below variable(s) and apply the changes. Next time you deploy your application, you should see the Vault/Consul Template integration working in your Node.js website!

You'll eventually want to configure Vault specific to your needs and setup appropriate ACLs.
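A minimal ACL sketch, for illustration only: the policy name, secret path, and capabilities are placeholders you'd replace with your own, and the commands use the older hyphenated CLI (newer Vault uses `vault policy write` / `vault token create`).

```shell
# Hypothetical read-only policy; path and name are placeholders.
cat > app-read.hcl <<'EOF'
path "secret/app/*" {
  policy = "read"
}
EOF

vault policy-write app-read app-read.hcl   # register the policy
vault token-create -policy="app-read"      # mint a token limited to that policy
```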

Multi-Region

If you'd like to expand outside of us-east-1, there are a few changes you need to make. We'll use the region us-west-2 as an example of how to do this.

In the base.json Packer template...

Add a new variable for the new region's AMI and a new variable for the new Build name. Note that the AMI will need to be from the region you intend to use.

"us_west_2_ami":   "ami-8ee605bd",
"us_west_2_name":  "aws-us-west-2-ubuntu-base",

Add an additional builder for the new region

{
  "name":            "aws-us-west-2-ubuntu-base",
  "type":            "amazon-ebs",
  "access_key":      "{{user `aws_access_key`}}",
  "secret_key":      "{{user `aws_secret_key`}}",
  "region":          "us-west-2",
  "vpc_id":          "",
  "subnet_id":       "",
  "source_ami":      "{{user `us_west_2_ami`}}",
  "instance_type":   "t2.micro",
  "ssh_username":    "{{user `ssh_username`}}",
  "ssh_timeout":     "10m",
  "ami_name":        "{{user `us_west_2_name`}} {{timestamp}}",
  "ami_description": "{{user `us_west_2_name`}} AMI",
  "run_tags":        { "ami-create": "{{user `us_west_2_name`}}" },
  "tags":            { "ami": "{{user `us_west_2_name`}}" },
  "ssh_private_ip":  false,
  "associate_public_ip_address": true
}

Add an additional post-processor for the new region

{
  "type": "atlas",
  "only": ["aws-us-west-2-ubuntu-base"],
  "artifact": "{{user `atlas_username`}}/{{user `us_west_2_name`}}",
  "artifact_type": "amazon.image",
  "metadata": {
    "created_at": "{{timestamp}}"
  }
}

Once the updates to base.json have been completed and pushed to master (this should trigger a new Build Configuration to be sent to Atlas), complete the Child Artifact steps with the new region instead of us-east-1 to build new artifacts in that region.

To deploy these new artifacts...

In each of the new "us_west_2" terraform.tfvars files...
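The entries you'd update might look something like the following; the variable names here are illustrative and should mirror whatever your us-east-1 terraform.tfvars files actually use.

```hcl
# Illustrative terraform.tfvars entries for us-west-2; match the variable
# names to the ones in your us-east-1 files.
atlas_username     = "YOUR_ORGANIZATION_NAME"
region             = "us-west-2"
base_artifact_name = "aws-us-west-2-ubuntu-base"
```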

Finally, push these new environments to master and follow the same steps you completed to deploy your environments in us-east-1.

Terraform Destroy

If you want to destroy the environment, run the following command in the appropriate environment's directory:

$ terraform destroy -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"

There is currently an issue when destroying the aws_internet_gateway resource that requires you to run terraform destroy a second time, as the first attempt fails.
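A simple workaround sketch is to retry the destroy once if the first run fails. The `-force` flag (which skips the confirmation prompt in Terraform of that era; newer versions use `-auto-approve`) is shown only so the retry can run unattended.

```shell
# Retry once if the first destroy fails on aws_internet_gateway.
terraform destroy -force -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME" ||
terraform destroy -force -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"
```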

Note: terraform destroy deletes real resources; it is important that you take extra precaution when using this command. Verify that you are in the correct environment, verify that you are using the correct keys, and set any extra configuration necessary to prevent someone from accidentally destroying infrastructure.

michft commented 8 years ago

dnsimple