This repository contains Terraform configurations to provision AWS infrastructure consisting of an Elastic Beanstalk environment, a Lambda function for automatic deployments, Route 53 for DNS management, and an S3 bucket for object storage.
Create a new working directory and change into it:
mkdir <working_dir> && cd <working_dir>
Clone this repository:
git clone https://github.com/michaelkedey/aws_beanstalk.git
Change into the project repo:
cd aws_beanstalk/src/infrastracture
Run the format script to format all Terraform files:
./format_validate_all.sh
Review and customize the variables in the `.terraform.tfvars` files inside the `src/infrastracture/env/**` directories with your specific configuration details.
Review and customize the backend settings in the `backend.tfvars` files inside the `src/infrastracture/env/**` directories with your specific configuration details.
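As an illustration, a per-environment `backend.tfvars` for an S3 remote backend might look like the following (the bucket, key, and table names are placeholders, not values from this repo):

```hcl
# env/dev/backend.tfvars — hypothetical values for an S3 remote backend
bucket         = "my-terraform-state-bucket"          # replace with your state bucket
key            = "aws_beanstalk/dev/terraform.tfstate"
region         = "us-east-1"
dynamodb_table = "terraform-locks"                    # optional: state locking
```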
Initialize Terraform:
terraform init -backend-config="./env/**/backend.tfvars"
Plan Terraform:
terraform plan -var-file="./env/**/.terraform.tfvars"
Apply the Terraform configuration:
terraform apply -var-file="./env/**/.terraform.tfvars" -auto-approve
Follow the prompts to confirm the changes.
Note: it may be necessary to run the apply command a second time if the first run fails.
Destroy the resources after testing:
terraform destroy -var-file="./env/**/.terraform.tfvars" -auto-approve
- `vpc.tf` file under `src/infrastracture/modules/vpc/vpc.tf`
- `s3.tf` file under `src/infrastracture/modules/s3/s3.tf`
- `prod_env.tf` file under `src/infrastracture/modules/beanstalk/prod/prod_env.tf`
- prefix of `code_` and suffix of `.zip`
- `lambda_function.tf` file under `src/infrastracture/modules/lambda/lambda.tf`
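The `code_` prefix and `.zip` suffix act as object-key filters on the S3 event that invokes the Lambda. A minimal sketch of how such a filter is typically expressed in Terraform (resource names here are illustrative, not taken from this repo):

```hcl
# Hypothetical sketch: invoke the deployment Lambda only for keys like code_*.zip
resource "aws_s3_bucket_notification" "deploy_trigger" {
  bucket = aws_s3_bucket.uploads.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.deployer.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "code_"
    filter_suffix       = ".zip"
  }
}
```

Objects that do not match both filters are uploaded normally but never invoke the function.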
An elastic load balancer, security group, target groups, and listeners for the various services and ports have been provided. The elastic load balancer distributes incoming traffic across multiple instances of your web server, such as the EC2 instances defined in the Beanstalk environment. However, depending on your use case, you can associate this load balancer with the Beanstalk environment, or allow Beanstalk to create its own load balancer, as has been configured.
- `lb.tf` file under `src/infrastracture/modules/loadbalancer/lb.tf`
- `route.tf` file under `src/infrastracture/modules/route53/route.tf`
The CI/CD pipeline uses GitHub Actions workflows defined in YAML files in the `.github/workflows` directory. There are three YAML files for the infrastructure CI/CD, one for each stage of deployment: development (`dev_actions.yaml`), staging (`staging_actions.yaml`), and production (`prod_actions.yaml`). The dev stage is triggered by a commit to the `main` branch with paths `src/infrastructure/**`. The staging deployment is triggered by a successful completion of the `dev_actions.yaml` workflow. The production stage is triggered manually and requires manual approval, which ensures that production is deployed only after development and staging meet expectations.
The application CI/CD pipeline also uses GitHub Actions workflows defined in YAML files in the `.github/workflows` directory. There are three YAML files for the application CI/CD, one for each stage of deployment: development (`dev_s3.yaml`), staging (`staging_s3.yaml`), and production (`prod_s3.yaml`). The dev stage is triggered by a commit to the `s3_auto` branch with paths `src/s3_auto_uploads/**`. The staging deployment is triggered by a successful completion of the `dev_s3.yaml` workflow. The production stage is triggered manually and requires manual approval, so that production is deployed only after development and staging meet expectations.
The YAML files contain commands that sync the `s3_auto_uploads` directory with the respective buckets of the various deployment stages.
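A sync step in such a workflow might look like this (the step layout and bucket name are assumptions, not taken from the repo):

```yaml
# Sketch of a workflow step syncing the uploads directory to a stage bucket
- name: Sync uploads to the stage bucket
  run: aws s3 sync src/s3_auto_uploads s3://<stage-bucket-name>
```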
The sync uploads objects into the S3 buckets, which triggers the Lambda function if the uploaded objects satisfy the prefix and suffix criteria. The triggered Lambda copies the application update, then creates and deploys an application version on Elastic Beanstalk.
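As a rough illustration of that flow, a deployment Lambda (assuming a Python runtime; the handler shape, helper name, and environment variables below are hypothetical, not taken from this repo) might register and deploy the uploaded bundle like this:

```python
# Hypothetical sketch of the deployment Lambda; names and env vars are assumptions.
import os
import urllib.parse


def version_label_from_key(key: str, prefix: str = "code_", suffix: str = ".zip") -> str:
    """Derive an Elastic Beanstalk version label from the uploaded object key."""
    name = os.path.basename(urllib.parse.unquote_plus(key))
    if name.startswith(prefix):
        name = name[len(prefix):]
    if name.endswith(suffix):
        name = name[: -len(suffix)]
    return name


def handler(event, context):
    import boto3  # imported lazily so the pure helper stays testable offline

    eb = boto3.client("elasticbeanstalk")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    label = version_label_from_key(key)
    # Register the uploaded bundle as a new application version...
    eb.create_application_version(
        ApplicationName=os.environ["EB_APP_NAME"],
        VersionLabel=label,
        SourceBundle={"S3Bucket": bucket, "S3Key": key},
    )
    # ...then deploy that version to the target environment.
    eb.update_environment(
        EnvironmentName=os.environ["EB_ENV_NAME"],
        VersionLabel=label,
    )
```

The label helper is pure, so it can be checked without AWS access; the boto3 calls (`create_application_version`, `update_environment`) are the standard Elastic Beanstalk API operations.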
Access your Elastic Beanstalk application using the provided environment URL or custom domain (if configured).
Monitor Lambda function execution and deployment logs in AWS CloudWatch.
Three destroy YAML files are contained in the `.github/workflows` directory: `dev_destroy.yaml`, `staging_destroy.yaml`, and `prod_destroy.yaml`. They require manual approval to trigger the destroy workflow for the respective deployment levels.
$ tree
.
|-- Manulife_Auto_IaC_Design_Architecture.png
|-- README.md
`-- src
|-- app_versions
| `-- LambdaWebApp2.zip
`-- infrastracture
|-- env
| |-- dev
| | `-- backend.tfvars
| |-- prod
| | `-- backend.tfvars
| `-- staging
| `-- backend.tfvars
|-- format_validate_all.sh
|-- locals.tf
|-- main.tf
|-- modules
| |-- app
| | |-- app.tf
| | |-- outputs.tf
| | `-- variables.tf
| |-- app_version
| | |-- app_version.tf
| | |-- outputs.tf
| | |-- store.tf
| | `-- variables.tf
| |-- beanstalk
| | `-- prod
| | |-- beanstalk-ec2-policy.json
| | |-- beanstalk-service-policy.json
| | |-- beanstalk.tf
| | |-- outputs.tf
| | `-- variables.tf
| |-- dir_upload
| | |-- uploads.tf
| | `-- variables.tf
| |-- file_upload
| | |-- outputs.tf
| | |-- uploads.tf
| | `-- variables.tf
| |-- route53
| | |-- output.tf
| | |-- route.tf
| | `-- variables.tf
| `-- vpc
| |-- outputs.tf
| |-- providers.tf
| |-- store.tf
| |-- variables.tf
| `-- vpc.tf
|-- outputs.tf
|-- providers.tf
|-- s3_uploads
| |-- dotnet-linux.zip
| `-- dotnet-linux2.zip
`-- variables.tf
17 directories, 39 files
Feel free to contribute to this project by opening issues or creating pull requests.