This repository is a collection of tools that builds an entire platform around a microservices strategy. It uses a Jenkins build system that also serves as a central processing unit. It constructs an ephemeral CI/CD Jenkins instance, places it into a dedicated network, and returns Docker build images to the services on that network. It then spins up Kubernetes clusters to load-balance Jenkins through prioritization, and finally operates a set of dedicated worker machines that serve as the full-stack components of our deployments within our agnostic playground. After a successful deployment, a Kubernetes interface is introduced to assist with load balancing, managed at the development, staging, and production levels.
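As a rough illustration of the ephemeral pattern (a sketch only; the network and container names are made up, not this repo's actual launch code):

```bash
# Hypothetical sketch of the ephemeral CI/CD pattern described above;
# the names ci-net and ci-jenkins are illustrative only.
docker network create ci-net                 # dedicated network for the build system

docker run -d --name ci-jenkins \
  --network ci-net \
  -p 8080:8080 \
  jenkins/jenkins:lts                        # throwaway Jenkins instance

# Builds produced here are tagged and handed back to services on ci-net;
# when the pipeline finishes, the instance is torn down:
docker rm -f ci-jenkins && docker network rm ci-net
```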
Tools Used | Description | Completed |
---|---|---|
MongoDB | Clexys.com | yes |
NodeJs | Runs the backend and front end | yes |
AWS | Runs the NodeJS playground using custom Docker images | yes |
Linux | Primary hosts for the K8s test environment | yes |
Docker | Namespaces that create lean memory footprints on Linux VM production servers | yes |
Kubernetes | Pods that provide managed services to businesses | yes |
Network & Auth Services | LDAP, DHCP, firewall, proxy, NAT, WINS, DNS, forwarders / reverse traffic | no |
For those of us who have bad memories and juggle many projects, it is easy to miss the basics and forget to record good practices. This repo enforces a systematic approach to launching an entire enterprise infrastructure that is ready to test and deploy applications.
There is a launch script within the launch folder; it provides a menu-driven system that lets you select what your desired system should look like. If you have little time and need to stand up a quick, reliable, and repeatable environment with efficient use of resources, the launch script is the fastest route. A minimal sketch of such a menu follows.
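This is only a sketch of what such a menu could look like; the real script lives in the launch folder and its options may differ:

```bash
#!/usr/bin/env bash
# Hypothetical menu-driven launcher using bash's built-in select loop.
PS3="Select the environment to stand up: "
select target in "development" "staging" "production" "quit"; do
  case "$target" in
    development|staging|production)
      echo "Standing up the $target environment..."
      # terraform apply / ansible-playbook would run here
      break ;;
    quit) exit 0 ;;
    *) echo "Invalid selection" ;;
  esac
done
```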
If we don't have our IaC, we will spend all of our time building and have no time to actually create the products our present and future customers want. It is also good to practice our complete disaster recovery skills and to keep enriching our CI/CD, so our skills stay fresh and our environment stays easy to operate. Environments will get old, and one day we may need them to recover our life's work.
Not to mention that people will make human errors. Nothing is worse than getting a 2 a.m. call and having to perform a task while half asleep.
This will create a universal system of Jenkins pipelines that can be transported to other cloud providers using cloud-agnostic tools. It starts with AWS and will later integrate with other providers.
Use of this repo constitutes your willingness to read and make your own decisions. This repo is for educational and experimental use only; all liability and legal matters are forfeited by both reader and writer, this walkthrough and all of its documentation provide no guarantee of services, and content may change without notice. Nothing here constitutes an agreement with, or liability for, AlcoTerra, Halfaway Services, or any company that Corey Albright works with or for, now or afterwards. This tutorial assumes the use of a macOS device, the author's preferred system of choice. It is recommended that users find an equivalent system such as Linux, or have access to a terminal-based system like Windows Subsystem for Linux, to proceed with the documentation.
[x] Create AWS auth keys
[x] Add User to Admin Group
[x] Install the AWS CLI
[x] Test your AWS CLI (see the sketch after this list)
[x] Setup access to create VPCs
[x] Test your access to create VPCs
[x] Give your account S3 permissions
[x] Use a Terraform configuration file to stand up a production environment
[x] Test some Bash scripts first as we need to ensure Ansible has the needed resources
[x] Use Ansible to configure the previous environment that Terraform created
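A quick way to run the test steps in the checklist above, assuming the AWS CLI is installed and your auth keys are configured:

```bash
# Sanity checks for the checklist's test steps.
aws --version                                  # install check
aws sts get-caller-identity                    # confirms your auth keys work
aws ec2 describe-vpcs --query 'Vpcs[].VpcId'   # confirms VPC access
aws s3 ls                                      # confirms S3 permissions
```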
We all know the lifecycle of infrastructure is short, and much deprecation will happen after I write this repo in full; to save time, it is best to build a system that allows growth. Logging in and performing every action via the web GUI is lengthy, and in the event things get compromised, or we lose a system because of its integrity or reliability, you will need to be properly authenticated to manage AWS services.
- Within your first 12 months on AWS, you can use up to 5 GB/month on the EFS Standard storage class for free.
Assume your file system is located in the US East (N. Virginia) region, uses 100 GB of EFS Standard storage, and uses 400 GB of EFS Infrequent Access storage for the entirety of a 31 day month. At the end of the month, you would have the following usage in GB-Hours:
- Total EFS Standard usage: 100 GB x 31 days x 24 hours/day = 74,400 GB-hours
- Total EFS IA usage: 400 GB x 31 days x 24 hours/day = 297,600 GB-hours

We add up GB-hours and convert to GB-months to calculate the monthly charges:

- Total EFS Standard storage charge: 74,400 GB-hours x (1 month / 744 hours) x $0.30/GB-month = $30.00
- Total EFS IA storage charge: 297,600 GB-hours x (1 month / 744 hours) x $0.025/GB-month = $10.00
- Total monthly storage charge: $30.00 + $10.00 = $40.00, or $0.08/GB-month
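If you want to sanity-check the GB-hours to GB-month conversion, the same arithmetic can be run in a shell (a throwaway check with bc, not part of this repo's tooling):

```bash
# Verify the EFS example: GB x days x 24 hours, / 744 hours-per-month, x price.
echo "scale=2; 100*31*24/744*0.30" | bc    # EFS Standard charge ($30.00)
echo "scale=2; 400*31*24/744*0.025" | bc   # EFS IA charge ($10.00)
```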
- Tiered price for 120 GB: 120 GB x 0.023 USD = 2.76 USD; total tier cost = 2.76 USD (S3 Standard storage cost)
- 1,000 PUT requests x 0.000005 USD per request = 0.005 USD (S3 Standard PUT requests cost)
- 1,000 GET requests in a month x 0.0000004 USD per request = 0.0004 USD (S3 Standard GET requests cost)
- 120 GB x 0.0007 USD = 0.084 USD (S3 Select returned cost)
- 120 GB x 0.002 USD = 0.24 USD (S3 Select scanned cost)
- 2.76 USD + 0.0004 USD + 0.005 USD + 0.084 USD + 0.24 USD = 3.09 USD (total S3 Standard storage, data requests, and S3 Select cost)
- S3 Standard cost (monthly): 3.09 USD
I pick the slower option since this is going to be a "cheap" sample env
This is the command-line interface to AWS services.
This is great for versioning: it helps you track changes and create tests that can be staged and later moved to production. The rules are simple: each time you make a change to production, you overwrite the test and staging environments with new Git commits.
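As a concrete sketch of that rule, assuming hypothetical branches named production, staging, and test track the three environments:

```bash
# After a change lands on production, overwrite the other two
# environment branches with its state. Branch names are assumptions.
git fetch origin
git checkout production && git pull origin production

for env in staging test; do
  git checkout "$env"
  git reset --hard production               # overwrite with production's state
  git push --force-with-lease origin "$env"
done
```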
This eliminates the expensive and lengthy process of standing up environments and systems. Each time a system fails, needs to be upgraded, or is restarted, it will take a fraction of the time. A virtual machine is also a great host for the Docker service, which will sit inside our EC2 instance.
Great for making a contract ... a plan to spin up entire environments that can be used immediately, eliminating the need to manually set up resources. Simply look up the provider and its syntax to implement resources from companies like Docker, Google, Amazon, Microsoft, and more. A minimal sketch of that workflow follows.
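This is not this repo's actual configuration; the region and CIDR block below are placeholder values for illustration:

```bash
# Write a tiny Terraform file for the AWS provider, then run the
# standard init/plan/apply workflow against it.
cat > main.tf <<'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "playground" {
  cidr_block = "10.0.0.0/16"
}
EOF

terraform init      # download the AWS provider
terraform plan      # preview the resources Terraform would create
terraform apply     # stand up the environment
```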
I have chosen Kubernetes because I like the way Google manages containers. Also, the Terraform plug-in for Docker references Swarm, a competitor of Kubernetes, and although Swarm may be a great application, I find that more companies are using Kubernetes to get things done faster.
Docker with Terraform - Docker is a container platform that enables developers and IT operations to build, secure, and manage applications without technology or infrastructure lock-in. By bringing together traditional applications and microservices built on Windows, Linux, and mainframe under one operating model, Docker's container platform enables companies to accelerate key digital initiatives, including cloud migration, application modernization, and edge computing.
Jenkins is basically a launcher and a good manager for pipelines.
It is difficult to know what your customers or bosses want without a plan; therefore, all organized people should have a kanban board.