Closed: MikeTheCanuck closed this issue 7 years ago.
Let's start from Assignment 7 of the DevOps class:

- `ECS_CLUSTER` and `ECS_PROJECT` environment variables in `env.sh` (or other mechanism)
- `config-ecs-cli.sh` runs the command `ecs-cli configure --region $AWS_REGION --access-key $AWS_ACCESS_KEY --secret-key $AWS_SECRET_KEY --cluster $ECS_CLUSTER`
- `docker-push.sh` runs the command `ecs-cli compose --project-name "$ECS_PROJECT" --file ecs-deploy.yml service up`
- `ecs-deploy.yml` is just a `docker-compose.yml` file by another name, but kept distinct from `docker-compose.yml` so we can track ECS-only configuration options

According to the ecs-cli docs, the `ecs-cli configure` command just stores the basic settings necessary to communicate with the ECS service about the cluster of interest, and the `ecs-cli compose` command defines the container as a persistent service and starts it.

IIUC, we can define a single cluster for all five Hack Oregon services, and then potentially deploy individual containers into that cluster, using a unique `ecs-deploy.yml` file to define a unique task for each service.
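A minimal sketch of how these pieces might fit together, using the file and variable names from the assignment (the region, key, cluster, and project values below are placeholders, not the real project's configuration):

```shell
#!/usr/bin/env sh
# env.sh -- placeholder values; real credentials come from your AWS account
export AWS_REGION="us-west-2"
export AWS_ACCESS_KEY="AKIA_PLACEHOLDER"
export AWS_SECRET_KEY="SECRET_PLACEHOLDER"
export ECS_CLUSTER="hackoregon-cluster"   # placeholder cluster name
export ECS_PROJECT="budget-api"           # placeholder project name

# config-ecs-cli.sh -- one-time local configuration of the ecs-cli tool
ecs-cli configure \
  --region "$AWS_REGION" \
  --access-key "$AWS_ACCESS_KEY" \
  --secret-key "$AWS_SECRET_KEY" \
  --cluster "$ECS_CLUSTER"

# docker-push.sh -- register the task definition and start it as a service
ecs-cli compose \
  --project-name "$ECS_PROJECT" \
  --file ecs-deploy.yml \
  service up
```

Note the quoting around `"$ECS_PROJECT"`: without the closing quote, the shell would swallow the rest of the command line into the project name.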
Question 1: in ECS parlance, can multiple "projects" belong to a single cluster? Are "tasks" and "projects" equivalent, or can you define multiple "tasks" per "project"?
Question 2: is it necessary to explicitly call `ecs-cli up`? If not, how/where do we specify `--instance-type=t2.micro`? Is it only necessary to call that once (i.e. is it acceptable to do this manually), or does it have to be re-run every time a change is made to the cluster or its projects/tasks? [Note: it's probably a good idea to write scripts for every step of the backend ECS deployment build - not only to leave a trail for others, but also to enable others to rebuild the environment if something catastrophic fails (rebuilding is easier than diagnosing, assuming decent "build from scratch" instructions are documented), and to enable others to augment the backend "cluster", e.g. from one to two nodes.]
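For reference, `ecs-cli up` is the step that creates the cluster's EC2 infrastructure, and it is where the instance type is specified. A hedged sketch (the key pair name is a placeholder, and flag spellings should be double-checked against the installed ecs-cli version):

```shell
# Create the cluster's EC2 instances once. Re-running should only be
# needed to change the cluster itself (e.g. node count or instance
# type), not to redeploy tasks into it.
ecs-cli up \
  --keypair my-keypair \
  --capability-iam \
  --size 1 \
  --instance-type t2.micro
```

Here `--capability-iam` acknowledges that ecs-cli may create IAM roles on our behalf, and `--size 1` requests a single node, which fits the single-t2.micro proposal below.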
Question 3: while deploying the first time is well-documented, what about updating a running task/project with a new image? How is that done? Does it take the app offline for the duration of startup? If we were running two instances in the cluster, would ECS automatically blue/green the deployment, so that while one is being upgraded the other is left alone (and if someone's request went to the being-upgraded container, a reload would likely direct them to the other one)? If not, how do we minimize downtime for end users, given that we're planning to automatically "push" any time a commit to master results in a successful build?
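One mechanism that seems relevant to Question 3 is ECS's deployment configuration, which bounds how many tasks may be stopped or started during an update. A sketch using the plain AWS CLI (the cluster and service names are placeholders):

```shell
# With two running tasks and minimumHealthyPercent=50, ECS can stop and
# replace one task at a time while the other keeps serving traffic --
# a rolling update rather than a true blue/green swap. maximumPercent=200
# lets ECS start the new task before stopping the old one when capacity
# allows.
aws ecs update-service \
  --cluster hackoregon-cluster \
  --service budget-api \
  --deployment-configuration maximumPercent=200,minimumHealthyPercent=50
```

This doesn't answer whether ecs-cli sets these values itself, but it suggests downtime during automated pushes is tunable rather than fixed.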
Links I've been reading:

- https://aucouranton.com/2015/10/15/migrating-the-monolith-from-ec2-to-an-ecs-based-multi-service-docker-app/
- https://github.com/prakhar1989/FoodTrucks/commit/33f5afcd85093b4229ff4c48caa1454ddfc9a512
As a stepping stone towards the ultimate CloudFormation architecture - and as a way to keep costs down during the initial development phase (e.g. the next 2-4 weeks) - I propose we instantiate a single EC2 Container Service instance where we deploy all five projects' backend containers. One EC2 t2.micro should enable all the teams to:
This is one stepping stone towards the scale-out, redundant infrastructure we're aiming for, and will help us focus on nailing down: