DoSomething / bertly

🔗 A serverless link shortener.
https://dosome.click/wq544
MIT License

Provision resources with CloudFormation. #33

Closed DFurnes closed 6 years ago

DFurnes commented 6 years ago

Right now we still provision resources manually (and we don't have a VPC set up on the development organization, so we weren't able to hook up to the ElastiCache & RDS instances we'd made there). Ideally, we want CloudFormation to handle this for us so that we can feel confident we have the same environment everywhere!

This pull request is a work-in-progress to see whether we can set that up without too much trouble, and by doing so hopefully get a functional development environment for #21. :v: The work so far takes some heavy inspiration from this Serverless example (found via the official examples repo).
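For anyone following along, the networking half of the `resources` section in `serverless.yml` looks roughly like the sketch below (logical IDs and CIDR ranges are illustrative, borrowed from the Serverless example linked above, not necessarily the exact values in this PR):

```yaml
resources:
  Resources:
    # Illustrative VPC, private subnet, and security group for the
    # RDS & ElastiCache instances the functions will talk to.
    ServerlessVPC:
      Type: AWS::EC2::VPC
      Properties:
        CidrBlock: "10.0.0.0/16"
    ServerlessSubnetA:
      Type: AWS::EC2::Subnet
      Properties:
        VpcId:
          Ref: ServerlessVPC
        AvailabilityZone: ${self:provider.region}a
        CidrBlock: "10.0.0.0/24"
    ServerlessSecurityGroup:
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: Security group for the Bertly Lambda functions
        VpcId:
          Ref: ServerlessVPC
```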

DFurnes commented 6 years ago

For later reference, here are the properties available for customizing RDS and ElastiCache as CloudFormation resources.
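As a sketch of what those resources look like in the stack (instance sizes, engine versions, and logical IDs here are placeholders, not the values used in this PR):

```yaml
resources:
  Resources:
    # Illustrative ElastiCache (Redis) and RDS (Postgres) definitions.
    ElastiCacheCluster:
      Type: AWS::ElastiCache::CacheCluster
      Properties:
        Engine: redis
        CacheNodeType: cache.t2.micro
        NumCacheNodes: 1
        CacheSubnetGroupName:
          Ref: ServerlessElasticacheSubnetGroup
        VpcSecurityGroupIds:
          - Fn::GetAtt: [ServerlessSecurityGroup, GroupId]
    RDSInstance:
      Type: AWS::RDS::DBInstance
      Properties:
        Engine: postgres
        DBInstanceClass: db.t2.micro
        AllocatedStorage: "20"
```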

DFurnes commented 6 years ago

When running this on dev (with npm run deploy:dev), I ran into the following error:

  An error occurred: ServerlessElasticacheSubnetGroup - API: elasticache:CreateCacheSubnetGroup User: arn:aws:iam::083405473309:user/serverless-admin-dev is not authorized to perform: elasticache:CreateCacheSubnetGroup.

For now, I'm going to manually attach AmazonElastiCacheFullAccess to the serverless-admin-dev user on our development organization (we can reduce privileges later).
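When we get around to reducing privileges, a scoped-down replacement for `AmazonElastiCacheFullAccess` could look something like this (a sketch of an inline policy document, not the managed policy itself — the action list would need to be checked against what `sls deploy` actually calls):

```yaml
# Hypothetical least-privilege policy for the serverless-admin-dev user,
# covering just the ElastiCache calls the deploy needs.
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Sid: AllowElastiCacheProvisioning
      Effect: Allow
      Action:
        - elasticache:CreateCacheSubnetGroup
        - elasticache:DeleteCacheSubnetGroup
        - elasticache:CreateCacheCluster
        - elasticache:DeleteCacheCluster
        - elasticache:Describe*
      Resource: "*"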

DFurnes commented 6 years ago

Alright, progress! I had to specify an admin username & password for the RDS instance before it could provision, so I ended up creating those values in the SSM parameter store & referencing them in serverless.yml. That let me finish provisioning the stack on dev: the RDS & ElastiCache instances were created successfully, with their host & port environment variables attached to the function.
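For reference, the parameters live in SSM (created with `aws ssm put-parameter --type SecureString`), and Serverless resolves them at deploy time. A sketch with hypothetical parameter names (the `~true` suffix decrypts a SecureString in the Serverless v1 variable syntax):

```yaml
resources:
  Resources:
    RDSInstance:
      Type: AWS::RDS::DBInstance
      Properties:
        # Resolved from the SSM parameter store at deploy time;
        # parameter paths here are illustrative.
        MasterUsername: ${ssm:/bertly/dev/rds-username}
        MasterUserPassword: ${ssm:/bertly/dev/rds-password~true}
```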

Currently looking into why the Lambda function is timing out while trying to talk to ElastiCache:


TimeoutError: Timeout connecting to server
...
File "/var/task/bertly.py", line 76, in get_key_for_url

DFurnes commented 6 years ago

I had forgotten to add the Lambda function to the VPC, which would explain it. Fixed in c8533d8, but still getting a timeout on Redis. Going to fetch some lunch and maybe inspiration will strike! 🍕
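For the record, attaching the function to the VPC is only a few lines of `serverless.yml` (handler path and logical IDs here are illustrative, not the exact names in c8533d8):

```yaml
functions:
  bertly:
    # Hypothetical handler path for the main function in bertly.py.
    handler: bertly.handler
    vpc:
      securityGroupIds:
        - Fn::GetAtt: [ServerlessSecurityGroup, GroupId]
      subnetIds:
        - Ref: ServerlessSubnetA
```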

DFurnes commented 6 years ago

Aha! I had the wrong port hooked up (copied from the example, which uses Memcached).
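The two engines expose different CloudFormation attributes (and listen on different default ports: 6379 for Redis, 11211 for Memcached), so copying the Memcached example's `ConfigurationEndpoint` wires up the wrong one. A sketch of the fix, assuming the cluster's logical ID is `ElastiCacheCluster`:

```yaml
environment:
  # Redis clusters expose RedisEndpoint (default port 6379)...
  REDIS_HOST:
    Fn::GetAtt: [ElastiCacheCluster, RedisEndpoint.Address]
  REDIS_PORT:
    Fn::GetAtt: [ElastiCacheCluster, RedisEndpoint.Port]
  # ...while Memcached clusters expose ConfigurationEndpoint
  # (default port 11211), which is what the example used.
```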

DFurnes commented 6 years ago

Alright, up and running! 🚀 Last step is to run migrations on the new RDS instance.

DFurnes commented 6 years ago

I ended up adding a second function in order to run migrations. This feels wrong, but I can't seem to find any other particularly good ways of handling this (save with DynamoDB, oy! 🙃). For now, I've added a migrate function in 1b13484, which is automatically hooked up to the same environment and resources as the main Bertly function.
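The second function is just another entry under `functions:` pointing at a migration handler. A sketch of the shape (the handler path is a guess, not the exact code in 1b13484):

```yaml
functions:
  migrate:
    # Hypothetical handler path; runs any pending migrations, then exits.
    handler: migrate.handler
    timeout: 300  # migrations can easily outlive the default 6-second timeout
    vpc:
      securityGroupIds:
        - Fn::GetAtt: [ServerlessSecurityGroup, GroupId]
      subnetIds:
        - Ref: ServerlessSubnetA
```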

To run migrations:

$ aws lambda invoke --function-name bertly-dev-migrate --profile serverless-dev output.txt

This gives us a functional dev environment, with the infrastructure 100% in code! Neat! Here's the Lambda function (where you can snag the API key), and an example shortlink and click counter.

DFurnes commented 6 years ago

This is ready for review! Next step is making some of the hard-coded dev strings read from the current stage so we can adapt this to qa and production stages, and then figuring out how we'd want to migrate from our current Compose Postgres & Redis to EC2-provisioned ones.

DFurnes commented 6 years ago

It'd also be worth investigating whether we should provision a VPC and all that jazz in a separate Serverless stack, and then import them so they could be shared among all our Lambda apps.
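CloudFormation supports this pattern via stack exports: the shared networking stack exports values, and each app stack imports them by name. A sketch (export names are illustrative):

```yaml
# In the shared networking stack:
Outputs:
  SharedVpcId:
    Value:
      Ref: SharedVPC
    Export:
      Name: shared-vpc-id

# Then in each app stack (e.g. Bertly's serverless.yml resources),
# reference it instead of a locally-defined VPC:
#   VpcId:
#     Fn::ImportValue: shared-vpc-id
```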

DFurnes commented 6 years ago

Added support for multiple stages in e4f74c0.

DFurnes commented 6 years ago

My understanding is that those are the resources' logical IDs, which are local to that particular stack template. Each resource then gets an auto-generated physical ID that won't conflict (e.g. vpc-0ac3c9fa9cb3f644e for this stack's VPC... lovely).

I agree it makes sense to rename them, though, to be clearer that they're Bertly-specific!

DFurnes commented 6 years ago

Just to double-check (and test out e4f74c0), I tried spinning up a separate Bertly QA stage. It created brand-new infrastructure, so we now have two VPCs, for example (although renaming the existing resources to Bertly* did rebuild all the infrastructure for the dev stage, oh well):

$ aws ec2 describe-vpcs --profile serverless-dev --query 'Vpcs[*].[VpcId,Tags[?Key==`aws:cloudformation:stack-name`][Value]]'

vpc-bee7e6c5              None
vpc-0f05853c94cab2e32     bertly-dev
vpc-046a174786ea25a08     bertly-qa
mshmsh5000 commented 6 years ago

Very nice! Looks 💯 to me!