Closed: jsieber closed this issue 7 years ago.
Deploying the mobile clients simply requires doing a device build for each platform (detailed here): https://github.com/Tour-de-Force/btc-app/blob/master/README.md#device https://github.com/Tour-de-Force/btc-app/blob/master/README.md#device-1
Deploying the web client to AWS is also detailed here: https://github.com/Tour-de-Force/btc-app/blob/master/README.md#deploying-for-web
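For reference, the web deploy in that README amounts to pushing a static build to the btc-app-web bucket that CloudFront serves. A minimal sketch with the AWS CLI; the build command and the dist/ output directory are assumptions here, so check the linked README for the actual names:

```shell
# Build the web client (build command and output directory are assumptions;
# see the linked README for the project's actual build step)
npm run build

# Sync the static build output to the S3 bucket the web client is served from.
# --delete removes files from the bucket that are no longer in the build.
aws s3 sync dist/ s3://btc-app-web --delete
```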
Deploying the server to AWS is detailed here: https://github.com/Tour-de-Force/btc-app-server/blob/master/README.md#updating-the-aws-server
Unfortunately, listing out each step to re-set up AWS from scratch would be a long task; I would have to work out all the steps first. I'm fairly sure everything that doesn't exist as code lives only in the AWS console.
I will address each item in your list next.
CloudFront Distributions - This is correct. It only serves as the front end to the S3 bucket btc-app-web, for SSL purposes. From scratch, you would just need to set up a new CloudFront distribution and point it at the S3 bucket containing the web files.
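That from-scratch setup can be sketched with the AWS CLI. This is only an outline: the shorthand below creates a distribution with default settings, and the SSL certificate would still have to be attached afterwards in the console or via a full --distribution-config:

```shell
# Create a CloudFront distribution whose origin is the btc-app-web S3 bucket.
# This shorthand form applies default cache/viewer settings; the custom SSL
# certificate must still be attached separately (console or
# --distribution-config with a ViewerCertificate block).
aws cloudfront create-distribution \
  --origin-domain-name btc-app-web.s3.amazonaws.com \
  --default-root-object index.html
```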
S3 buckets - You could probably ditch the bicycle-touring-companion bucket; we didn't use it, and it looks like an older version of btc-app-web. You could probably consolidate btc-infrastructure and btc-infrastructure-dev and just tell the dev stack in OpsWorks to get its scripts at the new URL. We're currently not using the track-tile-packages bucket, but it could be useful for offline tile work.
Amazon Route 53 - No, the SSL configuration should all be in CloudFront (for the client) and EC2 (for the load balancers).
RDS - See here for a discussion of the tile exploration: https://github.com/Tour-de-Force/btc-app/issues/222
VPC Configuration - I never needed to view or edit this.
Elastic Load Balancer - Correct, and you can view a more high-level version of this in OpsWorks. In general, OpsWorks is a good place to see an overview of the "app". The EC2 view can give details for the 2 balancers and allow port and SSL configuration.
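If you want those same details from the CLI rather than the EC2 console, something like this should work for the two classic load balancers (the load balancer name and certificate ARN below are placeholders, not the real values):

```shell
# List both load balancers with their listeners (ports, SSL certificate ARNs)
aws elb describe-load-balancers \
  --query 'LoadBalancerDescriptions[].{Name:LoadBalancerName,Listeners:ListenerDescriptions}'

# Swap the SSL certificate on the HTTPS listener of one balancer
# (name and certificate ARN are placeholders)
aws elb set-load-balancer-listener-ssl-certificate \
  --load-balancer-name btc-server-lb \
  --load-balancer-port 443 \
  --ssl-certificate-id arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE
```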
Certificate Manager - Yes, the certificate would have to be renewed by April 7, 2018 and then updated in CloudFront (for the web client) and EC2 (for the 2 load balancers).
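A hedged sketch of that renewal via the ACM CLI. Note that certificates used by CloudFront must live in us-east-1; with DNS validation, ACM gives you a CNAME record to add in Route 53:

```shell
# Request a replacement certificate. ACM certificates attached to CloudFront
# must be in us-east-1. DNS validation is completed by adding the CNAME
# that ACM returns to the Route 53 hosted zone.
aws acm request-certificate \
  --region us-east-1 \
  --domain-name bicycletouringcompanion.com \
  --validation-method DNS

# Confirm issuance and grab the new ARN for CloudFront and the load balancers
aws acm list-certificates --region us-east-1
```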
Closing as answered and linking from the README.
Before the end of the year, we would appreciate the following documentation. Some of this documentation may already exist and we just need to know where it is located, other parts probably still need to be created.
We need deployment guides for setting up the Amazon infrastructure from the GitHub project. I see the documentation listed here: https://github.com/Tour-de-Force/btc-infrastructure but it appears to be for running locally in a virtual machine. I'm hoping for documentation that outlines, step by step, how we could build out the necessary infrastructure from the existing projects in the Tour-de-Force GitHub account.
It appears that we have the following items running in AWS. Let me know if I'm missing any or included something we don't need.
CloudFront Distributions - I think this runs the public facing web view from Amazon S3. What are the steps for setting this up from scratch?
S3 buckets - It appears we have 7 buckets, each designated for different parts of the app. Do we still need them all?
Amazon Route 53 - It appears that we have 1 hosted zone handling DNS for the server, the CouchDB instance, and the website. Is there any SSL-related configuration located here?
RDS - We have a snapshot of the map-tile database. We are not using this at the moment, but any documentation about this for a future team would be great.
VPC Configuration - I see that there are 5 subnets (I'm guessing one for each possible instance), an Internet Gateway, a network ACL, an Elastic IP, a route table, and 23 security groups.
Elastic Load Balancer - Two load balancers are currently defined. It appears one is for the server and the other for the CouchDB instance.
Certificate Manager - I see we have a certificate set up for bicycletouringcompanion.com
Am I missing other items? A step-by-step guide for standing up an entirely new environment within AWS would be fantastic. We can discuss further in the meeting tomorrow.