Eximchain / terraform-aws-dappbot

Terraform infrastructure to run ABI Clerk

Create Shared Infra, Per-Dapp Source Architecture #8

Open Lsquared13 opened 5 years ago

Lsquared13 commented 5 years ago

Overview

This issue describes how we could update our infrastructure using Lambda@Edge and S3 redirect rules to serve every dapp's source from one Bucket+Distribution, while still letting them have custom domain names. The final costs are similar to what we have projected at this point, as the amount of data stored in S3 and pushed through Cloudfront will not change. We will, however, be able to scale arbitrarily without worrying about running into AWS resource limits. This strategy is also the most straightforward to implement given our current system, so we'll shoot for this first.

Request Flow

In this scenario, the user's dapp is found at name.dapp.bot.

When the user deploys, abi-clerk-lambda creates the CodePipeline which eventually outputs the static bundle, but instead of placing the bundle in a dedicated bucket, it places everything behind the name/ prefix. We may need to unpack the bundle ourselves.
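If we do end up unpacking the bundle ourselves, the core of it is just reading the CodePipeline artifact zip and re-keying every file under the dapp's prefix. A minimal sketch (the function name and the dict-of-objects return shape are hypothetical; the real version would `put_object` each entry into the shared bucket instead):

```python
import io
import zipfile

def unpack_bundle(bundle_bytes, dapp_name):
    """Read a CodePipeline artifact zip and re-key every file under the
    dapp's name/ prefix. Returns {key: bytes}; a real implementation
    would upload each entry to the shared bucket instead."""
    objects = {}
    with zipfile.ZipFile(io.BytesIO(bundle_bytes)) as bundle:
        for member in bundle.namelist():
            if member.endswith("/"):
                continue  # skip directory entries
            objects["{}/{}".format(dapp_name, member)] = bundle.read(member)
    return objects
```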

When the user goes to name.dapp.bot, their request is routed to our Cloudfront distribution, which invokes a Lambda@Edge function. This function can transform the request: assuming it has access to the hostname, it rewrites name.dapp.bot into single-cloudfront.net/name. The rewritten request reaches the S3 bucket and returns the name/index.html object.
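A sketch of what that edge function could look like as a Python origin-request handler. The event shape follows the Lambda@Edge CloudFront request event format; the assumption that the first hostname label is the dapp name is ours:

```python
def handler(event, context):
    """Origin-request Lambda@Edge sketch: rewrite name.dapp.bot/path
    into /name/path before the request hits the shared S3 origin."""
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"]  # e.g. "name.dapp.bot"
    dapp_name = host.split(".")[0]  # assumes the subdomain is the dapp name
    request["uri"] = "/" + dapp_name + request["uri"]
    return request
```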

When somebody on the name.dapp.bot page tries to navigate to a route (e.g. name.dapp.bot/method), Cloudfront will transform the S3 request into name/method. This file does not exist in S3, and since each dapp has its own index.html, we can no longer rely on the error document. We instead use S3 redirects to say that any request prefixed with name/ which does not hit an object will instead return name/index.html.
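That redirect behavior maps onto S3's website RoutingRules. A hedged sketch of the rule we would attach per dapp (the field names follow the S3 website configuration API; the helper itself is hypothetical). One caveat worth verifying: S3 limits how many routing rules a single website configuration can hold, so one-rule-per-dapp may hit its own cap.

```python
def routing_rule_for(dapp_name):
    """Build one S3 website RoutingRule: any 404 under name/ is
    redirected back to name/index.html."""
    return {
        "Condition": {
            "KeyPrefixEquals": dapp_name + "/",
            "HttpErrorCodeReturnedEquals": "404",
        },
        "Redirect": {
            "ReplaceKeyWith": dapp_name + "/index.html",
        },
    }
```

Each new dapp would append one of these to the bucket's WebsiteConfiguration (e.g. via boto3's `put_bucket_website`).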

Changes Required

  1. One shared S3 bucket and Cloudfront distribution serving every dapp, replacing the per-dapp resources.
  2. Big abi-clerk-lambda changes. Each time we create a new Dapp:
    1. Create a Route 53 record pointed at the shared distro
    2. Update the distro to accept the new domain as an alias
    3. Update the S3 bucket's website config with a routing rule: KeyPrefixEquals: name/ with HttpErrorCodeReturnedEquals: 404 redirects to name/index.html
  3. New Lambda@Edge function for transforming the requests.
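For step 2.1, the change batch we would send through Route 53's `change_resource_record_sets` could look like the sketch below (the function name and default zone value are assumptions; `Z2FDTNDATAQYW2` is CloudFront's fixed hosted zone ID for alias targets). Steps 2.2 and 2.3 would be similar payload-building around `update_distribution` and `put_bucket_website`.

```python
def route53_change_batch(dapp_name, distro_domain, zone="dapp.bot."):
    """Change batch for step 2.1: an alias A record pointing the dapp's
    subdomain at the shared CloudFront distribution. Z2FDTNDATAQYW2 is
    CloudFront's fixed hosted zone ID for alias records."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": dapp_name + "." + zone,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": distro_domain,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }
```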

Original Issue from Louis

The per-account limit for S3 buckets can only be raised to a maximum of 1,000. As long as our architecture involves an S3 bucket per-dapp, we can't scale past 1,000 Dapps.

We should be able to serve them all from the same bucket. We may need to hook up API Gateway to serve the right website, or write a Lambda function to serve the right page. Optionally removing Cloudfront from the workflow (and therefore moving SSL termination somewhere else) would be beneficial, since we will run into CloudFront limits too.

john-osullivan commented 5 years ago

Okay, we spent some time today thinking through and scoping out a prospective implementation. We actually ended up coming up with two different shared architectures: one which would work with GitHub Pages customization, another which would be cheaper but explicitly not support any customization.

I am going to update the description of this issue to describe the former implementation. The latter implementation would involve non-trivial adaptation of a sample Dappsmith output to make a generalized any-Contract dapp, so I will document it in a separate issue.

Lsquared13 commented 5 years ago

As written, this is going to run into the CNAMEs-per-distribution limit. We need to think about how we're going to move forward with this.